WorldWideScience

Sample records for large utility networks

  1. Lewis Research Center studies of multiple large wind turbine generators on a utility network

    Science.gov (United States)

    Gilbert, L. J.; Triezenberg, D. M.

    1979-01-01

    A NASA-Lewis program to study the anticipated performance of a wind turbine generator farm on an electric utility network is surveyed. The paper describes the approach of the Lewis Wind Energy Project Office to developing analysis capabilities in the area of wind turbine generator-utility network computer simulations. Attention is given to areas such as the Lewis-Purdue hybrid simulation, an independent stability study, the DOE multiunit plant study, and the WEST simulator. Also covered are the Lewis Mod-2 simulation, including analog simulation of a two-wind-turbine system and comparison with Boeing simulation results, and the gust response of a two-machine model. Finally, future work is noted, and it is concluded that the study shows little interaction between the generators and between the generators and the bus.

  2. Utility unbundling : large consumer's perspective

    International Nuclear Information System (INIS)

    Block, C.

    1997-01-01

    The perspectives of Sunoco, as a large user of electric power, on utility unbundling were presented. Sunoco's Sarnia refinery runs up an energy bill of over $60 million per year for electricity, natural gas (used both as a feedstock and as a fuel), natural gas liquids, and steam. As a large customer, Sunoco advocates unbundling of all services, leaving only the 'pipes and wires' as true monopolies. In its view, regulation distorts the marketplace and prevents the lower prices that would result from competition, as has been seen in the airline and telephone industries. Sunoco's expectation is that in the post-deregulation environment large and small consumers will have a choice of energy supplier, and large consumers will increasingly turn to cogeneration as the most desirable way of meeting their power needs.

  3. Network Bandwidth Utilization Forecast Model on High Bandwidth Network

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wucherl; Sim, Alex

    2014-07-07

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.

  4. Network bandwidth utilization forecast model on high bandwidth networks

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Wuchert (William) [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sim, Alex [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-03-30

    With the increasing number of geographically distributed scientific collaborations and the growth in data size, it has become more challenging for users to achieve the best possible network performance on a shared network. We have developed a forecast model to predict expected bandwidth utilization for high-bandwidth wide area networks. The forecast model can improve the efficiency of resource utilization and the scheduling of data movements on high-bandwidth networks to accommodate the ever increasing data volume of large-scale scientific data applications. A univariate model is developed with STL and ARIMA on SNMP path utilization data. Compared with traditional approaches such as the Box-Jenkins methodology, our forecast model reduces computation time by 83.2%. It also shows resilience against abrupt changes in network usage. The accuracy of the forecast model is within the standard deviation of the monitored measurements.
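
    The forecast procedure described in these two records (STL seasonal decomposition followed by ARIMA on the remainder) can be illustrated with a short sketch. This is a minimal illustration, assuming statsmodels is available; the synthetic hourly series below merely stands in for SNMP path-utilization samples and is not the authors' data.

```python
# Minimal STL + ARIMA forecast sketch (illustrative only, not the paper's code).
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly utilization with a daily cycle plus noise.
idx = pd.date_range("2014-01-01", periods=24 * 28, freq="H")
y = pd.Series(
    0.5 + 0.3 * np.sin(2 * np.pi * np.arange(len(idx)) / 24)
    + 0.05 * np.random.randn(len(idx)),
    index=idx,
)

# STL strips the daily seasonality; ARIMA models trend plus remainder.
stl = STL(y, period=24).fit()
deseasonalized = y - stl.seasonal
arima = ARIMA(deseasonalized, order=(1, 1, 1)).fit()

# Forecast the next 24 hours and add the last observed seasonal cycle back.
horizon = 24
forecast = arima.forecast(steps=horizon) + stl.seasonal[-horizon:].values
print(forecast.head())
```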

  5. Utility applications and broadband networks

    Energy Technology Data Exchange (ETDEWEB)

    Chebra, R.; Taylor, P.

    2003-02-01

    A detailed analytical model of a cable network that would be capable of providing utilities with such services as automatic meter reading, on-line ability to remotely connect and disconnect commodity service, outage notification, tamper detection, direct utility-initiated load control, indirect user-prescribed load control, and user access to energy consumption information, is described. The paper provides an overview of the zones of focus that must be addressed -- market assessment, competitive analysis, product identification, economic model development, assessment of skill set requirements, performance monitoring and tracking, and various technical issues -- to identify any gaps in the organisation's ability to fully develop such a plan. Developers of the model field-tested it in 1995 using some benchmarks that were available at that time, and found that the benefit afforded by direct labor savings was not sufficient to cover the capital expenditure of the advanced utility gateway connected to the cable network. However, since 1995 the unanticipated shift in the consumer value derived from a host of cable-based communications services has rendered these original projections irrelevant. Since national communications organizations concentrate on 'tier one' or at best 'tier two' cities (roughly corresponding to the NFL franchise cities and baseball farm team cities), the unserved rural and suburban areas of the country create a significant digital divide within the population. The developers of the model contend that these unserved areas provide utilities, especially municipal utilities, with an excellent opportunity to step into the gap and provide a full range of services that includes water, electricity and communications. The proposed model provides the foundation upon which utilities can base their ultimate implementation decisions.

  6. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of data available requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  7. Diagnosis method utilizing neural networks

    International Nuclear Information System (INIS)

    Watanabe, K.; Tamayama, K.

    1990-01-01

    Studies have been made on the technique of neural networks, which will be used to identify the cause of a small anomalous state in the reactor coolant system of the ATR (Advanced Thermal Reactor). Three phases of analysis were carried out in this study. First, a 100-second simulation was made to determine how the plant parameters respond after the occurrence of a transient decrease in reactivity, in the flow rate and temperature of feed water, and an increase in the steam flow rate and steam pressure, each of which would produce a decrease of water level in a steam drum of the ATR. Next, the simulation data were analysed using an autoregressive model. From this analysis, a total of 36 coherency functions up to 0.5 Hz in each transient were computed among nine important and detectable plant parameters: neutron flux, flow rate of coolant, steam or feed water, water level in the steam drum, pressure and opening area of the control valve in a steam pipe, feed water temperature, and electrical power. Last, learning of neural networks composed of 96 input, 4-9 hidden and 5 output layer units was done by use of the generalized delta rule, namely a back-propagation algorithm. These convergent computations were continued until the difference between the desired outputs (1 for the direct cause, 0 for the four other ones) and the actual outputs reached less than 10%. (1) Coherency functions were not governed by the decreasing rate of reactivity in the range of 0.41x10⁻² dollar/s to 1.62x10⁻² dollar/s, by the decreasing depth of the feed water temperature in the range of 3 deg C to 10 deg C, or by a change of 10% or less in the three other causes. Change in coherency functions depended only on the type of cause. (2) The direct cause could be discriminated from the other four with an output level of 0.94±0.01. A maximum output height of 0.06 was found among the other four causes. (3) The calculation load, represented as the product of learning times and number of hidden units, did not depend on the

  8. Mapping change in large networks.

    Directory of Open Access Journals (Sweden)

    Martin Rosvall

    2010-01-01

    Change is a fundamental ingredient of interaction patterns in biology, technology, the economy, and science itself: Interactions within and between organisms change; transportation patterns by air, land, and sea all change; the global financial flow changes; and the frontiers of scientific research change. Networks and clustering methods have become important tools to comprehend instances of these large-scale structures, but without methods to distinguish between real trends and noisy data, these approaches are not useful for studying how networks change. Only if we can assign significance to the partitioning of single networks can we distinguish meaningful structural changes from random fluctuations. Here we show that bootstrap resampling accompanied by significance clustering provides a solution to this problem. To connect changing structures with the changing function of networks, we highlight and summarize the significant structural changes with alluvial diagrams and realize de Solla Price's vision of mapping change in science: studying the citation pattern between about 7000 scientific journals over the past decade, we find that neuroscience has transformed from an interdisciplinary specialty to a mature and stand-alone discipline.

  9. Photon spectrometry utilizing neural networks

    International Nuclear Information System (INIS)

    Silveira, R.; Benevides, C.; Lima, F.; Vilela, E.

    2015-01-01

    Having in mind the time spent on the routine work of characterizing the radiation beams used in an ionizing radiation metrology laboratory, the Metrology Service of the Centro Regional de Ciencias Nucleares do Nordeste - CRCN-NE verified the applicability of artificial intelligence (artificial neural networks) to perform spectrometry in photon fields. For this, a multilayer neural network was developed as an application for the classification of patterns in energy, associated with a thermoluminescent dosimetric system (TLD-700 and TLD-600). A set of dosimeters was initially exposed to various well-known mean energies, between 40 keV and 1.2 MeV, coinciding with the beams defined by the ISO 4037 standard, for a dose of 10 mSv in the quantity Hp(10), on a chest phantom (ISO slab phantom), with the purpose of generating a set of training data for the neural network. Subsequently, a new set of dosimeters irradiated at unknown energies was presented to the network in order to test the method. The methodology used in this work was suitable for application in the classification of energy beams, achieving 100% correct classification. (authors)

  10. Risk measures on networks and expected utility

    International Nuclear Information System (INIS)

    Cerqueti, Roy; Lupi, Claudio

    2016-01-01

    In reliability theory, projects are usually evaluated in terms of their riskiness, and decision under risk is often intended as the one-shot-type binary choice of accepting or not accepting the risk. In this paper we elaborate on the concept of risk acceptance and propose a theoretical framework based on network theory. In doing this, we deal with system reliability, where the interconnections among the random quantities involved in the decision process are explicitly taken into account. Furthermore, we explore the conditions to be satisfied for risk-acceptance criteria to be consistent with the axiomatization of standard expected utility theory within the network framework. In accordance with the existing literature, we show that a risk evaluation criterion can be meaningful even if it is not consistent with the standard axiomatization of expected utility, once this is suitably reinterpreted in the light of networks. Finally, we provide some illustrative examples. - Highlights: • We discuss risk acceptance and theoretically develop this theme on the basis of network theory. • We propose an original framework for describing the algebraic structure of the set of networks, when they are viewed as risks. • We introduce risk measures on networks, which induce total orders on the set of networks. • We state conditions on the risk measures on networks to let the induced risk-acceptance criterion be consistent with a new formulation of expected utility theory.

  11. Large scale network-centric distributed systems

    CERN Document Server

    Sarbazi-Azad, Hamid

    2014-01-01

    A highly accessible reference offering a broad range of topics and insights on large scale network-centric distributed systems. Evolving from the fields of high-performance computing and networking, large scale network-centric distributed systems continue to grow as one of the most important topics in computing and communication and many interdisciplinary areas. Dealing with both wired and wireless networks, this book focuses on the design and performance issues of such systems. Large Scale Network-Centric Distributed Systems provides in-depth coverage ranging from ground-level hardware issu

  12. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through the production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy a company's demands or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks in particular are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but few have tried to build an integrated network model connecting the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating a utility network and a hydrogen network. A steam methane reforming (SMR) process is needed to combine the two networks: hydrogen produced by the SMR process, whose raw material is steam vented from the utility network, enters the hydrogen network and fulfills its needs. The proposed model can suggest an optimized configuration of the integrated network, an optimized blueprint, and the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which hosts one of the biggest petrochemical complexes and for which data are available in various papers. The case study shows that the integrated network model yields better optima than previous results obtained by studying the utility network and the hydrogen network individually
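
    As a rough illustration of the kind of deterministic MILP the abstract describes, the toy model below links a steam balance to a hydrogen balance through an SMR unit. It is a hypothetical sketch using the PuLP package; the variable names, costs, and yield figure are invented for illustration and are not taken from the paper.

```python
# Toy integrated utility/hydrogen MILP sketch (illustrative assumptions only).
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary

prob = LpProblem("integrated_utility_hydrogen", LpMinimize)

# Decision variables: steam routed to the SMR unit, hydrogen produced on site,
# hydrogen purchased externally, and whether the SMR unit is built at all.
steam_to_smr = LpVariable("steam_to_smr", lowBound=0)   # t/h
h2_from_smr = LpVariable("h2_from_smr", lowBound=0)     # t/h
h2_purchased = LpVariable("h2_purchased", lowBound=0)   # t/h
build_smr = LpVariable("build_smr", cat=LpBinary)

# Illustrative data: vented steam available, hydrogen demand, yield, and costs.
steam_vent_available = 50.0   # t/h of steam otherwise vented
h2_demand = 4.0               # t/h required by the hydrogen network
yield_h2_per_steam = 0.1      # t H2 per t steam (assumed)
cost_purchase, cost_smr_op, cost_smr_capex = 2000.0, 300.0, 5000.0

# Objective: purchase + operating + (annualized) capital cost.
prob += (cost_purchase * h2_purchased
         + cost_smr_op * h2_from_smr
         + cost_smr_capex * build_smr)

# Constraints linking the two networks.
prob += steam_to_smr <= steam_vent_available * build_smr
prob += h2_from_smr == yield_h2_per_steam * steam_to_smr
prob += h2_from_smr + h2_purchased >= h2_demand

prob.solve()
print({v.name: v.value() for v in prob.variables()})
```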

  13. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    Energy Technology Data Exchange (ETDEWEB)

    Egid, Adin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-11-01

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data we study in this research paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network.
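
    The HHI itself is simple enough to sketch in a few lines. The snippet below is an illustrative computation over hypothetical monthly VPN session counts, not LANL data; a month-over-month drift toward 1.0 would indicate increasingly concentrated activity.

```python
# Herfindahl-Hirschman Index over per-user activity shares (illustrative sketch).
from collections import Counter

def hhi(counts):
    """HHI = sum of squared shares; ranges from 1/N (uniform) to 1 (monopoly)."""
    total = sum(counts.values())
    return sum((c / total) ** 2 for c in counts.values())

# Hypothetical month of VPN session counts per user.
sessions = Counter({"alice": 40, "bob": 35, "carol": 30, "dave": 2})
print(f"HHI = {hhi(sessions):.3f}")  # an upward drift across months would be a
                                     # weak signal of unusually concentrated use
```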

  14. Transcriptional regulation of the carbohydrate utilization network in Thermotoga maritima

    Directory of Open Access Journals (Sweden)

    Dmitry A Rodionov

    2013-08-01

    Hyperthermophilic bacteria from the Thermotogales lineage can produce hydrogen by fermenting a wide range of carbohydrates. Previous experimental studies identified a large fraction of genes committed to carbohydrate degradation and utilization in the model bacterium Thermotoga maritima. Knowledge of these genes enabled comprehensive reconstruction of biochemical pathways comprising the carbohydrate utilization network. However, transcriptional factors (TFs) and regulatory mechanisms driving this network remained largely unknown. Here, we used an integrated approach based on comparative analysis of genomic and transcriptomic data for the reconstruction of the carbohydrate utilization regulatory networks in 11 Thermotogales genomes. We identified DNA-binding motifs and regulons for 19 orthologous TFs in the Thermotogales. The inferred regulatory network in T. maritima contains 181 genes encoding TFs, sugar catabolic enzymes and ABC-family transporters. In contrast to many previously described bacteria, the transcriptional regulation strategy of Thermotoga does not employ global regulatory factors. The reconstructed regulatory network in T. maritima was validated by gene expression profiling on a panel of mono- and disaccharides and by in vitro DNA-binding assays. The observed upregulation of genes involved in catabolism of pectin, trehalose, cellobiose, arabinose, rhamnose, xylose, glucose, galactose, and ribose showed a strong correlation with the UxaR, TreR, BglR, CelR, AraR, RhaR, XylR, GluR, GalR, and RbsR regulons. Ultimately, this study elucidated the transcriptional regulatory network and mechanisms controlling expression of carbohydrate utilization genes in T. maritima. In addition to improving the functional annotations of associated transporters and catabolic enzymes, this research provides novel insights into the evolution of regulatory networks in Thermotogales.

  15. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    The subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its...

  16. On-demand Overlay Networks for Large Scientific Data Transfers

    Energy Technology Data Exchange (ETDEWEB)

    Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Jackson, Keith [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Kissel, Ezra [Univ. of Delaware, Newark, DE (United States); Swany, D. Martin [Univ. of Delaware, Newark, DE (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2009-10-12

    Large scale scientific data transfers are central to scientific processes. Data from large experimental facilities have to be moved to local institutions for analysis, or data often need to be moved between local clusters and large supercomputing centers. In this paper, we propose and evaluate a network overlay architecture to enable high-throughput, on-demand, coordinated data transfers over wide-area networks. Our work leverages Phoebus and the On-demand Secure Circuits and Advance Reservation System (OSCARS) to provide high performance wide-area network connections. OSCARS enables dynamic provisioning of network paths with guaranteed bandwidth, and Phoebus enables the coordination and effective utilization of the OSCARS network paths. Our evaluation shows that this approach leads to improved end-to-end data transfer throughput with minimal overheads. The achieved throughput using our overlay was limited only by the ability of the end hosts to sink the data.

  17. Utilizing Weak Indicators to Detect Anomalous Behaviors in Networks

    Energy Technology Data Exchange (ETDEWEB)

    Egid, Adin Ezra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-06

    We consider the use of a novel weak indicator alongside more commonly used weak indicators to help detect anomalous behavior in a large computer network. The network data we study in this research paper concern remote log-in information (Virtual Private Network, or VPN, sessions) from the internal network of Los Alamos National Laboratory (LANL). The novel indicator we are utilizing is something which, while novel in its application to data science/cyber security research, is a concept borrowed from the business world. The Herfindahl-Hirschman Index (HHI) is a computationally trivial index which provides a useful heuristic for regulatory agencies to ascertain the relative competitiveness of a particular industry. Using this index as a lagging indicator in the monthly format we have studied could help to detect anomalous behavior by a particular user or small set of users on the network. Additionally, we study indicators related to the speed of movement of a user based on the physical location of their current and previous logins. This data can be ascertained from the IP addresses of the users, and is likely very similar to the fraud detection schemes regularly utilized by credit card networks to detect anomalous activity. In future work we would look to find a way to combine these indicators for use as an internal fraud detection system.
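
    The login-velocity indicator mentioned at the end of this record is commonly implemented as an "impossible travel" check: the great-circle distance between consecutive login locations divided by the elapsed time. The sketch below is a generic illustration with assumed fields and thresholds, not the authors' implementation.

```python
# "Impossible travel" speed check between consecutive logins (illustrative sketch).
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in km."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def implied_speed_kmh(prev_login, curr_login):
    """Each login is a hypothetical (timestamp_hours, lat, lon) tuple."""
    t1, lat1, lon1 = prev_login
    t2, lat2, lon2 = curr_login
    hours = max(t2 - t1, 1e-6)
    return haversine_km(lat1, lon1, lat2, lon2) / hours

# Example: a Los Alamos login followed by a Moscow login one hour later.
speed = implied_speed_kmh((0.0, 35.88, -106.30), (1.0, 55.75, 37.62))
print(speed > 1000.0)  # True -> faster than any commercial flight, flag the user
```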

  18. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui; Zhao, Junzhou; Ribeiro, Bruno; Lui, John C. S.; Towsley, Don; Guan, Xiaohong

    2018-01-01

    querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling

  19. Practical characterization of large networks using neighborhood information

    KAUST Repository

    Wang, Pinghui

    2018-02-14

    Characterizing large complex networks, such as online social networks, through node querying is a challenging task. Network service providers often impose severe constraints on the query rate, hence limiting the sample size to a small fraction of the total network of interest. Various ad hoc subgraph sampling methods have been proposed, but many of them give biased estimates with no theoretical basis for their accuracy. In this work, we focus on developing sampling methods for large networks where querying a node also reveals partial structural information about its neighbors. Our methods are optimized for NoSQL graph databases (if the database can be accessed directly), or utilize Web APIs available on most major large networks for graph sampling. We show that our sampling method has provable convergence guarantees for being an unbiased estimator, and it is more accurate than state-of-the-art methods. We also explore methods to uncover shortest paths between a subset of nodes and to detect high degree nodes by sampling only a small fraction of the network of interest. Our results demonstrate that utilizing neighborhood information yields methods that are two orders of magnitude faster than state-of-the-art methods.

  20. Unified Model for Generation Complex Networks with Utility Preferential Attachment

    International Nuclear Information System (INIS)

    Wu Jianjun; Gao Ziyou; Sun Huijun

    2006-01-01

    In this paper, based on utility preferential attachment, we propose a new unified model to generate different network topologies such as scale-free, small-world and random networks. Moreover, a new network structure named the super scale network is found, which exhibits a monopoly characteristic in our simulation experiments. Finally, the characteristics of this new network are given.
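
    A toy generator in the spirit of utility-based preferential attachment is sketched below: each new node links to the m existing nodes with the highest "utility", defined here as a weighted mix of normalized degree and a random profit term. The mixing weight alpha and this utility definition are assumptions for illustration, not the authors' exact model.

```python
# Toy utility-preferential-attachment generator (illustrative, not the paper's model).
import random
import networkx as nx

def utility_pa_graph(n, m=2, alpha=0.7, seed=0):
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)          # small seed graph
    for new in range(m + 1, n):
        degs = dict(G.degree())
        max_deg = max(degs.values())
        # Utility: alpha * normalized degree + (1 - alpha) * random "profit".
        utility = {v: alpha * degs[v] / max_deg + (1 - alpha) * rng.random()
                   for v in G.nodes()}
        targets = sorted(G.nodes(), key=lambda v: utility[v], reverse=True)[:m]
        G.add_edges_from((new, t) for t in targets)
    return G

G = utility_pa_graph(200)
print(nx.density(G), max(d for _, d in G.degree()))
```

    Sweeping alpha between 0 and 1 shifts the generated topology between an essentially random attachment regime and a strongly degree-driven, scale-free-like regime.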

  1. Opportunistic spectrum utilization in vehicular communication networks

    CERN Document Server

    Cheng, Nan

    2016-01-01

    This brief examines current research on improving Vehicular Networks (VANETs), examining spectrum scarcity due to the dramatic growth of mobile data traffic and the limited bandwidth of dedicated vehicular communication bands, and the use of opportunistic spectrum bands to mitigate congestion. It reviews existing literature on the use of opportunistic spectrum bands for VANETs, including licensed and unlicensed spectrum bands and a variety of related technologies, such as cognitive radio, WiFi and device-to-device communications. Focused on analyzing spectrum characteristics, designing efficient spectrum exploitation schemes, and evaluating the data delivery performance when utilizing different opportunistic spectrum bands, the results presented in this brief provide valuable insights on improving the design and deployment of future VANETs.

  2. A new international role for large electric utilities

    International Nuclear Information System (INIS)

    Johnson, P. M.

    1993-01-01

    Population pressures leading to changes in India, China, and South America during the next twenty-five years, and the resulting revolutionary shifts in the world's major economic axes, such as growth in populations, in demand for consumer goods, in production capacities, and in energy demand, will demand greater international cooperation, according to a former premier of the province of Quebec. He stressed, in particular, the contributions that large electrical utilities can make to this world-wide transformation. He predicted the possibility of privatization and an extended role in international energy activities for Hydro-Quebec as a result of these major demographic and economic changes in Asia and South America, and the consequent decline in the economies of the G7 countries. Major capital investments abroad, and the formation of networks of domestic and foreign partnerships in the developing world, were predicted to be the key to the survival and continuing success not only of Hydro-Quebec, but of all major utility companies

  3. Evolution of a large online social network

    International Nuclear Information System (INIS)

    Hu Haibo; Wang Xiaofan

    2009-01-01

    Although there has recently been extensive research on collaborative networks and online communities, there is very limited knowledge about the actual evolution of online social networks (OSN). In this Letter, we study the structural evolution of a large online virtual community. We find that the scale growth of the OSN shows a non-trivial S shape, which may provide a proper exemplification of the Bass diffusion model. We reveal that the evolution of many network properties, such as density, clustering, heterogeneity and modularity, shows a non-monotone feature, and that a shrinking phenomenon occurs in the path length and diameter of the network. Furthermore, the OSN underwent a transition from the degree assortativity characteristic of collaborative networks to the degree disassortativity characteristic of many OSNs. Our study has revealed the evolutionary pattern of interpersonal interactions in a specific population and provided a valuable platform for theoretical modeling and further analysis

  4. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    Science.gov (United States)

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become a focus of the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large scale optical networks, which is very important for technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can be taken as a proof for future network deployment.

  5. Fast unfolding of communities in large networks

    International Nuclear Information System (INIS)

    Blondel, Vincent D; Guillaume, Jean-Loup; Lambiotte, Renaud; Lefebvre, Etienne

    2008-01-01

    We propose a simple method to extract the community structure of large networks. Our method is a heuristic based on modularity optimization. It is shown to outperform all other known community detection methods in terms of computation time. Moreover, the quality of the communities detected is very good, as measured by the so-called modularity. This is shown first by identifying language communities in a Belgian mobile phone network of 2 million customers and by analysing a web graph of 118 million nodes and more than one billion links. The accuracy of our algorithm is also verified on ad hoc modular networks.
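
    The modularity-optimization heuristic described here is widely available in graph libraries. The sketch below runs the Louvain-style community detection shipped with networkx (assuming networkx 2.8 or later) on a small stand-in graph and reports the resulting modularity.

```python
# Modularity-based (Louvain-style) community detection, illustrative sketch.
import networkx as nx

G = nx.les_miserables_graph()  # small stand-in for a large call or web graph
communities = nx.community.louvain_communities(G, seed=42)

print(f"{len(communities)} communities, "
      f"modularity = {nx.community.modularity(G, communities):.3f}")
```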

  6. Large deformation behavior of fat crystal networks

    NARCIS (Netherlands)

    Kloek, W.; Vliet, van T.; Walstra, P.

    2005-01-01

    Compression and wire-cutting experiments on dispersions of fully hydrogenated palm oil in sunflower oil with varying fraction solid fat were carried out to establish which parameters are important for the large deformation behavior of fat crystal networks. Compression experiments showed that the

  7. Utilization of Large Cohesive Interface Elements for Delamination Simulation

    DEFF Research Database (Denmark)

    Bak, Brian Lau Verndal; Lund, Erik

    2012-01-01

    This paper describes the difficulties of utilizing large interface elements in delamination simulation. Solutions to increase the size of applicable interface elements are described and cover numerical integration of the element and modifications of the cohesive law....

  8. Combustion and heat transfer monitoring in large utility boilers

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Arauzo, I.; Valero, A. [University of Zaragoza, Zaragoza (Spain). Center of Power Plant Efficiency Research

    2001-05-01

    The optimization and control of complex energy systems can presently take advantage of highly sophisticated engineering techniques, such as CFD calculations and correlation algorithms based on artificial intelligence concepts. However, the most advanced numerical prediction still relies on strong simplifications of the exact transport equations. Likewise, the output of a neural network is actually based on a long record of observed past responses. Therefore, the implementation of modern diagnosis tools generally requires a great amount of experimental data, in order to achieve an adequate validation of the method. Consequently, a sort of paradox results, since the validation data cannot be less accurate or complete than the predictions sought. To remedy this situation, there are several alternatives. In contrast to laboratory work or well-instrumented pilot plants, the information obtained in the full scale installation offers the advantages of realism and low cost. This paper presents the case-study of a large, pulverized-coal fired utility boiler, discussing both the evaluation of customary measurements and the adoption of supplementary instruments. The generic outcome is that it is possible to significantly improve the knowledge on combustion and heat transfer performance within a reasonable cost. Based on the experience and results, a general methodology is outlined to cope with this kind of analysis.

  9. Combustion and heat transfer monitoring in large utility boilers

    Energy Technology Data Exchange (ETDEWEB)

    Ignacio Diez, L.; Cortes, C.; Arauzo, I.; Valero, A. [Zaragoza Univ., Centro de Investigacion del rendimiento de Centrales Electricas (CIRCE) (Spain)

    2001-05-01

    As a result of the quick and vast development of instrumentation and software capabilities, the optimization and control of complex energy systems can presently take advantage of highly sophisticated engineering techniques, such as CFD calculations and correlation algorithms based on artificial intelligence concepts. However, the most advanced numerical prediction still relies on strong simplifications of the exact transport equations. Likewise, the output of a neural network, or any other refined data-processing device, is actually based on a long record of observed past responses. Therefore, the implementation of modern diagnosis tools generally requires a great amount of experimental data, in order to achieve an adequate validation of the method. Consequently, a sort of paradox results, since the validation data cannot be less accurate or complete than the predictions sought. To remedy this situation, there are several alternatives. In contrast to laboratory work or well-instrumented pilot plants, the information obtained in the full scale installation offers the advantages of realism and low cost. This paper presents the case-study of a large, pulverized-coal fired utility boiler, discussing both the evaluation of customary measurements and the adoption of supplementary instruments. The generic outcome is that it is possible to significantly improve the knowledge on combustion and heat transfer performance within a reasonable cost. Based on the experience and results, a general methodology is outlined to cope with this kind of analysis. (author)

  10. Integration of SPS with utility system networks

    Energy Technology Data Exchange (ETDEWEB)

    Kaupang, B.M.

    1980-06-01

    This paper will discuss the integration of SPS power in electric utility power systems. Specifically treated will be the nature of the power output variations from the spacecraft to the rectenna, the operational characteristics of the rectenna power and the impacts on the electric utility system from utilizing SPS power to serve part of the system load.

  11. Safety evaluation of large ventilation networks

    International Nuclear Information System (INIS)

    Barrocas, M.; Pruchon, P.; Robin, J.P.; Rouyer, J.L.; Salmon, P.

    1981-01-01

    For large ventilation networks, it is necessary to make a safety evaluation of their responses to perturbations such as blower failure, unexpected transfers, and local pressurization. This evaluation is not easy to perform because of the many interrelationships between the different parts of the networks, interrelationships arising from the circulation of workers and materials between cells and rooms and from the usefulness of air transfers through zones of different classifications. This evaluation is all the more necessary since new imperatives in energy savings push for minimizing the air flows, which tends to render the network more sensitive to perturbations. A program to evaluate safety has been developed by the Service de Protection Technique in cooperation with operators and designers of big nuclear facilities, and the first applications presented here show the weak points, from the safety viewpoint, of the installation studied

  12. Shipboard Calibration Network Extension Utilizing COTS Products

    Science.gov (United States)

    2014-09-01

    ...available at the location of the sensor to be calibrated. With the wide adoption of the wireless local area network (WLAN) protocol, IEEE 802.11 standard devices have been proven to provide a stable, wireless infrastructure for many applications. The fast setup, wire-free configuration and...

  13. The Commercial Utilization of Social Networks

    OpenAIRE

    Adlaf, Petr

    2011-01-01

    The presented bachelor's thesis deals with advertising. It answers the question of what advertising is, why firms use advertising and what its benefits are. It concentrates especially on Internet advertising presented through social networks. These social networks have come to occupy a significant position on the Internet during the last five years and offer new possibilities in terms of creating advertising campaigns (hypertargeting). The thesis presents the division and comparison o...

  14. Integration of SPS with utility system networks

    Science.gov (United States)

    Kaupang, B. M.

    1980-01-01

    The integration of Satellite Power System (SPS) power in electric utility power systems is discussed. Specifically, the nature of the power output variations from the spacecraft to the rectenna, the operational characteristics of the rectenna power, and the impacts on the electric utility system from utilizing SPS power to serve part of the system load are treated. It is concluded that if RF beam control is an acceptable method of power control, and the site distribution of SPS rectennas does not cause a very high local penetration (40 to 50%), SPS may be integrated into electric utility systems with few negative impacts. Increased regulating duty on the conventional generation, and a potential impact on system reliability for SPS penetration in excess of about 25%, appear to be the two areas of concern.

  15. Communities in Large Networks: Identification and Ranking

    DEFF Research Database (Denmark)

    Olsen, Martin

    2008-01-01

    We study the problem of identifying and ranking the members of a community in a very large network with link analysis only, given a set of representatives of the community. We define the concept of a community justified by a formal analysis of a simple model of the evolution of a directed graph. ...... and its immediate surroundings. The members are ranked with a “local” variant of the PageRank algorithm. Results are reported from successful experiments on identifying and ranking Danish Computer Science sites and Danish Chess pages using only a few representatives....
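
    One common way to realize a "local" PageRank of this kind is personalized PageRank with the restart distribution concentrated on the known representatives, so that scores stay high only in their surroundings. The sketch below illustrates that idea on a synthetic graph; it is a generic stand-in, not necessarily the thesis' exact algorithm.

```python
# Personalized PageRank seeded at community representatives (illustrative sketch).
import networkx as nx

G = nx.scale_free_graph(500, seed=1)   # toy directed web-like graph
G = nx.DiGraph(G)                      # collapse parallel edges
representatives = [0, 1, 2]            # hypothetical known community members

# Restart only at the representatives, keeping scores local to their surroundings.
personalization = {v: (1.0 if v in representatives else 0.0) for v in G}
scores = nx.pagerank(G, alpha=0.85, personalization=personalization)

top = sorted(scores, key=scores.get, reverse=True)[:10]
print(top)  # candidate community members, ranked
```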

  16. Utilization of social networks in education and their impact on ...

    African Journals Online (AJOL)

    Utilization of social networks in education and their impact on knowledge acquisition ... Developed countries are known to be quick adopters of modern advanced ... in education changing traditional systems to more open and interactive ones.

  17. Comparative analysis of large biomass & coal co-utilization units

    NARCIS (Netherlands)

    Liszka, M.; Nowak, G.; Ptasinski, K.J.; Favrat, D.; Marechal, F.

    2010-01-01

    The co-utilization of coal and biomass in large power units is considered in many countries (e.g. Poland) as a fast and effective way of increasing the renewable energy share in the fuel mix. Such a method of biomass use is especially suitable for power systems where solid fuels (hard coal, lignite) are

  18. Networking Micro-Processors for Effective Computer Utilization in Nursing

    OpenAIRE

    Mangaroo, Jewellean; Smith, Bob; Glasser, Jay; Littell, Arthur; Saba, Virginia

    1982-01-01

    Networking as a social entity has important implications for maximizing computer resources for improved utilization in nursing. This paper describes one process of networking of complementary resources at three institutions: Prairie View A&M University, Texas A&M University and the University of Texas School of Public Health, which has effected greater utilization of computers at the college. The results achieved in this project should have implications for nurses, users, and consumers in...

  19. Spectral Methods for Immunization of Large Networks

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad

    2017-11-01

    Given a network of nodes, minimizing the spread of a contagion using a limited budget is a well-studied problem with applications in network security, viral marketing, social networks, and public health. In real graphs, a virus may infect a node which in turn infects its neighbour nodes, and this may trigger an epidemic in the whole graph. The goal thus is to select the best k nodes (budget constraint) that are immunized (vaccinated, screened, filtered) so that the remaining graph is less prone to the epidemic. It is known that the problem is, in all practical models, computationally intractable even for moderate sized graphs. In this paper we employ ideas from spectral graph theory to define the relevance and importance of nodes. Using novel graph theoretic techniques, we then design an efficient approximation algorithm to immunize the graph. Theoretical guarantees on the running time of our algorithm show that it is more efficient than any other known solution in the literature. We test the performance of our algorithm on several real world graphs. Experiments show that our algorithm scales well for large graphs and outperforms state of the art algorithms both in quality (containment of the epidemic) and efficiency (runtime and space complexity).
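
    A much simplified spectral baseline in the spirit of this record is to immunize the k nodes with the largest entries in the leading eigenvector of the adjacency matrix, since the largest eigenvalue governs the epidemic threshold in many models. The sketch below illustrates that heuristic on a synthetic graph; it is not the authors' approximation algorithm.

```python
# Spectral top-k immunization heuristic (illustrative baseline, not the paper's method).
import networkx as nx
import numpy as np

def spectral_immunize(G, k):
    # Rank nodes by their entry in the leading eigenvector of the adjacency matrix.
    centrality = nx.eigenvector_centrality_numpy(G)
    return sorted(centrality, key=centrality.get, reverse=True)[:k]

def largest_eigenvalue(g):
    return float(max(abs(np.linalg.eigvals(nx.to_numpy_array(g)))))

G = nx.barabasi_albert_graph(1000, 3, seed=7)
removed = spectral_immunize(G, k=20)

H = G.copy()
H.remove_nodes_from(removed)
# A lower largest eigenvalue means the remaining graph is less prone to epidemics.
print(f"largest eigenvalue: {largest_eigenvalue(G):.2f} -> {largest_eigenvalue(H):.2f}")
```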

  20. VIRTUAL SOCIAL NETWORKS AND THEIR UTILIZATION FOR PROMOTION

    OpenAIRE

    Robert Stefko; Peter Dorcak; Frantisek Pollak

    2011-01-01

    The article deals with current knowledge of social media, with a focus on social networks. Social media offer great opportunities for businesses. However, in order to use these new business channels in the most effective way, businesses need relevant information. The main purpose of this article is to evaluate the state of utilization of social networks by businesses as well as by domestic and foreign customers. The aim is also to point out the importance of networking as a tool for acquiring an...

  1. A Holistic Management Architecture for Large-Scale Adaptive Networks

    National Research Council Canada - National Science Library

    Clement, Michael R

    2007-01-01

    This thesis extends the traditional notion of network management as an indicator of resource availability and utilization into a systemic model of resource requirements, capabilities, and adaptable...

  2. Maintenance Management in Network Utilities Framework and Practical Implementation

    CERN Document Server

    Gómez Fernández, Juan F

    2012-01-01

    In order to satisfy the needs of their customers, network utilities require specially developed maintenance management capabilities. Maintenance management information systems are essential to ensure control, gain knowledge and improve decision-making in companies dealing with network infrastructure, such as distribution of gas, water, electricity and telecommunications. Maintenance Management in Network Utilities studies the specific characteristics of maintenance management in this sector to offer a practical approach to defining and implementing the best management practices and suitable frameworks. Divided into three major sections, Maintenance Management in Network Utilities defines a series of stages which can be followed to manage maintenance frameworks properly. Different case studies provide detailed descriptions which illustrate the experience in real company situations. An introduction to the concepts is followed by main sections including: • A Literature Review: covering the basic concepts an...

  3. Network governance in electricity distribution: Public utility or commodity

    International Nuclear Information System (INIS)

    Kuenneke, Rolf; Fens, Theo

    2005-01-01

    This paper addresses the question of whether the operation and management of electricity distribution networks in a liberalized market environment evolves into a market-driven commodity business or might be perceived as a genuine public utility task. A framework is developed to classify and compare different institutional arrangements according to the public utility model and the commodity model. These models are exemplified for the case of the Dutch electricity sector. It appears that the institutional organization of electricity distribution networks is at the crossroads of two very different institutional development paths. They develop towards a commercial business if the system characteristics of the electricity sector remain basically unchanged from the traditional situation. If, however, innovative technological developments allow for a decentralization and decomposition of the electricity system, distribution networks might be operated as public utilities while other energy services are exploited commercially. (Author)

  4. Utility communication networks and services specification, deployment and operation

    CERN Document Server

    2017-01-01

    This CIGRE green book begins by addressing the specification and provision of communication services in the context of operational applications for electrical power utilities, before subsequently providing guidelines on the deployment or transformation of networks to deliver these specific communication services. Lastly, it demonstrates how these networks and their services can be monitored, operated, and maintained to ensure that the requisite high level of service quality is consistently achieved.

  5. Public utilities in networks: competition perspectives and new regulations

    International Nuclear Information System (INIS)

    Bergougnoux, J.

    2000-01-01

    This report first reviews the historical specificities, the present-day situation and the prospects for evolution of network public utilities with respect to the 1996 European directive and to the four sectors of electricity, gas, railway transport and postal services. It then considers the new institutions and regulation procedures to be implemented in order to reconcile the public utility mission with fair competition. (J.S.)

  6. Small sum privacy and large sum utility in data publishing.

    Science.gov (United States)

    Fu, Ada Wai-Chee; Wang, Ke; Wong, Raymond Chi-Wing; Wang, Jia; Jiang, Minhao

    2014-08-01

    While the study of privacy preserving data publishing has drawn a lot of interest, some recent work has shown that existing mechanisms do not limit all inferences about individuals. This paper is a positive note in response to this finding. We point out that not all inference attacks should be countered, in contrast to all existing works known to us, and based on this we propose a model called SPLU. This model protects sensitive information, by which we refer to answers for aggregate queries with small sums, while queries with large sums are answered with higher accuracy. Using SPLU, we introduce a sanitization algorithm to protect data while maintaining high data utility for queries with large sums. Empirical results show that our method behaves as desired. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Pythoscape: A framework for generation of large protein similarity networks

    OpenAIRE

    Babbitt, Patricia; Barber, AE; Babbitt, PC

    2012-01-01

    Pythoscape is a framework implemented in Python for processing large protein similarity networks for visualization in other software packages. Protein similarity networks are graphical representations of sequence, structural and other similarities among pr

  8. Signaling in large-scale neural networks

    DEFF Research Database (Denmark)

    Berg, Rune W; Hounsgaard, Jørn

    2009-01-01

    We examine the recent finding that neurons in spinal motor circuits enter a high conductance state during functional network activity. The underlying concomitant increase in random inhibitory and excitatory synaptic activity leads to stochastic signal processing. The possible advantages of this metabolically costly organization are analyzed by comparing with synaptically less intense networks driven by the intrinsic response properties of the network neurons.

  9. Comparative Analysis of Different Protocols to Manage Large Scale Networks

    OpenAIRE

    Anil Rao Pimplapure; Dr Jayant Dubey; Prashant Sen

    2013-01-01

    In recent years, the number, complexity and size of large scale networks have increased. The best example of a large scale network is the Internet, and more recent ones are data centers in cloud environments. In this context, handling management tasks such as traffic monitoring, security and performance optimization is a big task for the network administrator. This research reports a study of different protocols, i.e. conventional protocols like the Simple Network Management Protocol and the newer Gossip bas...

  10. Investigation on network utilization efficiency and image transmission time for the PACS network

    International Nuclear Information System (INIS)

    Tawara, K.; Nishihara, E.; Komatsu, K.I.

    1987-01-01

    The authors investigated the following features of a PACS network: (1) network utilization efficiency and (2) image transmission time. They varied the following parameters, on which the two items shown above depend: (1) transfer rate between imaging equipment and network (10 kB/second-8 MB/second), (2) network transmission speed (100 kB/second-50 MB/second), (3) packet length (10 kB-4 MB), and (4) message length (image data) (64 kB-4 MB). As a result, a conventional-type network cannot meet the needs of PACS. To solve this problem, the authors propose a multiplexed network that consists of a high-speed network for image transmission and a conventional-speed control network for commands and shorter messages. If the packet length of the image network is designed to be variable, an optimum packet length can be chosen for image transmission
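
    The packet-length trade-off the authors study can be shown with back-of-the-envelope arithmetic: for a fixed per-packet overhead, small packets inflate the transfer time of a large image. The numbers below are illustrative assumptions, not the paper's measurements.

```python
# Rough image transfer time as a function of packet size (illustrative arithmetic).
def transfer_time_s(image_bytes, link_bytes_per_s, packet_bytes, per_packet_overhead_s):
    packets = -(-image_bytes // packet_bytes)  # ceiling division
    return image_bytes / link_bytes_per_s + packets * per_packet_overhead_s

image = 4 * 1024 * 1024           # 4 MB image
link = 10 * 1024 * 1024           # ~10 MB/s network
for pkt in (10 * 1024, 64 * 1024, 1024 * 1024):
    t = transfer_time_s(image, link, pkt, per_packet_overhead_s=0.002)
    print(f"packet {pkt // 1024:5d} kB -> {t:.3f} s")
```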

  11. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only the classic Boolean visibility that is usually determined within GIS, but also so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  12. Group Centric Networking: Large Scale Over the Air Testing of Group Centric Networking

    Science.gov (United States)

    2016-11-01

    Large Scale Over-the-Air Testing of Group Centric Networking. Logan Mercer, Greg Kuperman, Andrew Hunter, Brian Proulx, MIT Lincoln Laboratory. ...performance of Group Centric Networking (GCN), a networking protocol developed for robust and scalable communications in lossy networks where users are... devices, and the ad-hoc nature of the network. Group Centric Networking (GCN) is a proposed networking protocol that addresses challenges specific to...

  13. Theoretical Guidelines for the Utilization of Instructional Social Networking Websites

    Directory of Open Access Journals (Sweden)

    Ilker YAKIN

    2015-10-01

    ...interaction and communication technologies. Indeed, there has been an emerging movement in interaction and communication technologies. More specifically, the growth of Web 2.0 technologies has acted as a catalyst for change in the disciplines of education. Social networking websites have gained popularity in recent years; therefore, many research studies have been conducted to explain how social networking websites are used for instructional purposes. For best practices, it is essential to understand the theories associated with social networking studies, because related theories for any subject may provide insights and guidelines for professionals and researchers. This theoretical paper was designed to offer a road map through the literature in relation to the utilization of social networking websites by presenting the main understandings of theories associated with social networking. The uses and gratifications theory, social network theory, connectivism, and constructivism were selected to serve as a basis for designing social networking studies for instructional purposes. Moreover, common attributes of the theories and specific application areas are also discussed. This paper contributes to this emerging movement by explaining the role of these theories for researchers and practitioners to find ways to beneficially integrate them into their future research endeavors

  14. Quantum communication network utilizing quadripartite entangled states of optical field

    International Nuclear Information System (INIS)

    Shen Heng; Su Xiaolong; Jia Xiaojun; Xie Changde

    2009-01-01

    We propose two types of quantum dense coding communication networks with optical continuous variables, in which a quadripartite entangled state of the optical field with totally three-party correlations of quadrature amplitudes is utilized. In the networks, the exchange of information between any two participants can be manipulated by one or two of the remaining participants. The channel capacities for a variety of communication protocols are numerically calculated. Due to the fact that the quadripartite entangled states applied in the communication systems have been successfully prepared already in the laboratory, the proposed schemes are experimentally accessible at present.

  15. Leveraging network utility management practices for regulatory purposes

    International Nuclear Information System (INIS)

    2009-11-01

    Electric utilities around the globe are entering a phase where they must modernize and implement smart grid technologies. In order to optimize system architecture, asset replacement, and future operating costs, the utilities must implement robust and flexible asset management structures. This report discussed the ways in which regulators assess investment plans. It focused on the implicit or explicit use of an asset management approach, including principles; processes; inputs and outputs; decision-making criteria and prioritization methods. The Ontario Energy Board staff were familiarized with the principles and objectives of established and emerging asset management processes and underlying analytic processes, systems and tools in order to ensure that investment information provided by network utilities regarding rates and other applications could be evaluated effectively. Specifically, the report discussed the need for and importance of asset management and provided further details of international markets and their regulatory approaches to asset management. The report also discussed regulatory approaches for review of asset management underlying investment plans as well as an overview of international regulatory practice for review of network utility asset management. It was concluded that options for strengthening regulatory guidance and assessment included utilizing appropriate and effective benchmarking to assess, promote and provide incentives for best practices and steer clear of potential perverse incentives. 21 tabs., 17 figs., 1 appendix.

  16. Displacement and deformation measurement for large structures by camera network

    Science.gov (United States)

    Shang, Yang; Yu, Qifeng; Yang, Zhen; Xu, Zhiqiang; Zhang, Xiaohu

    2014-03-01

    A displacement and deformation measurement method for large structures by a series-parallel connection camera network is presented. By taking the dynamic monitoring of a large-scale crane in lifting operation as an example, a series-parallel connection camera network is designed, and the displacement and deformation measurement method by using this series-parallel connection camera network is studied. The movement range of the crane body is small, and that of the crane arm is large. The displacement of the crane body, the displacement of the crane arm relative to the body and the deformation of the arm are measured. Compared with a pure series or parallel connection camera network, the designed series-parallel connection camera network can be used to measure not only the movement and displacement of a large structure but also the relative movement and deformation of some interesting parts of the large structure by a relatively simple optical measurement system.

  17. Fiber fault location utilizing traffic signal in optical network.

    Science.gov (United States)

    Zhao, Tong; Wang, Anbang; Wang, Yuncai; Zhang, Mingjiang; Chang, Xiaoming; Xiong, Lijuan; Hao, Yi

    2013-10-07

    We propose and experimentally demonstrate a method for fault location in optical communication network. This method utilizes the traffic signal transmitted across the network as probe signal, and then locates the fault by correlation technique. Compared with conventional techniques, our method has a simple structure and low operation expenditure, because no additional device is used, such as light source, modulator and signal generator. The correlation detection in this method overcomes the tradeoff between spatial resolution and measurement range in pulse ranging technique. Moreover, signal extraction process can improve the location result considerably. Experimental results show that we achieve a spatial resolution of 8 cm and detection range of over 23 km with -8-dBm mean launched power in optical network based on synchronous digital hierarchy protocols.
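
    As a rough sketch of the correlation principle described above (the transmitted traffic itself serves as the probe, and the lag that maximises the cross-correlation with the back-reflected signal gives the round-trip time to the fault), the following Python snippet uses synthetic data; the group index, sample rate, and signal lengths are illustrative assumptions, and the authors' signal-extraction step is not reproduced.

    import numpy as np

    C = 3e8          # speed of light in vacuum (m/s)
    N_GROUP = 1.468  # assumed group refractive index of the fibre

    def locate_reflection(probe, echo, sample_rate):
        """Estimate the one-way distance to a reflective fault.

        probe       : transmitted traffic signal (1-D array)
        echo        : back-reflected signal recorded at the same fibre end
        sample_rate : samples per second
        The echo is modelled as a delayed, attenuated copy of the probe; the
        lag that maximises the cross-correlation gives the round-trip time.
        """
        corr = np.correlate(echo, probe, mode="full")
        lag = corr.argmax() - (len(probe) - 1)        # delay in samples
        round_trip = lag / sample_rate                # seconds
        return 0.5 * round_trip * C / N_GROUP         # one-way distance (m)

    # Synthetic example: pseudo-random traffic, fault about 1.5 km away.
    rate = 1e9                                        # 1 GS/s
    rng = np.random.default_rng(0)
    probe = rng.standard_normal(20_000)
    delay = int(round(2 * 1_500 * N_GROUP / C * rate))  # round-trip delay, samples
    echo = np.concatenate([np.zeros(delay), 0.05 * probe])[: len(probe)]
    echo += 0.01 * rng.standard_normal(len(probe))
    print(f"estimated fault distance: {locate_reflection(probe, echo, rate):.1f} m")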

  18. Utilization of large electromagnetic pumps in the fast breeder reactors

    International Nuclear Information System (INIS)

    Deverge, C.; Lefrere, J.P.; Peturaud, P.; Sauvage, M.

    1984-04-01

    After an overview of annular induction electromagnetic pumps and the dimensioning methods usually employed, the development of these components for an integrated fast breeder reactor is considered: - utilization of cooled EMPs in the intermediate circuit, - utilization of immersed pumps, coupled with the intermediate exchanger, for the primary pumping; dimensioning, energetic aspects, and effects on the power plant geometrical configurations are addressed. [fr]

  19. Distributed Emulation in Support of Large Networks

    Science.gov (United States)

    2016-06-01

    ...environment, modifications to a network, protocol, or model can be executed – and the effects measured – without affecting real-world users or services...produce their results when analyzing performance of Long Term Evolution (LTE) gateways [3]. Many research scenarios allow problems to be represented

  20. Measuring structural similarity in large online networks.

    Science.gov (United States)

    Shi, Yongren; Macy, Michael

    2016-09-01

    Structural similarity based on bipartite graphs can be used to detect meaningful communities, but the networks have been tiny compared to massive online networks. Scalability is important in applications involving tens of millions of individuals with highly skewed degree distributions. Simulation analysis holding underlying similarity constant shows that two widely used measures - Jaccard index and cosine similarity - are biased by the distribution of out-degree in web-scale networks. However, an alternative measure, the Standardized Co-incident Ratio (SCR), is unbiased. We apply SCR to members of Congress, musical artists, and professional sports teams to show how massive co-following on Twitter can be used to map meaningful affiliations among cultural entities, even in the absence of direct connections to one another. Our results show how structural similarity can be used to map cultural alignments and demonstrate the potential usefulness of social media data in the study of culture, politics, and organizations across the social and behavioral sciences. Copyright © 2016 Elsevier Inc. All rights reserved.
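
    The Standardized Co-incident Ratio itself is specific to the paper and is not reproduced here, but the two baseline measures it is compared against can be sketched directly from co-following sets; the toy follower data below is purely illustrative.

    def jaccard(a, b):
        """Jaccard index of two follower sets: |A intersect B| / |A union B|."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def cosine(a, b):
        """Cosine similarity of two follower sets treated as binary vectors:
        |A intersect B| / sqrt(|A| * |B|)."""
        a, b = set(a), set(b)
        if not a or not b:
            return 0.0
        return len(a & b) / (len(a) ** 0.5 * len(b) ** 0.5)

    # Toy co-following data: which accounts follow each cultural entity.
    followers = {
        "team_A":   {"u1", "u2", "u3", "u4", "u5"},
        "team_B":   {"u3", "u4", "u5", "u6"},
        "artist_C": {"u7", "u8"},
    }
    for x, y in [("team_A", "team_B"), ("team_A", "artist_C")]:
        print(x, y, round(jaccard(followers[x], followers[y]), 3),
              round(cosine(followers[x], followers[y]), 3))

    Both measures depend directly on the sizes of the follower sets, which is where the out-degree bias discussed in the paper enters.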

  1. Clinical Telemedicine Utilization in Ontario over the Ontario Telemedicine Network.

    Science.gov (United States)

    O'Gorman, Laurel D; Hogenbirk, John C; Warry, Wayne

    2016-06-01

    Northern Ontario is a region in Canada with approximately 775,000 people in communities scattered across 803,000 km². The Ontario Telemedicine Network (OTN) facilitates access to medical care in areas that are often underserved. We assessed how OTN utilization differed throughout the province. We used OTN medical service utilization data collected through the Ontario Health Insurance Plan and provided by the Ministry of Health and Long Term Care. Using census subdivisions grouped by Northern and Southern Ontario as well as urban and rural areas, we calculated utilization rates per fiscal year and total from 2008/2009 to 2013/2014. We also used billing codes to calculate utilization by therapeutic area of care. There were 652,337 OTN patient visits in Ontario from 2008/2009 to 2013/2014. Median annual utilization rates per 1,000 people were higher in northern areas (rural, 52.0; urban, 32.1) than in southern areas (rural, 6.1; urban, 3.1). The majority of usage in Ontario was in mental health and addictions (61.8%). Utilization in other areas of care such as surgery, oncology, and internal medicine was highest in the rural north, whereas primary care use was highest in the urban south. Utilization was higher and therapeutic areas of care were more diverse in rural Northern Ontario than in other parts of the province. Utilization was also higher in urban Northern Ontario than in Southern Ontario. This suggests that telemedicine is being used to improve access to medical care services, especially in sparsely populated regions of the province.

  2. LARGE-SCALE TOPOLOGICAL PROPERTIES OF MOLECULAR NETWORKS.

    Energy Technology Data Exchange (ETDEWEB)

    Maslov, S.; Sneppen, K.

    2003-11-17

    Bio-molecular networks lack top-down design. Instead, selective forces of biological evolution shape them from raw material provided by random events such as gene duplications and single gene mutations. As a result, individual connections in these networks are characterized by a large degree of randomness. One may wonder which connectivity patterns are indeed random, and which arose due to network growth, evolution, and/or fundamental design principles and limitations. Here we introduce a general method allowing one to construct a random null-model version of a given network while preserving the desired set of its low-level topological features, such as, e.g., the number of neighbors of individual nodes, the average level of modularity, preferential connections between particular groups of nodes, etc. Such a null-model network can then be used to detect and quantify the non-random topological patterns present in large networks. In particular, we measured correlations between degrees of interacting nodes in protein interaction and regulatory networks in yeast. It was found that in both these networks, links between highly connected proteins are systematically suppressed. This effect decreases the likelihood of cross-talk between different functional modules of the cell, and increases the overall robustness of a network by localizing effects of deleterious perturbations. It also teaches us about the overall computational architecture of such networks and points at the origin of large differences in the number of neighbors of individual nodes.
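
    A minimal sketch of the null-model idea, assuming only the simplest constraint (each node keeps its number of neighbours): edges are repeatedly rewired in degree-preserving double swaps, and the randomized copy is then compared with the original, e.g. via degree assortativity. The paper's method can preserve further low-level features; this sketch, which assumes networkx is available, shows just the basic variant with illustrative parameters.

    import random
    import networkx as nx

    def degree_preserving_null_model(G, swaps_per_edge=10, seed=0):
        """Return a randomized copy of G with the same degree sequence.

        Repeatedly picks two edges (a, b) and (c, d) and rewires them to
        (a, d) and (c, b) whenever this creates neither self-loops nor
        duplicate edges, so every node keeps its number of neighbours.
        """
        rng = random.Random(seed)
        H = G.copy()
        target = swaps_per_edge * H.number_of_edges()
        done = 0
        while done < target:
            (a, b), (c, d) = rng.sample(list(H.edges()), 2)
            if len({a, b, c, d}) < 4:
                continue                      # would create a self-loop
            if H.has_edge(a, d) or H.has_edge(c, b):
                continue                      # would create a duplicate edge
            H.remove_edges_from([(a, b), (c, d)])
            H.add_edges_from([(a, d), (c, b)])
            done += 1
        return H

    # Compare degree-degree correlations in a toy network and its null model.
    G = nx.barabasi_albert_graph(500, 3, seed=1)
    H = degree_preserving_null_model(G)
    print(sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree()))
    print(nx.degree_assortativity_coefficient(G),
          nx.degree_assortativity_coefficient(H))

    networkx also ships nx.double_edge_swap, which performs the same basic rewiring in place.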

  3. Defense strategies for asymmetric networked systems under composite utilities

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S. [ORNL; Ma, Chris Y. T. [Hang Seng Management College, Hong Kong; Hausken, Kjell [University of Stavanger, Norway; He, Fei [Texas A&M University, Kingsville, TX, USA; Yau, David K. Y. [Singapore University of Technology and Design; Zhuang, Jun [University at Buffalo (SUNY)

    2017-11-01

    We consider an infrastructure of networked systems with discrete components that can be reinforced at certain costs to guard against attacks. The communications network plays a critical, asymmetric role of providing the vital connectivity between the systems. We characterize the correlations within this infrastructure at two levels using (a) an aggregate failure correlation function that specifies the infrastructure failure probability given the failure of an individual system or network, and (b) first order differential conditions on system survival probabilities that characterize component-level correlations. We formulate an infrastructure survival game between an attacker and a provider, who attacks and reinforces individual components, respectively. They use composite utility functions composed of a survival probability term and a cost term, and the previously studied sum-form and product-form utility functions are their special cases. At Nash Equilibrium, we derive expressions for individual system survival probabilities and the expected total number of operational components. We apply and discuss these estimates for a simplified model of distributed cloud computing infrastructure.

  4. SIMULTANEOUS VISUALIZATION OF DIFFERENT UTILITY NETWORKS FOR DISASTER MANAGEMENT

    Directory of Open Access Journals (Sweden)

    S. Semm

    2012-07-01

    Full Text Available Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and enforce Situational Awareness by presenting and representing relevant information. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific decision-making throughout the crises. Since operators' attention span and working memory are limiting factors in the process of getting and interpreting information, the cartographic presentation has to support individuals in coordinating their activities and in handling highly dynamic situations. The Situational Awareness of operators in conjunction with a COP are key aspects of the decision-making process and essential for coming to appropriate decisions. Utility networks are one of the most complex and most needed systems within a city. The visualization of utility infrastructure in crisis situations is addressed in this paper. The paper will provide a conceptual approach on how to simplify, aggregate, and visualize multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.

  5. Simultaneous Visualization of Different Utility Networks for Disaster Management

    Science.gov (United States)

    Semm, S.; Becker, T.; Kolbe, T. H.

    2012-07-01

    Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and enforce Situational Awareness by presenting and representing relevant information. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific decision-making throughout the crises. Since operators' attention span and working memory are limiting factors in the process of getting and interpreting information, the cartographic presentation has to support individuals in coordinating their activities and in handling highly dynamic situations. The Situational Awareness of operators in conjunction with a COP are key aspects of the decision-making process and essential for coming to appropriate decisions. Utility networks are one of the most complex and most needed systems within a city. The visualization of utility infrastructure in crisis situations is addressed in this paper. The paper will provide a conceptual approach on how to simplify, aggregate, and visualize multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.

  6. Throughput Analysis of Large Wireless Networks with Regular Topologies

    Directory of Open Access Journals (Sweden)

    Hong Kezhu

    2007-01-01

    Full Text Available The throughput of large wireless networks with regular topologies is analyzed under two medium-access control schemes: synchronous array method (SAM) and slotted ALOHA. The regular topologies considered are square, hexagon, and triangle. Both nonfading channels and Rayleigh fading channels are examined. Furthermore, both omnidirectional antennas and directional antennas are considered. Our analysis shows that the SAM leads to a much higher network throughput than the slotted ALOHA. The network throughput in this paper is measured in either bits-hops per second per Hertz per node or bits-meters per second per Hertz per node. The exact connection between the two measures is shown for each topology. With these two fundamental units, the network throughput shown in this paper can serve as a reliable benchmark for future works on network throughput of large networks.

  7. Throughput Analysis of Large Wireless Networks with Regular Topologies

    Directory of Open Access Journals (Sweden)

    Kezhu Hong

    2007-04-01

    Full Text Available The throughput of large wireless networks with regular topologies is analyzed under two medium-access control schemes: synchronous array method (SAM) and slotted ALOHA. The regular topologies considered are square, hexagon, and triangle. Both nonfading channels and Rayleigh fading channels are examined. Furthermore, both omnidirectional antennas and directional antennas are considered. Our analysis shows that the SAM leads to a much higher network throughput than the slotted ALOHA. The network throughput in this paper is measured in either bits-hops per second per Hertz per node or bits-meters per second per Hertz per node. The exact connection between the two measures is shown for each topology. With these two fundamental units, the network throughput shown in this paper can serve as a reliable benchmark for future works on network throughput of large networks.

  8. Multisector Health Policy Networks in 15 Large US Cities

    Science.gov (United States)

    Leider, J. P.; Carothers, Bobbi J.; Castrucci, Brian C.; Hearne, Shelley

    2016-01-01

    Context: Local health departments (LHDs) have historically not prioritized policy development, although it is one of the 3 core areas they address. One strategy that may influence policy in LHD jurisdictions is the formation of partnerships across sectors to work together on local public health policy. Design: We used a network approach to examine LHD local health policy partnerships across 15 large cities from the Big Cities Health Coalition. Setting/Participants: We surveyed the health departments and their partners about their working relationships in 5 policy areas: core local funding, tobacco control, obesity and chronic disease, violence and injury prevention, and infant mortality. Outcome Measures: Drawing on prior literature linking network structures with performance, we examined network density, transitivity, centralization and centrality, member diversity, and assortativity of ties. Results: Networks included an average of 21.8 organizations. Nonprofits and government agencies made up the largest proportions of the networks, with 28.8% and 21.7% of network members, whereas for-profits and foundations made up the smallest proportions in all of the networks, with just 1.2% and 2.4% on average. Mean values of density, transitivity, diversity, assortativity, centralization, and centrality showed similarity across policy areas and most LHDs. The tobacco control and obesity/chronic disease networks were densest and most diverse, whereas the infant mortality policy networks were the most centralized and had the highest assortativity. Core local funding policy networks had lower scores than other policy area networks by most network measures. Conclusion: Urban LHDs partner with organizations from diverse sectors to conduct local public health policy work. Network structures are similar across policy areas and jurisdictions. Obesity and chronic disease, tobacco control, and infant mortality networks had structures consistent with higher performing networks, whereas
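
    As an illustration of the kinds of whole-network measures named above, the sketch below computes density, transitivity, degree centralization, and assortativity of ties by sector for a toy partnership network using networkx; the example network and the exact operationalizations are assumptions, not the study's data.

    import networkx as nx

    def network_profile(G):
        """Whole-network structural measures of the kind used to compare
        policy networks."""
        n = G.number_of_nodes()
        degrees = [d for _, d in G.degree()]
        # Freeman degree centralization: 1.0 for a star, 0.0 for a clique
        # (assumes n > 2).
        centralization = sum(max(degrees) - d for d in degrees) / ((n - 1) * (n - 2))
        return {
            "density": nx.density(G),
            "transitivity": nx.transitivity(G),
            "degree_centralization": centralization,
            "sector_assortativity": nx.attribute_assortativity_coefficient(G, "sector"),
        }

    # Toy policy network: a health department tied to partners from several sectors.
    G = nx.Graph()
    G.add_edges_from([("LHD", p) for p in
                      ["cityhall", "hospital", "foodbank", "university", "insurer"]])
    G.add_edges_from([("hospital", "foodbank"), ("cityhall", "university")])
    sectors = {"LHD": "government", "cityhall": "government", "hospital": "nonprofit",
               "foodbank": "nonprofit", "university": "education", "insurer": "for-profit"}
    nx.set_node_attributes(G, sectors, "sector")
    print(network_profile(G))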

  9. Learning Local Components to Understand Large Bayesian Networks

    DEFF Research Database (Denmark)

    Zeng, Yifeng; Xiang, Yanping; Cordero, Jorge

    2009-01-01

    Bayesian networks are known for providing an intuitive and compact representation of probabilistic information and allowing the creation of models over a large and complex domain. Bayesian learning and reasoning are nontrivial for a large Bayesian network. In parallel, it is a tough job for users (domain experts) to extract accurate information from a large Bayesian network due to dimensional difficulty. We define a formulation of local components and propose a clustering algorithm to learn such local components given complete data. The algorithm groups together most inter-relevant attributes in a domain. We evaluate its performance on three benchmark Bayesian networks and provide results in support. We further show that the learned components may represent local knowledge more precisely in comparison to the full Bayesian networks when working with a small amount of data.

  10. New Visions for Large Scale Networks: Research and Applications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This paper documents the findings of the March 12-14, 2001 Workshop on New Visions for Large-Scale Networks: Research and Applications. The workshop's objectives were...

  11. Large-scale networks in engineering and life sciences

    CERN Document Server

    Findeisen, Rolf; Flockerzi, Dietrich; Reichl, Udo; Sundmacher, Kai

    2014-01-01

    This edited volume provides insights into and tools for the modeling, analysis, optimization, and control of large-scale networks in the life sciences and in engineering. Large-scale systems are often the result of networked interactions between a large number of subsystems, and their analysis and control are becoming increasingly important. The chapters of this book present the basic concepts and theoretical foundations of network theory and discuss its applications in different scientific areas such as biochemical reactions, chemical production processes, systems biology, electrical circuits, and mobile agents. The aim is to identify common concepts, to understand the underlying mathematical ideas, and to inspire discussions across the borders of the various disciplines.  The book originates from the interdisciplinary summer school “Large Scale Networks in Engineering and Life Sciences” hosted by the International Max Planck Research School Magdeburg, September 26-30, 2011, and will therefore be of int...

  12. Scheduling Data Access in Smart Grid Networks Utilizing Context Information

    DEFF Research Database (Denmark)

    Findrik, Mislav; Grønbæk, Jesper; Olsen, Rasmus Løvenstein

    2014-01-01

    The current electrical grid is facing increased penetration of intermittent energy resources, in particular wind and solar energy. Fast variability of the power supply due to renewable energy resources can be balanced out using different energy storage systems or by shifting loads. Efficiently managing this fast flexibility requires two-way data exchange between a controller and sensors/meters via communication networks. In this paper we investigated scheduling of data collection utilizing meta-data from sensors that describe the dynamics of the information. We show the applicability...

  13. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    International Nuclear Information System (INIS)

    Rao, Nageswara S; Carter, Steven M; Wu Qishi; Wing, William R; Zhu Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provide multiple Gbps flows from Cray X1 to external hosts.

  14. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Nageswara S [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Carter, Steven M [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wu Qishi [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Wing, William R [Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Zhu Mengxia [Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803 (United States); Mezzacappa, Anthony [Physics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Veeraraghavan, Malathi [Department of Computer Science, University of Virginia, Charlottesville, VA 22904 (United States); Blondin, John M [Department of Physics, North Carolina State University, Raleigh, NC 27695 (United States)

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configuration and protocols that provide multiple Gbps flows from Cray X1 to external hosts.

  15. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture which results in distributed internal microdamage which dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  16. PKI security in large-scale healthcare networks.

    Science.gov (United States)

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years a lot of PKI (Public Key Infrastructures) infrastructures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKI infrastructures. Especially, there are a lot of challenges for PKI infrastructures deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure facilitates the trust issues that arise in a large-scale healthcare network including multi-domain PKI infrastructures.

  17. Episodic memory in aspects of large-scale brain networks

    Science.gov (United States)

    Jeong, Woorim; Chung, Chun Kee; Kim, June Sic

    2015-01-01

    Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network (DMN). Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network (RSN). Altered patterns of functional connectivity (FC) among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment. PMID:26321939

  18. Episodic memory in aspects of large-scale brain networks

    Directory of Open Access Journals (Sweden)

    Woorim eJeong

    2015-08-01

    Full Text Available Understanding human episodic memory in aspects of large-scale brain networks has become one of the central themes in neuroscience over the last decade. Traditionally, episodic memory was regarded as mostly relying on medial temporal lobe (MTL) structures. However, recent studies have suggested involvement of more widely distributed cortical network and the importance of its interactive roles in the memory process. Both direct and indirect neuro-modulations of the memory network have been tried in experimental treatments of memory disorders. In this review, we focus on the functional organization of the MTL and other neocortical areas in episodic memory. Task-related neuroimaging studies together with lesion studies suggested that specific sub-regions of the MTL are responsible for specific components of memory. However, recent studies have emphasized that connectivity within MTL structures and even their network dynamics with other cortical areas are essential in the memory process. Resting-state functional network studies also have revealed that memory function is subserved by not only the MTL system but also a distributed network, particularly the default-mode network. Furthermore, researchers have begun to investigate memory networks throughout the entire brain not restricted to the specific resting-state network. Altered patterns of functional connectivity among distributed brain regions were observed in patients with memory impairments. Recently, studies have shown that brain stimulation may impact memory through modulating functional networks, carrying future implications of a novel interventional therapy for memory impairment.

  19. A Gossip-based Churn Estimator for Large Dynamic Networks

    NARCIS (Netherlands)

    Giuffrida, C.; Ortolani, S.

    2010-01-01

    Gossip-based aggregation is an emerging paradigm to perform distributed computations and measurements in a large-scale setting. In this paper we explore the possibility of using gossip-based aggregation to estimate churn in arbitrarily large networks. To this end, we introduce a new model to compute
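
    The churn estimator itself is not shown in the record, but the gossip-based aggregation primitive it builds on can be sketched in a few lines: nodes repeatedly average their local values with random neighbours, and all estimates converge to the global mean without a coordinator. The round-based model, the ring topology, and the node values below are illustrative assumptions.

    import random

    def gossip_average(values, neighbors, rounds=50, seed=0):
        """Round-based pairwise-averaging gossip.

        values    : dict node -> locally observed value
        neighbors : dict node -> list of neighbour nodes
        Each round every node picks a random neighbour and both replace
        their estimates with the pair's average.
        """
        rng = random.Random(seed)
        est = dict(values)
        nodes = list(est)
        for _ in range(rounds):
            for node in rng.sample(nodes, len(nodes)):
                peer = rng.choice(neighbors[node])
                avg = (est[node] + est[peer]) / 2.0
                est[node] = est[peer] = avg
        return est

    # Ring of 8 nodes, each starting with a different local observation.
    nodes = list(range(8))
    nbrs = {i: [(i - 1) % 8, (i + 1) % 8] for i in nodes}
    start = {i: float(i) for i in nodes}
    estimates = gossip_average(start, nbrs)
    print({k: round(v, 3) for k, v in estimates.items()},
          "true mean:", sum(start.values()) / len(start))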

  20. Software Defined Optics and Networking for Large Scale Data Centers

    DEFF Research Database (Denmark)

    Mehmeri, Victor; Andrus, Bogdan-Mihai; Tafur Monroy, Idelfonso

    Big data imposes correlations of large amounts of information between numerous systems and databases. This leads to large dynamically changing flows and traffic patterns between clusters and server racks that result in a decrease of the quality of transmission and degraded application performance... Highly interconnected topologies combined with flexible, on-demand network configuration can become a solution to the ever-increasing dynamic traffic...

  1. Future vision of advanced telecommunication networks for electric utilities; Denki jigyo ni okeru joho tsushin network no shorai vision

    Energy Technology Data Exchange (ETDEWEB)

    Tonaru, S.; Ono, K.; Sakai, S.; Kawai, Y.; Tsuboi, A. [Central Research Institute of Electric Power Industry, Tokyo (Japan); Manabe, S. [Shikoku Electric Power Co., Inc., Kagawa (Japan); Miki, Y. [Kansai Electric Power Co. Inc., Osaka (Japan)

    1995-06-01

    The vision of an advanced information system is proposed to cope with future social demand and changes in the business environment of electric utilities. At a major turning point, such as the drastic reconsideration of the Electricity Utilities Industry Law, further improvement of efficiency and cost reduction are required, as well as business innovation such as the proposal of new business policies. For that purpose, utilization of information and information technology is indispensable, and the use of multimedia and the sharing of information across the organization are the future directions for improving the information basis. Consequently, information networks free of limitations tied to particular persons or media are necessary, and the following are important: high-speed, high-frequency-band, digital, easily connectable, multimedia transmission lines, together with cost reduction and high reliability of networks. Based on innovation of information networks and a clear principle for the advanced information system, development of new applications using multimedia technologies, diffusion of communication terminals, and promotion of standardization are essential. 60 refs., 30 figs., 5 tabs.

  2. Impact of heuristics in clustering large biological networks.

    Science.gov (United States)

    Shafin, Md Kishwar; Kabir, Kazi Lutful; Ridwan, Iffatur; Anannya, Tasmiah Tamzid; Karim, Rashid Saadman; Hoque, Mohammad Mozammel; Rahman, M Sohel

    2015-12-01

    Traditional clustering algorithms often exhibit poor performance for large networks. On the contrary, greedy algorithms are found to be relatively efficient while uncovering functional modules from large biological networks. The quality of the clusters produced by these greedy techniques largely depends on the underlying heuristics employed. Different heuristics based on different attributes and properties perform differently in terms of the quality of the clusters produced. This motivates us to design new heuristics for clustering large networks. In this paper, we propose two new heuristics and analyze their performance after incorporating them, in three different combinations, into a recently celebrated greedy clustering algorithm named SPICi. We have extensively analyzed the effectiveness of these new variants. The results are found to be promising. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Assessing Residential Customer Satisfaction for Large Electric Utilities

    OpenAIRE

    Lea Kosnik; L. Douglas Smith; Satish Nayak; Maureen Karig; Mark Konya; Kristy Lovett; Zhennan Liu; Harrison Luvai

    2015-01-01

    Electric utilities, like other service organizations, rely on customer surveys to assess the quality of their services and customer relations. With responses to an in-depth survey of 2,216 residential customers, complementary data from geo-coded public sources, aggregate assessments of performance by J.D. Power & Associates from their independent surveys, historical records of individual customer usage and bill payments, streams of published media content and records of actual service deliver...

  4. Pythoscape: a framework for generation of large protein similarity networks.

    Science.gov (United States)

    Barber, Alan E; Babbitt, Patricia C

    2012-11-01

    Pythoscape is a framework implemented in Python for processing large protein similarity networks for visualization in other software packages. Protein similarity networks are graphical representations of sequence, structural and other similarities among proteins for which pairwise all-by-all similarity connections have been calculated. Mapping of biological and other information to network nodes or edges enables hypothesis creation about sequence-structure-function relationships across sets of related proteins. Pythoscape provides several options to calculate pairwise similarities for input sequences or structures, applies filters to network edges and defines sets of similar nodes and their associated data as single nodes (termed representative nodes) for compression of network information and output data or formatted files for visualization.
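
    Pythoscape's own API is not reproduced here; the following generic Python sketch only illustrates the underlying workflow the abstract describes: keep the pairwise similarities above an edge threshold, then collapse nearly identical sequences into representative nodes. All names, scores, and thresholds are illustrative.

    def build_similarity_network(scores, threshold):
        """Keep only edges whose pairwise similarity passes the threshold."""
        return {pair: s for pair, s in scores.items() if s >= threshold}

    def representative_nodes(edges, collapse_at):
        """Merge nodes joined by near-identical similarity into representative
        groups (a small union-find), mirroring network compression."""
        parent = {}

        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for (a, b), s in edges.items():
            if s >= collapse_at:
                parent[find(a)] = find(b)
        groups = {}
        for node in {n for pair in edges for n in pair}:
            groups.setdefault(find(node), set()).add(node)
        return groups

    # Toy all-by-all similarity scores (e.g. -log10 of pairwise E-values).
    scores = {("p1", "p2"): 95.0, ("p1", "p3"): 40.0, ("p2", "p3"): 42.0,
              ("p3", "p4"): 12.0, ("p1", "p4"): 5.0, ("p2", "p4"): 4.0}
    edges = build_similarity_network(scores, threshold=10.0)
    print("edges kept:", edges)
    print("representative groups:", representative_nodes(edges, collapse_at=90.0))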

  5. PKI security in large-scale healthcare networks

    OpenAIRE

    Mantas, G.; Lymberopoulos, D.; Komninos, N.

    2012-01-01

    During the past few years a lot of PKI (Public Key Infrastructures) infrastructures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, there is a plethora of challenges in these healthcare PKI infrastructures. Especially, there are a lot of challenges for PKI infrastructures deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a ...

  6. Research on Large-Scale Road Network Partition and Route Search Method Combined with Traveler Preferences

    Directory of Open Access Journals (Sweden)

    De-Xin Yu

    2013-01-01

    Full Text Available Combined with an improved Pallottino parallel algorithm, this paper proposes a large-scale route search method that considers travelers' route choice preferences, and the urban road network is decomposed effectively into multiple layers. Utilizing generalized travel time as the road impedance function, the method builds a new multilayer, multitasking road network data storage structure with object-oriented class definitions. The proposed path search algorithm is then verified using the real road network of Guangzhou city as an example. Through sensitivity experiments, we compare the proposed path search method with current advanced optimal path algorithms. The results demonstrate that the proposed method can increase road network search efficiency by more than 16% under different search proportion requests, node numbers, and computing process numbers. Therefore, this method is a significant advance in the field of urban road network route guidance.
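
    The multilayer decomposition and the parallel Pallottino implementation are not reproduced here; the sketch below only shows the core of such a route search in Python, a shortest-path computation in which the edge impedance is a generalized travel time combining in-vehicle time with a preference-weighted penalty. The graph, weights, and field names are illustrative assumptions.

    import heapq

    def generalized_cost(edge, alpha=1.0, beta=0.5):
        """Generalized travel time: in-vehicle time plus a preference-weighted
        penalty (e.g. tolls or disliked road classes), in minutes."""
        return alpha * edge["time"] + beta * edge["penalty"]

    def shortest_path(graph, origin, destination, cost=generalized_cost):
        """Plain Dijkstra over a dict-of-dicts road graph."""
        queue = [(0.0, origin, [origin])]
        settled = set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == destination:
                return dist, path
            if node in settled:
                continue
            settled.add(node)
            for nxt, edge in graph.get(node, {}).items():
                if nxt not in settled:
                    heapq.heappush(queue, (dist + cost(edge), nxt, path + [nxt]))
        return float("inf"), []

    # Tiny road network: the expressway is faster but carries a toll penalty.
    graph = {
        "A": {"B": {"time": 4, "penalty": 6}, "C": {"time": 7, "penalty": 0}},
        "B": {"D": {"time": 4, "penalty": 6}},
        "C": {"D": {"time": 8, "penalty": 0}},
        "D": {},
    }
    print(shortest_path(graph, "A", "D"))       # -> (14.0, ['A', 'B', 'D'])

    Changing the preference weight beta shifts the chosen route between the tolled and the toll-free alternative, which is how route-choice preferences enter the impedance.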

  7. Optimal assignment of multiple utilities in heat exchange networks

    International Nuclear Information System (INIS)

    Salama, A.I.A.

    2009-01-01

    Existing numerical geometry-based techniques, developed by [A.I.A. Salama, Numerical techniques for determining heat energy targets in pinch analysis, Computers and Chemical Engineering 29 (2005) 1861-1866; A.I.A. Salama, Determination of the optimal heat energy targets in heat pinch analysis using a geometry-based approach, Computers and Chemical Engineering 30 (2006) 758-764], have been extended to optimally assign multiple utilities in heat exchange networks (HEN). These techniques utilize the horizontal shift between the cold composite curve (CC) and the stationary hot CC to determine the HEN optimal energy targets, the grand composite curve (GCC), and the complement grand composite curve (CGCC). The numerical technique developed in this paper is direct and simultaneously determines the optimal heat-energy targets and optimally assigns multiple utilities, as compared with an existing technique based on sequential assignment of multiple utilities. The technique starts by arranging in ascending order the HEN stream and target temperatures, and the resulting set is labelled T. Furthermore, the temperature sets where multiple utilities are introduced are arranged in ascending order and are labelled T_ic and T_ih for the cold and hot sides, respectively. The graphical presentation of the results is facilitated by the insertion, at each multiple-utility temperature, of a perturbed temperature equal to the insertion temperature minus a small perturbation. Furthermore, using the heat exchanger network (HEN) minimum temperature-differential approach (ΔT_min) and stream heat-capacity flow rates, the presentation is facilitated by using the conventional temperature shift of the HEN CCs. The set of temperature-shifted stream and target temperatures and perturbed temperatures in the overlap range between the CCs is labelled T_ol. Using T_ol, a simple formula employing enthalpy-flow differences between the hot composite curve CC_h and the cold composite curve CC_c is
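
    The geometry-based shifting technique of the paper is not reproduced here. As background for the energy-target quantities it refines, the sketch below implements the classic problem-table cascade, which yields the minimum hot and cold utility targets from stream data and ΔT_min; the stream set is a standard textbook-style example, not one from the paper.

    def utility_targets(streams, dt_min):
        """Classic problem-table algorithm for heat-integration targets.

        streams : list of dicts with supply/target temperatures (°C) and
                  heat-capacity flow rate CP (kW/°C); hot streams cool down,
                  cold streams heat up.
        Returns (minimum hot utility, minimum cold utility) in kW.
        """
        shifted = []
        for s in streams:
            hot = s["supply"] > s["target"]
            shift = -dt_min / 2 if hot else dt_min / 2
            shifted.append((s["supply"] + shift, s["target"] + shift, s["cp"], hot))

        bounds = sorted({t for s in shifted for t in s[:2]}, reverse=True)
        heat_flows = []
        for hi, lo in zip(bounds, bounds[1:]):
            net_cp = 0.0
            for supply, target, cp, hot in shifted:
                top, bot = max(supply, target), min(supply, target)
                if top >= hi and bot <= lo:          # stream spans this interval
                    net_cp += cp if hot else -cp
            heat_flows.append(net_cp * (hi - lo))    # surplus (+) or deficit (-)

        # Cascade the interval heat downwards; the largest deficit sets Q_hot,min.
        cascade, running = [0.0], 0.0
        for q in heat_flows:
            running += q
            cascade.append(running)
        q_hot_min = -min(cascade)
        q_cold_min = cascade[-1] + q_hot_min
        return q_hot_min, q_cold_min

    # Four-stream example with ΔT_min = 10 °C.
    streams = [
        {"supply": 250, "target": 40,  "cp": 0.15},   # hot
        {"supply": 200, "target": 80,  "cp": 0.25},   # hot
        {"supply": 20,  "target": 180, "cp": 0.20},   # cold
        {"supply": 140, "target": 230, "cp": 0.30},   # cold
    ]
    print(utility_targets(streams, dt_min=10.0))      # -> (7.5, 10.0) up to rounding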

  8. Predicting Positive and Negative Relationships in Large Social Networks.

    Directory of Open Access Journals (Sweden)

    Guan-Nan Wang

    Full Text Available In a social network, users hold and express positive and negative attitudes (e.g. support/opposition) towards other users. Those attitudes exhibit some kind of binary relationships among the users, which play an important role in social network analysis. However, some of those binary relationships are likely to be latent as the scale of social network increases. The essence of predicting latent binary relationships has recently begun to draw researchers' attention. In this paper, we propose a machine learning algorithm for predicting positive and negative relationships in social networks inspired by structural balance theory and social status theory. More specifically, we show that when two users in the network have fewer common neighbors, the prediction accuracy of the relationship between them deteriorates. Accordingly, in the training phase, we propose a segment-based training framework to divide the training data into two subsets according to the number of common neighbors between users, and build a prediction model for each subset based on support vector machine (SVM). Moreover, to deal with large-scale social network data, we employ a sampling strategy that selects a small amount of training data while maintaining high accuracy of prediction. We compare our algorithm with traditional algorithms and adaptive boosting of them. Experimental results of typical data sets show that our algorithm can deal with large social networks and consistently outperforms other methods.

  9. Predicting Positive and Negative Relationships in Large Social Networks.

    Science.gov (United States)

    Wang, Guan-Nan; Gao, Hui; Chen, Lian; Mensah, Dennis N A; Fu, Yan

    2015-01-01

    In a social network, users hold and express positive and negative attitudes (e.g. support/opposition) towards other users. Those attitudes exhibit some kind of binary relationships among the users, which play an important role in social network analysis. However, some of those binary relationships are likely to be latent as the scale of social network increases. The essence of predicting latent binary relationships has recently begun to draw researchers' attention. In this paper, we propose a machine learning algorithm for predicting positive and negative relationships in social networks inspired by structural balance theory and social status theory. More specifically, we show that when two users in the network have fewer common neighbors, the prediction accuracy of the relationship between them deteriorates. Accordingly, in the training phase, we propose a segment-based training framework to divide the training data into two subsets according to the number of common neighbors between users, and build a prediction model for each subset based on support vector machine (SVM). Moreover, to deal with large-scale social network data, we employ a sampling strategy that selects a small amount of training data while maintaining high accuracy of prediction. We compare our algorithm with traditional algorithms and adaptive boosting of them. Experimental results of typical data sets show that our algorithm can deal with large social networks and consistently outperforms other methods.
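
    A schematic Python sketch of the segment-based idea described above, assuming scikit-learn is available: node pairs are split by their number of common neighbours and one SVM is fitted per segment. The features, the split threshold, and the synthetic labels are placeholders, not the paper's feature set.

    import numpy as np
    from sklearn.svm import SVC

    def train_segmented(features, common_neighbors, labels, split_at=3):
        """Fit one SVM per segment of the training pairs; pairs with few
        common neighbours are modelled separately from the rest."""
        models = {}
        segments = {"few_common": common_neighbors < split_at,
                    "many_common": common_neighbors >= split_at}
        for name, mask in segments.items():
            models[name] = SVC(kernel="rbf").fit(features[mask], labels[mask])
        return models

    def predict_segmented(models, features, common_neighbors, split_at=3):
        out = np.empty(len(features), dtype=int)
        few = common_neighbors < split_at
        out[few] = models["few_common"].predict(features[few])
        out[~few] = models["many_common"].predict(features[~few])
        return out

    # Synthetic pair features (e.g. degrees, triad counts) and +/- labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 4))
    cn = rng.integers(0, 8, size=400)             # common-neighbour counts
    y = (X[:, 0] + 0.3 * cn + rng.normal(0, 0.5, 400) > 1).astype(int)
    models = train_segmented(X, cn, y)
    print("training accuracy:", (predict_segmented(models, X, cn) == y).mean())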

  10. Quality Utilization Aware Based Data Gathering for Vehicular Communication Networks

    Directory of Open Access Journals (Sweden)

    Yingying Ren

    2018-01-01

    Full Text Available Vehicular communication networks, which can employ mobile, intelligent sensing devices with participatory sensing to gather data, could be an efficient and economical way to build various applications based on big data. However, high-quality data gathering for vehicular communication networks, which is urgently needed, faces many challenges. In this paper, a fine-grained data collection framework is therefore proposed to cope with these new challenges. Different from classical data gathering, which concentrates on how to collect enough data to satisfy the requirements of applications, a Quality Utilization Aware Data Gathering (QUADG) scheme is proposed for vehicular communication networks to collect the most appropriate data and to best satisfy the multidimensional requirements (mainly data gathering quantity, quality, and cost) of the application. In the QUADG scheme, the data sensing is fine-grained: the data gathering time and data gathering area are divided at very fine granularity. A metric named "Quality Utilization" (QU) quantifies the ratio of the quality of the collected sensing data to the cost of the system. Three data collection algorithms are proposed. The first algorithm ensures that an application which has specified the quantity of sensing data can minimize the cost and maximize data quality by maximizing QU. The second algorithm ensures that an application which has specified two requirements (the quantity and quality of data collection, or the quantity and cost of data collection) can maximize the QU. The third algorithm ensures that an application which aims to satisfy the requirements of quantity, quality, and cost of collected data simultaneously can maximize the QU. Finally, we compare our proposed scheme with existing schemes via extensive simulations, which confirm the effectiveness of our scheme.
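
    A toy sketch of the QU idea as defined above (ratio of the quality of the collected data to its cost), paired with a simple greedy selection that keeps picking the report with the best quality-to-cost ratio until the requested quantity is met. The three algorithms of the paper are not reproduced, and the data fields and numbers are illustrative.

    def quality_utilization(selected):
        """QU = total quality of the collected data / total collection cost."""
        quality = sum(r["quality"] for r in selected)
        cost = sum(r["cost"] for r in selected)
        return quality / cost if cost else 0.0

    def greedy_collect(reports, required_quantity):
        """Pick sensing reports by best quality-to-cost ratio until the
        application's quantity requirement is met."""
        ranked = sorted(reports, key=lambda r: r["quality"] / r["cost"], reverse=True)
        chosen = ranked[:required_quantity]
        return chosen, quality_utilization(chosen)

    # Candidate reports from vehicles covering one space-time cell.
    reports = [
        {"vehicle": "v1", "quality": 0.9, "cost": 2.0},
        {"vehicle": "v2", "quality": 0.7, "cost": 1.0},
        {"vehicle": "v3", "quality": 0.5, "cost": 1.5},
        {"vehicle": "v4", "quality": 0.8, "cost": 3.0},
    ]
    chosen, qu = greedy_collect(reports, required_quantity=2)
    print([r["vehicle"] for r in chosen], round(qu, 3))   # -> ['v2', 'v1'] 0.533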

  11. Optimization of workflow scheduling in Utility Management System with hierarchical neural network

    Directory of Open Access Journals (Sweden)

    Srdjan Vukmirovic

    2011-08-01

    Full Text Available Grid computing could be the future computing paradigm for enterprise applications, one of its benefits being that it can be used for executing large-scale applications. Utility Management Systems execute very large numbers of workflows with very high resource requirements. This paper proposes an architecture for a new scheduling mechanism that dynamically executes a scheduling algorithm using feedback about the current status of Grid nodes. Two Artificial Neural Networks were created in order to solve the scheduling problem. A case study is created for the Meter Data Management system with measurements from the Smart Metering system for the city of Novi Sad, Serbia. Performance tests show that significant improvement of overall execution time can be achieved by Hierarchical Artificial Neural Networks.

  12. Algorithmic network monitoring for a modern water utility: a case study in Jerusalem.

    Science.gov (United States)

    Armon, A; Gutner, S; Rosenberg, A; Scolnicov, H

    2011-01-01

    We report on the design, deployment, and use of TaKaDu, a real-time algorithmic Water Infrastructure Monitoring solution, with a strong focus on water loss reduction and control. TaKaDu is provided as a commercial service to several customers worldwide. It has been in use at HaGihon, the Jerusalem utility, since mid 2009. Water utilities collect considerable real-time data from their networks, e.g. by means of a SCADA system and sensors measuring flow, pressure, and other data. We discuss how an algorithmic statistical solution analyses this wealth of raw data, flexibly using many types of input and picking out and reporting significant events and failures in the network. Of particular interest to most water utilities is the early detection capability for invisible leaks, also a means for preventing large visible bursts. The system also detects sensor and SCADA failures, various water quality issues, DMA boundary breaches, unrecorded or unintended network changes (like a valve or pump state change), and other events, including types unforeseen during system design. We discuss results from use at HaGihon, showing clear operational value.

  13. Rapid Modeling of and Response to Large Earthquakes Using Real-Time GPS Networks (Invited)

    Science.gov (United States)

    Crowell, B. W.; Bock, Y.; Squibb, M. B.

    2010-12-01

    Real-time GPS networks have the advantage of capturing motions throughout the entire earthquake cycle (interseismic, seismic, coseismic, postseismic), and because of this, are ideal for real-time monitoring of fault slip in the region. Real-time GPS networks provide the perfect supplement to seismic networks, which operate with lower noise and higher sampling rates than GPS networks, but only measure accelerations or velocities, putting them at a supreme disadvantage for ascertaining the full extent of slip during a large earthquake in real-time. Here we report on two examples of rapid modeling of recent large earthquakes near large regional real-time GPS networks. The first utilizes Japan’s GEONET consisting of about 1200 stations during the 2003 Mw 8.3 Tokachi-Oki earthquake about 100 km offshore Hokkaido Island and the second investigates the 2010 Mw 7.2 El Mayor-Cucapah earthquake recorded by more than 100 stations in the California Real Time Network. The principal components of strain were computed throughout the networks and utilized as a trigger to initiate earthquake modeling. Total displacement waveforms were then computed in a simulated real-time fashion using a real-time network adjustment algorithm that fixes a station far away from the rupture to obtain a stable reference frame. Initial peak ground displacement measurements can then be used to obtain an initial size through scaling relationships. Finally, a full coseismic model of the event can be run minutes after the event, given predefined fault geometries, allowing emergency first responders and researchers to pinpoint the regions of highest damage. Furthermore, we are also investigating using total displacement waveforms for real-time moment tensor inversions to look at spatiotemporal variations in slip.

  14. Structural Quality of Service in Large-Scale Networks

    DEFF Research Database (Denmark)

    Pedersen, Jens Myrup

    Digitalization has created the base for co-existence and convergence in communications, leading to an increasing use of multi-service networks. This is for example seen in the Fiber To The Home implementations, where a single fiber is used for virtually all means of communication, including TV, telephony and data. To meet the requirements of the different applications, and to handle the increased vulnerability to failures, the ability to design robust networks providing good Quality of Service is crucial. However, most planning of large-scale networks today is ad-hoc based, leading to highly complex networks lacking predictability and global structural properties. The thesis applies the concept of Structural Quality of Service to formulate desirable global properties, and it shows how regular graph structures can be used to obtain such properties.

  15. Large Amplitude Oscillatory Extension of Soft Polymeric Networks

    DEFF Research Database (Denmark)

    Bejenariu, Anca Gabriela; Rasmussen, Henrik K.; Skov, Anne Ladegaard

    2010-01-01

    Using a filament stretching rheometer (FSR) surrounded by a thermostatic chamber and equipped with a micrometric laser, it is possible to measure large amplitude oscillatory elongation (LAOE) on elastomer-based networks with no base flow as in the LAOE method for polymer melts. Poly(dimethylsiloxane)...

  16. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  17. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  18. Development of large-scale functional brain networks in children.

    Directory of Open Access Journals (Sweden)

    Kaustubh Supekar

    2009-07-01

    Full Text Available The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.

  19. Development of large-scale functional brain networks in children.

    Science.gov (United States)

    Supekar, Kaustubh; Musen, Mark; Menon, Vinod

    2009-07-01

    The ontogeny of large-scale functional organization of the human brain is not well understood. Here we use network analysis of intrinsic functional connectivity to characterize the organization of brain networks in 23 children (ages 7-9 y) and 22 young-adults (ages 19-22 y). Comparison of network properties, including path-length, clustering-coefficient, hierarchy, and regional connectivity, revealed that although children and young-adults' brains have similar "small-world" organization at the global level, they differ significantly in hierarchical organization and interregional connectivity. We found that subcortical areas were more strongly connected with primary sensory, association, and paralimbic areas in children, whereas young-adults showed stronger cortico-cortical connectivity between paralimbic, limbic, and association areas. Further, combined analysis of functional connectivity with wiring distance measures derived from white-matter fiber tracking revealed that the development of large-scale brain networks is characterized by weakening of short-range functional connectivity and strengthening of long-range functional connectivity. Importantly, our findings show that the dynamic process of over-connectivity followed by pruning, which rewires connectivity at the neuronal level, also operates at the systems level, helping to reconfigure and rebalance subcortical and paralimbic connectivity in the developing brain. Our study demonstrates the usefulness of network analysis of brain connectivity to elucidate key principles underlying functional brain maturation, paving the way for novel studies of disrupted brain connectivity in neurodevelopmental disorders such as autism.

  20. Enhancement of large fluctuations to extinction in adaptive networks

    Science.gov (United States)

    Hindes, Jason; Schwartz, Ira B.; Shaw, Leah B.

    2018-01-01

    During an epidemic, individual nodes in a network may adapt their connections to reduce the chance of infection. A common form of adaptation is avoidance rewiring, where a noninfected node breaks a connection to an infected neighbor and forms a new connection to another noninfected node. Here we explore the effects of such adaptivity on stochastic fluctuations in the susceptible-infected-susceptible model, focusing on the largest fluctuations that result in extinction of infection. Using techniques from large-deviation theory, combined with a measurement of heterogeneity in the susceptible degree distribution at the endemic state, we are able to predict and analyze large fluctuations and extinction in adaptive networks. We find that in the limit of small rewiring there is a sharp exponential reduction in mean extinction times compared to the case of zero adaptation. Furthermore, we find an exponential enhancement in the probability of large fluctuations with increased rewiring rate, even when holding the average number of infected nodes constant.
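
    As an illustration of the adaptive rewiring mechanism described in this record, the following is a minimal discrete-time Python sketch of SIS dynamics with avoidance rewiring. The graph, the infection/recovery probabilities and the rewiring probability are illustrative assumptions, not parameters from the paper, and the discrete-time update is only an approximation of the continuous-time model studied there.

      import random
      import networkx as nx

      def adaptive_sis(G, beta=0.1, mu=0.05, w=0.02, steps=500, seed=1):
          """Discrete-time SIS dynamics with avoidance rewiring (illustrative sketch).

          beta: per-contact infection probability per step
          mu:   recovery probability per step
          w:    probability that a susceptible node rewires away from an infected neighbor
          """
          rng = random.Random(seed)
          infected = set(rng.sample(list(G.nodes), max(1, G.number_of_nodes() // 10)))
          history = []
          for _ in range(steps):
              new_infected = set(infected)
              for u, v in list(G.edges):
                  s, i = (u, v) if v in infected and u not in infected else (v, u)
                  if i in infected and s not in infected:
                      if rng.random() < w:
                          # avoidance rewiring: break the S-I link, reattach S to a random susceptible
                          targets = [n for n in G.nodes
                                     if n not in infected and n != s and not G.has_edge(s, n)]
                          if targets:
                              G.remove_edge(u, v)
                              G.add_edge(s, rng.choice(targets))
                              continue
                      if rng.random() < beta:
                          new_infected.add(s)
              for n in list(new_infected):
                  if n in infected and rng.random() < mu:
                      new_infected.discard(n)       # recovery back to susceptible
              infected = new_infected
              history.append(len(infected))
              if not infected:                      # extinction of the infection
                  break
          return history

      if __name__ == "__main__":
          G = nx.erdos_renyi_graph(200, 0.05, seed=1)
          print(adaptive_sis(G)[-10:])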

  1. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Base Models (ABM) to optimize large scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedure implementations developed in MATLAB to simulate agent based models in a principal programming language and mathematical theory, using compute clusters; these clusters provide the high-performance computation needed to run the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  2. Efficient network monitoring for large data acquisition systems

    International Nuclear Information System (INIS)

    Savu, D.O.; Martin, B.; Al-Shabibi, A.; Sjoen, R.; Batraneanu, S.M.; Stancu, S.N.

    2012-01-01

    Though constantly evolving and improving, the available network monitoring solutions have limitations when applied to the infrastructure of a high speed realtime data acquisition (DAQ) system. DAQ networks are particular computer networks where experts have to pay attention to both individual subsections as well as system wide traffic flows while monitoring the network. The ATLAS Network at the Large Hadron Collider (LHC) has more than 200 switches interconnecting 3500 hosts and totaling 8500 high speed links. The use of heterogeneous tools for monitoring various infrastructure parameters, in order to assure optimal DAQ system performance, proved to be a tedious and time consuming task for experts. To alleviate this problem we used our networking and DAQ expertise to build a flexible and scalable monitoring system providing an intuitive user interface with the same look and feel irrespective of the data provider that is used. Our system uses custom developed components for critical performance monitoring and seamlessly integrates complementary data from auxiliary tools, such as NAGIOS, information services or custom databases. A number of techniques (e.g. normalization, aggregation and data caching) were used in order to improve the user interface response time. The end result is a unified monitoring interface, for fast and uniform access to system statistics, which significantly reduced the time spent by experts for ad-hoc and post-mortem analysis. (authors)

  3. Reorganizing Complex Network to Improve Large-Scale Multiagent Teamwork

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2014-01-01

    Full Text Available Large-scale multiagent teamwork has been popular in various domains. Similar to human society infrastructure, agents only coordinate with some of the others, with a peer-to-peer complex network structure. Their organization has been proven to be a key factor influencing their performance. To improve team performance, we identify three key factors. First, complex network effects may be able to promote team performance. Second, coordination interactions originating at their sources must be routed to capable agents. Although they can be transferred across the network via different paths, their sources and sinks depend on the intrinsic nature of the team, which is irrelevant to the network connections. In addition, the agents involved in the same plan often form a subteam and communicate with each other more frequently. Therefore, if the interactions between agents can be statistically recorded, we are able to set up an integrated network adjustment algorithm by combining the three key factors. Based on our abstracted teamwork simulations and the coordination statistics, we implemented the adaptive reorganization algorithm. The experimental results briefly support our design that the reorganized network is more capable of coordinating heterogeneous agents.

  4. On Hybrid Energy Utilization in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Mohammad Tala’t

    2017-11-01

    Full Text Available In a wireless sensor network (WSN), many applications have limited energy resources for data transmission. In order to accomplish better green communication for WSN, a hybrid energy scheme can supply a more reliable energy source. In this article, hybrid energy utilization, which consists of a constant energy source and solar harvested energy, is considered for WSN. To minimize constant energy usage from the hybrid source, a Markov decision process (MDP) is designed to find the optimal transmission policy. With a finite packet buffer and a finite battery size, an MDP model is presented to define the states, actions, state transition probabilities, and the cost function including the cost values for all actions. A weighted sum of constant energy source consumption and a packet dropping probability (PDP) is adopted as the cost value, enabling us to find the optimal solution for balancing the minimization of the constant energy source utilization and the PDP using a value iteration algorithm. As shown in the simulation results, the performance of the optimal solution using MDP achieves a significant improvement compared to the solution without its use.
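
    The optimal policy mentioned in this record is obtained with value iteration over an MDP. The sketch below is a generic Python/NumPy value-iteration routine applied to a toy transmit-or-hold problem; the three states, transition probabilities and cost weights are placeholders invented for illustration, not the buffer/battery model of the paper.

      import numpy as np

      def value_iteration(P, C, gamma=0.95, tol=1e-6):
          """Generic value iteration for a finite MDP (minimizing expected discounted cost).

          P: array (A, S, S) with P[a, s, s'] = transition probability
          C: array (A, S) with the immediate cost of action a in state s
          """
          V = np.zeros(P.shape[1])
          while True:
              Q = C + gamma * np.einsum("ast,t->as", P, V)   # Q[a, s]
              V_new = Q.min(axis=0)
              if np.max(np.abs(V_new - V)) < tol:
                  return V_new, Q.argmin(axis=0)
              V = V_new

      # Toy 3-state buffer model, two actions (0 = hold packets, 1 = transmit on grid energy).
      P = np.array([
          [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],   # hold: buffer tends to fill
          [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.3, 0.6, 0.1]],   # transmit: buffer drains
      ])
      # Cost = weighted sum of constant-energy use and packet-drop risk (illustrative numbers).
      C = np.array([[0.0, 0.2, 1.0],
                    [0.5, 0.5, 0.6]])
      V, policy = value_iteration(P, C)
      print("optimal action per state:", policy)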

  5. An alternative respiratory sounds classification system utilizing artificial neural networks

    Directory of Open Access Journals (Sweden)

    Rami J Oweis

    2015-04-01

    Full Text Available Background: Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. Methods: This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFIS) toolboxes. The methods have been applied to 10 different respiratory sounds for classification. Results: The ANN was superior to the ANFIS system and returned superior performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. Conclusions: The promising proposed method is an efficient fast tool for the intended purpose as manifested in the performance parameters, specifically, accuracy, specificity, and sensitivity. Furthermore, it may be added that utilizing the autocorrelation function in the feature extraction in such applications results in enhanced performance and avoids undesired computation complexities compared to other techniques.
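
    A rough Python sketch of the pipeline outlined above: autocorrelation-based feature extraction followed by a small neural-network classifier. It uses scikit-learn instead of the MATLAB ANN/ANFIS toolboxes, and synthetic signals stand in for the recorded respiratory sounds, so the numbers it prints are not comparable to the reported accuracy.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      def autocorr_features(signal, n_lags=20):
          """Normalized autocorrelation at the first n_lags lags, used as a feature vector."""
          s = signal - signal.mean()
          acf = np.correlate(s, s, mode="full")[len(s) - 1:]
          return acf[1:n_lags + 1] / acf[0]

      # Synthetic stand-ins for two classes of lung sounds (broadband noise vs. tonal component).
      rng = np.random.default_rng(0)
      X, y = [], []
      t = np.arange(2000) / 8000.0
      for _ in range(200):
          normal = rng.normal(0, 1, t.size)
          wheeze = np.sin(2 * np.pi * 400 * t) + 0.5 * rng.normal(0, 1, t.size)
          X.append(autocorr_features(normal)); y.append(0)
          X.append(autocorr_features(wheeze)); y.append(1)

      X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X_train, y_train)
      print("test accuracy:", clf.score(X_test, y_test))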

  6. Large-Scale Analysis of Network Bistability for Human Cancers

    Science.gov (United States)

    Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki

    2010-01-01

    Protein–protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618

  7. Aggregated Representation of Distribution Networks for Large-Scale Transmission Network Simulations

    DEFF Research Database (Denmark)

    Göksu, Ömer; Altin, Müfit; Sørensen, Poul Ejnar

    2014-01-01

    As a common practice of large-scale transmission network analysis the distribution networks have been represented as aggregated loads. However, with the increasing share of distributed generation, especially wind and solar power, in the distribution networks, it has become necessary to include the distributed generation within those analyses. In this paper a practical methodology to obtain the aggregated behaviour of the distributed generation is proposed. The methodology, which is based on the use of the IEC standard wind turbine models, is applied on a benchmark distribution network via simulations.

  8. Multiscale analysis of spreading in a large communication network

    International Nuclear Information System (INIS)

    Kivelä, Mikko; Pan, Raj Kumar; Kaski, Kimmo; Kertész, János; Saramäki, Jari; Karsai, Márton

    2012-01-01

    In temporal networks, both the topology of the underlying network and the timings of interaction events can be crucial in determining how a dynamic process mediated by the network unfolds. We have explored the limiting case of the speed of spreading in the SI model, set up such that an event between an infectious and a susceptible individual always transmits the infection. The speed of this process sets an upper bound for the speed of any dynamic process that is mediated through the interaction events of the network. With the help of temporal networks derived from large-scale time-stamped data on mobile phone calls, we extend earlier results that indicate the slowing-down effects of burstiness and temporal inhomogeneities. In such networks, links are not permanently active, but dynamic processes are mediated by recurrent events taking place on the links at specific points in time. We perform a multiscale analysis and pinpoint the importance of the timings of event sequences on individual links, their correlations with neighboring sequences, and the temporal pathways taken by the network-scale spreading process. This is achieved by studying empirically and analytically different characteristic relay times of links, relevant to the respective scales, and a set of temporal reference models that allow for removing selected time-domain correlations one by one. Our analysis shows that for the spreading velocity, time-domain inhomogeneities are as important as the network topology, which indicates the need to take time-domain information into account when studying spreading dynamics. In particular, results for the different characteristic relay times underline the importance of the burstiness of individual links
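
    The limiting case described in this record (every contact between an infectious and a susceptible node transmits) can be computed directly from a time-stamped event list. Below is a small Python sketch; the toy call sequence is invented for illustration and does not represent the mobile-phone dataset used in the study.

      def si_spread(events, seed_node, start_time=0):
          """Deterministic SI spreading over a time-stamped event list.

          events: iterable of (t, u, v) contact events.
          Returns a dict mapping node -> infection time.
          """
          infection_time = {seed_node: start_time}
          for t, u, v in sorted(events):                  # process events in time order
              if t < start_time:
                  continue
              u_inf = infection_time.get(u, float("inf")) <= t
              v_inf = infection_time.get(v, float("inf")) <= t
              if u_inf and not v_inf:
                  infection_time[v] = t
              elif v_inf and not u_inf:
                  infection_time[u] = t
          return infection_time

      # Toy call sequence: (timestamp, caller, callee)
      events = [(1, "a", "b"), (2, "b", "c"), (3, "c", "d"), (4, "a", "d"), (5, "d", "e")]
      print(si_spread(events, seed_node="a"))   # {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 5}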

  9. Complex modular structure of large-scale brain networks

    Science.gov (United States)

    Valencia, M.; Pastor, M. A.; Fernández-Seara, M. A.; Artieda, J.; Martinerie, J.; Chavez, M.

    2009-06-01

    Modular structure is ubiquitous among real-world networks from related proteins to social groups. Here we analyze the modular organization of brain networks at a large scale (voxel level) extracted from functional magnetic resonance imaging signals. By using a random-walk-based method, we unveil the modularity of brain webs and show modules with a spatial distribution that matches anatomical structures with functional significance. The functional role of each node in the network is studied by analyzing its patterns of inter- and intramodular connections. Results suggest that the modular architecture constitutes the structural basis for the coexistence of functional integration of distant and specialized brain areas during normal brain activities at rest.

  10. Information-Theoretic Inference of Large Transcriptional Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Meyer Patrick

    2007-01-01

    Full Text Available The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR, an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists in selecting among the least redundant variables the ones that have the highest mutual information with the target. MRNET extends this feature selection principle to networks in order to infer gene-dependence relationships from microarray data. The paper assesses MRNET by benchmarking it against RELNET, CLR, and ARACNE, three state-of-the-art information-theoretic methods for large (up to several thousands of genes network inference. Experimental results on thirty synthetically generated microarray datasets show that MRNET is competitive with these methods.
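
    To make the MRMR idea behind MRNET concrete, the following Python sketch greedily selects, for each target gene, the candidate genes that maximize mutual information with the target while minimizing redundancy with genes already selected, and keeps the scores as edge weights. It is a simplified reading of the method (fixed-width discretization, a tiny synthetic expression matrix), not the authors' implementation.

      import numpy as np
      from sklearn.metrics import mutual_info_score

      def discretize(x, bins=8):
          return np.digitize(x, np.histogram_bin_edges(x, bins=bins)[1:-1])

      def mrnet_sketch(expr, n_select=3, bins=8):
          """MRMR-style network inference: expr is a (samples, genes) expression matrix."""
          n_genes = expr.shape[1]
          disc = np.column_stack([discretize(expr[:, g], bins) for g in range(n_genes)])
          mi = np.zeros((n_genes, n_genes))
          for i in range(n_genes):
              for j in range(i + 1, n_genes):
                  mi[i, j] = mi[j, i] = mutual_info_score(disc[:, i], disc[:, j])

          W = np.zeros((n_genes, n_genes))
          for target in range(n_genes):
              selected = []
              candidates = [g for g in range(n_genes) if g != target]
              for _ in range(min(n_select, len(candidates))):
                  # MRMR score: relevance to the target minus mean redundancy with selected genes
                  score = {g: mi[g, target] -
                              (np.mean([mi[g, s] for s in selected]) if selected else 0.0)
                           for g in candidates}
                  best = max(score, key=score.get)
                  W[target, best] = max(score[best], 0.0)
                  selected.append(best)
                  candidates.remove(best)
          return np.maximum(W, W.T)      # keep the larger score of the two directions

      rng = np.random.default_rng(0)
      x = rng.normal(size=200)
      expr = np.column_stack([x, x + 0.1 * rng.normal(size=200),
                              rng.normal(size=200), -x + 0.2 * rng.normal(size=200)])
      print(np.round(mrnet_sketch(expr), 2))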

  11. Information-Theoretic Inference of Large Transcriptional Regulatory Networks

    Directory of Open Access Journals (Sweden)

    Patrick E. Meyer

    2007-06-01

    Full Text Available The paper presents MRNET, an original method for inferring genetic networks from microarray data. The method is based on maximum relevance/minimum redundancy (MRMR, an effective information-theoretic technique for feature selection in supervised learning. The MRMR principle consists in selecting among the least redundant variables the ones that have the highest mutual information with the target. MRNET extends this feature selection principle to networks in order to infer gene-dependence relationships from microarray data. The paper assesses MRNET by benchmarking it against RELNET, CLR, and ARACNE, three state-of-the-art information-theoretic methods for large (up to several thousands of genes network inference. Experimental results on thirty synthetically generated microarray datasets show that MRNET is competitive with these methods.

  12. A large fiber sensor network for an acoustic neutrino telescope

    Directory of Open Access Journals (Sweden)

    Buis Ernst-Jan

    2017-01-01

    Full Text Available The scientific prospects of detecting neutrinos with an energy close to or even higher than the GZK cut-off energy have been discussed extensively in the literature. It is clear that due to their expected low flux, the detection of these ultra-high energy neutrinos (Eν > 10^18 eV) requires a telescope larger than 100 km^3. Acoustic detection may provide a way to observe these ultra-high energy cosmic neutrinos, as the sound they induce in the deep sea when losing their energy travels undisturbed for many kilometers. To realize a large scale acoustic neutrino telescope, dedicated technology must be developed that allows for a deep sea sensor network. Fiber optic hydrophone technology provides a promising means to establish a large scale sensor network [1] with the proper sensitivity to detect the small signals from the neutrino interactions.

  13. Locating inefficient links in a large-scale transportation network

    Science.gov (United States)

    Sun, Li; Liu, Like; Xu, Zhongzhi; Jie, Yang; Wei, Dong; Wang, Pu

    2015-02-01

    Based on data from a geographical information system (GIS) and daily commuting origin-destination (OD) matrices, we estimated the distribution of traffic flow in the San Francisco road network and studied Braess's paradox in a large-scale transportation network with realistic travel demand. We measured the variation of total travel time ΔT when a road segment is closed, and found that |ΔT| follows a power-law distribution whether ΔT > 0 or ΔT < 0. This implies that most roads have a negligible effect on the efficiency of the road network, while the failure of a few crucial links would result in severe travel delays, and closure of a few inefficient links would counter-intuitively reduce travel costs considerably. Generating three theoretical networks, we discovered that the heterogeneously distributed travel demand may be the origin of the observed power-law distributions of |ΔT|. Finally, a genetic algorithm was used to pinpoint inefficient link clusters in the road network. We found that closing specific road clusters would further improve the transportation efficiency.
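
    A heavily simplified Python illustration of measuring ΔT when links are closed: route each origin-destination demand on its free-flow shortest path (all-or-nothing assignment), evaluate congested travel times with a BPR-type function, and compare the total against the value obtained after removing one edge. The toy graph, demands and the assignment rule are assumptions for illustration; the study itself uses the San Francisco network and a proper traffic-assignment procedure.

      import networkx as nx

      def bpr_time(t0, flow, capacity, alpha=0.15, beta=4):
          """BPR link travel-time function."""
          return t0 * (1 + alpha * (flow / capacity) ** beta)

      def total_travel_time(G, od_demand):
          """All-or-nothing assignment on free-flow shortest paths, then congested evaluation."""
          flows = {e: 0.0 for e in G.edges}
          for (o, d), demand in od_demand.items():
              try:
                  path = nx.shortest_path(G, o, d, weight="t0")
              except nx.NetworkXNoPath:
                  return float("inf")
              for e in zip(path, path[1:]):
                  flows[e] += demand
          return sum(flows[e] * bpr_time(G.edges[e]["t0"], flows[e], G.edges[e]["cap"])
                     for e in G.edges if flows[e] > 0)

      G = nx.DiGraph()
      for u, v, t0, cap in [("A", "B", 10, 100), ("B", "D", 10, 100),
                            ("A", "C", 12, 200), ("C", "D", 12, 200), ("B", "C", 1, 50)]:
          G.add_edge(u, v, t0=t0, cap=cap)
      od = {("A", "D"): 150}

      T_base = total_travel_time(G, od)
      for e in list(G.edges):
          H = G.copy()
          H.remove_edge(*e)
          print(f"closing {e}: delta T = {total_travel_time(H, od) - T_base:.1f}")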

  14. A large number of stepping motor network construction by PLC

    Science.gov (United States)

    Mei, Lin; Zhang, Kai; Hongqiang, Guo

    2017-11-01

    In a flexible automated production line the equipment is complex and the control modes are varied, so achieving orderly control and information interaction among a large number of stepper and servo motors is difficult. Based on an existing flexible production line, this paper presents a comparative study of its network strategy. Following this research, an Ethernet + PROFIBUS communication configuration based on PROFINET IO and PROFIBUS is proposed, which can effectively improve the data interaction efficiency of the equipment and stabilize the exchanged data.

  15. IFC to CityGML Transformation Framework for Geo-Analysis : A Water Utility Network Case

    NARCIS (Netherlands)

    Hijazi, I.; Ehlers, M.; Zlatanova, S.; Isikdag, U.

    2009-01-01

    The development of semantic 3D city models has allowed for new approaches to town planning and urban management (Benner et al. 2005) such as emergency and catastrophe planning, checking building developments, and utility networks. Utility networks inside buildings are composed of pipes and cables

  16. State of the art of the virtual utility: the smart distributed generation network

    International Nuclear Information System (INIS)

    Coll-Mayor, D.; Picos, R.; Garcia-Moreno, E.

    2004-01-01

    The world of energy has lately experienced a revolution, and new rules are being defined. The climate change produced by the greenhouse gases, the inefficiency of the energy system or the lack of power supply infrastructure in most of the poor countries, the liberalization of the energy market and the development of new technologies in the field of distributed generation (DG) are the key factors of this revolution. It seems clear that the solution at the moment is the DG. The advantage of DG is the energy generation close to the demand point. It means that DG can lower costs, reduce emissions, or expand the energy options of the consumers. DG may add redundancy that increases grid security even while powering emergency lighting or other critical systems and reduces power losses in the electricity distribution. After the development of the different DG and high efficiency technologies such as co-generation and tri-generation, the next step in the DG world is the interconnection of different small distributed generation facilities which act together in a DG network as a large power plant controlled by a centralized energy management system (EMS). The main aim of the EMS is to reach the targets of low emissions and high efficiency. The EMS gives priority to renewable energy sources instead of the use of fossil fuels. This new concept of energy infrastructure is referred to as virtual utility (VU). The VU can be defined as a new model of energy infrastructure which consists of integrating different kind of distributed generation utilities in an energy (electricity and heat) generation network controlled by a central energy management system (EMS). The electricity production in the network is subordinated to the heat necessity of every user. The thermal energy is consumed on site; the electricity is generated and distributed in the entire network. The network is composed of one centralized control with the EMS and different clusters of distributed generation utilities

  17. The utilization of social networking as promotion media (Case study: Handicraft business in Palembang)

    OpenAIRE

    Rahadi, Dedi Rianto; Abdillah, Leon Andretti

    2013-01-01

    Nowadays social media (Twitter, Facebook, etc.) serve not only as communication media, but also for promotion. Social networking media offer many business benefits for companies and organizations. The research purpose is to determine the model of social network media utilization as a promotional medium for the handicraft business in Palembang city. Qualitative and quantitative research designs are used to understand how handicraft businesses in Palembang city utilize social media networking as a promotio...

  18. Empirical Models of Social Learning in a Large, Evolving Network.

    Directory of Open Access Journals (Sweden)

    Ayşe Başar Bener

    Full Text Available This paper advances theories of social learning through an empirical examination of how social networks change over time. Social networks are important for learning because they constrain individuals' access to information about the behaviors and cognitions of other people. Using data on a large social network of mobile device users over a one-month time period, we test three hypotheses: (1) attraction homophily causes individuals to form ties on the basis of attribute similarity, (2) aversion homophily causes individuals to delete existing ties on the basis of attribute dissimilarity, and (3) social influence causes individuals to adopt the attributes of others they share direct ties with. Statistical models offer varied degrees of support for all three hypotheses and show that these mechanisms are more complex than assumed in prior work. Although homophily is normally thought of as a process of attraction, people also avoid relationships with others who are different. These mechanisms have distinct effects on network structure. While social influence does help explain behavior, people tend to follow global trends more than they follow their friends.

  19. Utility-Based Link Recommendation in Social Networks

    Science.gov (United States)

    Li, Zhepeng

    2013-01-01

    Link recommendation, which suggests links to connect currently unlinked users, is a key functionality offered by major online social networking platforms. Salient examples of link recommendation include "people you may know"' on Facebook and "who to follow" on Twitter. A social networking platform has two types of stakeholder:…

  20. IR wireless cluster synapses of HYDRA very large neural networks

    Science.gov (United States)

    Jannson, Tomasz; Forrester, Thomas

    2008-04-01

    RF/IR wireless (virtual) synapses are critical components of HYDRA (Hyper-Distributed Robotic Autonomy) neural networks, already discussed in two earlier papers. The HYDRA network has the potential to be very large, up to 10^11 neurons and 10^18 synapses, based on already established technologies (cellular RF telephony and IR-wireless LANs). It is organized into almost fully connected IR-wireless clusters. The HYDRA neurons and synapses are very flexible, simple, and low-cost. They can be modified into a broad variety of biologically-inspired brain-like computing capabilities. In this third paper, we focus on neural hardware in general, and on IR-wireless synapses in particular. Such synapses, based on LED/LD-connections, dominate the HYDRA neural cluster.

  1. Measuring large-scale social networks with high resolution.

    Directory of Open Access Journals (Sweden)

    Arkadiusz Stopczynski

    Full Text Available This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years: the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of a multi-channel, high-resolution approach to data collection.

  2. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram; Kammoun, Abla; Alouini, Mohamed-Slim

    2017-01-01

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is on the design of coordinated beamforming techniques that allow to mitigate the inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performances. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.
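
    The classic closed-form SLNR beamformer with a tunable regularization factor can be sketched in a few lines of NumPy, which may help make the precoding design discussed in this record concrete. The Gaussian channels, the regularization value and the noise power used below are arbitrary assumptions, not the design rule derived in the paper.

      import numpy as np

      def slnr_precoders(H, alpha):
          """Regularized SLNR beamformers.

          H: (K, M) channel matrix, row k = channel of user k (M BS antennas).
          alpha: regularization factor (a free tuning knob here).
          Returns an (M, K) matrix of unit-norm precoding vectors.
          """
          K, M = H.shape
          W = np.zeros((M, K), dtype=complex)
          for k in range(K):
              H_others = np.delete(H, k, axis=0)
              A = H_others.conj().T @ H_others + alpha * np.eye(M)
              w = np.linalg.solve(A, H[k].conj())     # max-SLNR direction for a rank-one numerator
              W[:, k] = w / np.linalg.norm(w)
          return W

      rng = np.random.default_rng(0)
      K, M, noise = 4, 8, 0.1
      H = (rng.normal(size=(K, M)) + 1j * rng.normal(size=(K, M))) / np.sqrt(2)
      W = slnr_precoders(H, alpha=noise)
      G = np.abs(H @ W) ** 2                          # G[j, k] = power of user k's beam at user j
      signal = np.diag(G)
      leakage = G.sum(axis=0) - signal
      print("per-user SLNR:", signal / (leakage + noise))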

  3. Coordinated SLNR based Precoding in Large-Scale Heterogeneous Networks

    KAUST Repository

    Boukhedimi, Ikram

    2017-03-06

    This work focuses on the downlink of large-scale two-tier heterogeneous networks composed of a macro-cell overlaid by micro-cell networks. Our interest is on the design of coordinated beamforming techniques that allow to mitigate the inter-cell interference. Particularly, we consider the case in which the coordinating base stations (BSs) have imperfect knowledge of the channel state information. Under this setting, we propose a regularized SLNR based precoding design in which the regularization factor is used to allow better resilience with respect to the channel estimation errors. Based on tools from random matrix theory, we provide an analytical analysis of the SINR and SLNR performances. These results are then exploited to propose a proper setting of the regularization factor. Simulation results are finally provided in order to validate our findings and to confirm the performance of the proposed precoding scheme.

  4. Foundational perspectives on causality in large-scale brain networks

    Science.gov (United States)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  5. Revisiting Social Network Utilization by Physicians-in-Training.

    Science.gov (United States)

    Black, Erik W; Thompson, Lindsay A; Duff, W Patrick; Dawson, Kara; Saliba, Heidi; Black, Nicole M Paradise

    2010-06-01

    To measure and compare the frequency and content of online social networking among 2 cohorts of medical students and residents (2007 and 2009). Using the online social networking application Facebook, we evaluated social networking profiles for 2 cohorts of medical students (n = 528) and residents (n = 712) at the University of Florida in Gainesville. Objective measures included existence of a profile, whether it was made private, and whether any personally identifiable information was included. Subjective outcomes included photographic content, affiliated social groups, and personal information not generally disclosed in a doctor-patient encounter. We compared our results to our previously published and reported data from 2007. Social networking continues to be common amongst physicians-in-training, with 39.8% of residents and 69.5% of medical students maintaining Facebook accounts. Residents' participation significantly increased relative to 2007, as did the use of privacy settings; concerns remain regarding privacy and the expansive and impersonal networks of online "friends" who may view profiles.

  6. Self-Optimization of LTE Networks Utilizing Celnet Xplorer

    CERN Document Server

    Buvaneswari, A; Polakos, Paul; Buvaneswari, Arumugam

    2010-01-01

    In order to meet demanding performance objectives in Long Term Evolution (LTE) networks, it is mandatory to implement highly efficient, autonomic self-optimization and configuration processes. Self-optimization processes have already been studied in second generation (2G) and third generation (3G) networks, typically with the objective of improving radio coverage and channel capacity. The 3rd Generation Partnership Project (3GPP) standard for LTE self-organization of networks (SON) provides guidelines on self-configuration of physical cell ID and neighbor relation function and self-optimization for mobility robustness, load balancing, and inter-cell interference reduction. While these are very important from an optimization perspective of local phenomenon (i.e., the eNodeB's interaction with its neighbors), it is also essential to architect control algorithms to optimize the network as a whole. In this paper, we propose a Celnet Xplorer-based SON architecture that allows detailed analysis of network performan...

  7. Large scale network management. Condition indicators for network stations, high voltage power conductions and cables

    International Nuclear Information System (INIS)

    Eggen, Arnt Ove; Rolfseng, Lars; Langdal, Bjoern Inge

    2006-02-01

    In the Strategic Institute Programme (SIP) 'Electricity Business enters e-business (eBee)', SINTEF Energy Research has developed competency that can help the energy business employ ICT systems and computer technology in an improved way. Large scale network management is now a reality, and it is characterized by large entities with increasing demands on efficiency and quality. These are goals that can only be reached by using ICT systems and computer technology in a more clever way than is the case today. At the same time it is important that the knowledge held by experienced co-workers is consulted when formal rules for evaluations and decisions in ICT systems are developed. In this project an analytical concept has been developed for evaluating networks based on information held in different ICT systems. The method for estimating the indicators that describe different conditions in a network is general, and indicators can be made to fit different decision levels and network levels, for example network station, transformer circuit, distribution network and regional network. Moreover, the indicators can contain information about technical aspects, economy and HSE. An indicator consists of an indicator name, an indicator value, and an indicator colour based on a traffic-light analogy to indicate a condition or a quality for the indicator. Values of one or more indicators give an impression of important conditions in the network, and form the basis for deciding where more detailed evaluations have to be conducted before a final decision on, for example, maintenance or renewal is made. A prototype has been developed for testing the new method. The prototype has been developed in Excel, and is especially designed for analysing transformer circuits in a distribution network. However, the method is a general one, and well suited for implementation in a commercial computer system. (ml)

  8. Temperature dependence of the multistability of lactose utilization network of Escherichia coli

    Science.gov (United States)

    Nepal, Sudip; Kumar, Pradeep

    Biological systems are capable of producing multiple states out of a single set of inputs. Multistability acts like a biological switch that allows organisms to respond differently to different environmental conditions and hence plays an important role in adaptation to a changing environment. One of the widely studied gene regulatory networks underlying the metabolism of bacteria is the lactose utilization network, which exhibits a multistable behavior as a function of lactose concentration. We have studied the effect of temperature on multistability of the lactose utilization network at various concentrations of thio-methylgalactoside (TMG), a synthetic lactose. We find that while the lactose utilization network exhibits a bistable behavior for temperature T > 20 °C, a graded response arises for temperature T < 20 °C. We further characterize the response of the lactose utilization network as a function of temperature and TMG concentration. Our results suggest that environmental conditions, in this case temperature, can alter the nature of cellular regulation of metabolism.
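
    A crude deterministic sketch of how bistable versus graded behaviour can be probed numerically: integrate a generic positive-feedback (Hill-type) model of the lac circuit from a low and a high initial condition and check whether the two steady states differ. The parameters are textbook-style placeholders, not values fitted to the E. coli data, and the feedback strength beta is only a stand-in for the effect of temperature.

      import numpy as np

      def steady_state(tmg, beta, n=2, k=1.0, gamma=1.0, y0=0.0, dt=0.01, steps=20000):
          """Euler-integrate dy/dt = basal(tmg) + beta*y^n/(k^n + y^n) - gamma*y to steady state."""
          y, basal = y0, 0.05 * tmg
          for _ in range(steps):
              y += dt * (basal + beta * y**n / (k**n + y**n) - gamma * y)
          return y

      for beta in (2.0, 0.5):        # strong vs. weak positive feedback (temperature proxy)
          lows = [steady_state(tmg, beta, y0=0.0) for tmg in np.linspace(0.0, 1.0, 11)]
          highs = [steady_state(tmg, beta, y0=5.0) for tmg in np.linspace(0.0, 1.0, 11)]
          bistable = any(abs(h - l) > 0.1 for h, l in zip(highs, lows))
          print(f"beta = {beta}: {'bistable (switch-like)' if bistable else 'graded response'}")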

  9. Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther H.; Sergui, John S.

    2013-01-01

    A large-scale network that supports a large number of users can have an aggregate data rate of hundreds of Mbps at any time. High-fidelity simulation of a large-scale network might be too complicated and memory-intensive for typical commercial-off-the-shelf (COTS) tools. Unlike a large commercial wide-area-network (WAN) that shares diverse network resources among diverse users and has a complex topology that requires routing mechanism and flow control, the ground communication links of a space network operate under the assumption of a guaranteed dedicated bandwidth allocation between specific sparse endpoints in a star-like topology. This work solved the network design problem of estimating the bandwidths of a ground network architecture option that offer different service classes to meet the latency requirements of different user data types. In this work, a top-down analysis and simulation approach was created to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. These techniques were used to estimate the WAN bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network. A new analytical approach, called the "leveling scheme," was developed to model the store-and-forward mechanism of the network data flow. The term "leveling" refers to the spreading of data across a longer time horizon without violating the corresponding latency requirement of the data type. Two versions of the leveling scheme were developed: 1. A straightforward version that simply spreads the data of each data type across the time horizon and doesn't take into account the interactions among data types within a pass, or between data types across overlapping passes at a network node, and is inherently sub-optimal. 2. Two-state Markov leveling scheme that takes into account the second order behavior of
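
    The "straightforward" leveling version described above can be sketched in a few lines: spread each generated data volume uniformly over its latency window and read the required bandwidth off the peak of the summed per-slot rates. The traffic numbers below are invented for illustration, and the sketch ignores the cross-data-type and cross-pass interactions that the two-state Markov version addresses.

      def leveled_bandwidth(traffic, horizon):
          """Straightforward leveling: spread each data volume over its latency window.

          traffic: list of (t_generated, volume_bits, latency_slots) tuples.
          horizon: number of time slots to evaluate.
          Returns the per-slot rate profile; its maximum sizes the link bandwidth.
          """
          rate = [0.0] * horizon
          for t0, volume, latency in traffic:
              slots = range(t0, min(t0 + latency, horizon))
              per_slot = volume / max(len(slots), 1)
              for t in slots:
                  rate[t] += per_slot
          return rate

      # Two data types on overlapping passes: bulk science data (relaxed latency)
      # and housekeeping telemetry (tight latency).
      traffic = [(0, 8e9, 8),    # 8 Gb of science data, deliverable within 8 slots
                 (2, 1e9, 1),    # 1 Gb of telemetry, must go out in the next slot
                 (5, 8e9, 8)]
      profile = leveled_bandwidth(traffic, horizon=16)
      print("required Gb per slot:", [round(r / 1e9, 2) for r in profile])
      print("link bandwidth to provision:", round(max(profile) / 1e9, 2), "Gb/slot")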

  10. A document preparation system in a large network environment

    Energy Technology Data Exchange (ETDEWEB)

    Vigil, M.; Bouchier, S.; Sanders, C.; Sydoriak, S.; Wheeler, K.

    1988-01-01

    At Los Alamos National Laboratory, we have developed an integrated document preparation system that produces publication-quality documents. This system combines text formatters and computer graphics capabilities that have been adapted to meet the needs of users in a large scientific research laboratory. This paper describes the integration of document processing technology to develop a system architecture, based on a page description language, to provide network-wide capabilities in a distributed computing environment. We describe the Laboratory requirements, the integration and implementation issues, and the challenges we faced developing this system.

  11. Developing A Large-Scale, Collaborative, Productive Geoscience Education Network

    Science.gov (United States)

    Manduca, C. A.; Bralower, T. J.; Egger, A. E.; Fox, S.; Ledley, T. S.; Macdonald, H.; Mcconnell, D. A.; Mogk, D. W.; Tewksbury, B. J.

    2012-12-01

    Over the past 15 years, the geoscience education community has grown substantially and developed broad and deep capacity for collaboration and dissemination of ideas. While this community is best viewed as emergent from complex interactions among changing educational needs and opportunities, we highlight the role of several large projects in the development of a network within this community. In the 1990s, three NSF projects came together to build a robust web infrastructure to support the production and dissemination of on-line resources: On The Cutting Edge (OTCE), Earth Exploration Toolbook, and Starting Point: Teaching Introductory Geoscience. Along with the contemporaneous Digital Library for Earth System Education, these projects engaged geoscience educators nationwide in exploring professional development experiences that produced lasting on-line resources, collaborative authoring of resources, and models for web-based support for geoscience teaching. As a result, a culture developed in the 2000s in which geoscience educators anticipated that resources for geoscience teaching would be shared broadly and that collaborative authoring would be productive and engaging. By this time, a diverse set of examples demonstrated the power of the web infrastructure in supporting collaboration, dissemination and professional development . Building on this foundation, more recent work has expanded both the size of the network and the scope of its work. Many large research projects initiated collaborations to disseminate resources supporting educational use of their data. Research results from the rapidly expanding geoscience education research community were integrated into the Pedagogies in Action website and OTCE. Projects engaged faculty across the nation in large-scale data collection and educational research. The Climate Literacy and Energy Awareness Network and OTCE engaged community members in reviewing the expanding body of on-line resources. Building Strong

  12. Review of Recommender Systems Algorithms Utilized in Social Networks based e-Learning Systems & Neutrosophic System

    Directory of Open Access Journals (Sweden)

    A. A. Salama

    2015-03-01

    Full Text Available In this paper, we present a review of different recommender system algorithms that are utilized in social networks based e-Learning systems. Future research will include our proposed e-Learning system that utilizes a recommender system and a social network. Since the world is full of indeterminacy, the neutrosophics found their place in contemporary research. The fundamental concepts of the neutrosophic set were introduced by Smarandache in [21, 22, 23] and Salama et al. in [24-66]. The purpose of this paper is to utilize a neutrosophic set to analyze social network data collected through learning activities.

  13. Incentive Regulation and Utility Benchmarking for Electricity Network Security

    OpenAIRE

    Zhang, Y.; Nepal, R.

    2014-01-01

    The incentive regulation of costs related to physical and cyber security in electricity networks is an important but relatively unexplored and ambiguous issue. These costs can be part of cost efficiency benchmarking or, alternatively, dealt with separately. This paper discusses the issues and proposes options for incorporating network security costs within incentive regulation in a benchmarking framework. The relevant concerns and limitations associated with the accounting and classification ...

  14. Utilization of Selected Data Mining Methods for Communication Network Analysis

    Directory of Open Access Journals (Sweden)

    V. Ondryhal

    2011-06-01

    Full Text Available The aim of the project was to analyze the behavior of military communication networks based on work with real data collected continuously since 2005. With regard to the nature and amount of the data, data mining methods were selected for the purpose of analyses and experiments. The quality of real data is often insufficient for an immediate analysis. The article presents the data cleaning operations which have been carried out with the aim to improve the input data sample to obtain reliable models. Gradually, by means of properly chosen SW, network models were developed to verify generally valid patterns of network behavior as a bulk service. Furthermore, unlike the commercially available communication networks simulators, the models designed allowed us to capture nonstandard models of network behavior under an increased load, verify the correct sizing of the network to the increased load, and thus test its reliability. Finally, based on previous experience, the models enabled us to predict emergency situations with a reasonable accuracy.

  15. Large deep neural networks for MS lesion segmentation

    Science.gov (United States)

    Prieto, Juan C.; Cavallari, Michele; Palotai, Miklos; Morales Pinzon, Alfredo; Egorova, Svetlana; Styner, Martin; Guttmann, Charles R. G.

    2017-02-01

    Multiple sclerosis (MS) is a multi-factorial autoimmune disorder, characterized by spatial and temporal dissemination of brain lesions that are visible in T2-weighted and Proton Density (PD) MRI. Assessment of lesion burden is useful for monitoring the course of the disease and assessing correlates of clinical outcomes. Although there are established semi-automated methods to measure lesion volume, most of them require human interaction and editing, which are time consuming and limit the ability to analyze large sets of data with high accuracy. The primary objective of this work is to improve existing segmentation algorithms and accelerate the time consuming operation of identifying and validating MS lesions. In this paper, a Deep Neural Network for MS Lesion Segmentation is implemented. The MS lesion samples are extracted from the Partners Comprehensive Longitudinal Investigation of Multiple Sclerosis (CLIMB) study. A set of 900 subjects with T2, PD and manually corrected label map images was used to train a Deep Neural Network and identify MS lesions. Initial tests using this network achieved a 90% accuracy rate. A secondary goal was to enable this data repository for big data analysis by using this algorithm to segment the remaining cases available in the CLIMB repository.

  16. Computational study of noise in a large signal transduction network

    Directory of Open Access Journals (Sweden)

    Ruohonen Keijo

    2011-06-01

    Full Text Available Background: Biochemical systems are inherently noisy due to the discrete reaction events that occur in a random manner. Although noise is often perceived as a disturbing factor, the system might actually benefit from it. In order to understand the role of noise better, its quality must be studied in a quantitative manner. Computational analysis and modeling play an essential role in this demanding endeavor. Results: We implemented a large nonlinear signal transduction network combining protein kinase C, mitogen-activated protein kinase, phospholipase A2, and β isoform of phospholipase C networks. We simulated the network in 300 different cellular volumes using the exact Gillespie stochastic simulation algorithm and analyzed the results in both the time and frequency domain. In order to perform simulations in a reasonable time, we used modern parallel computing techniques. The analysis revealed that time and frequency domain characteristics depend on the system volume. The simulation results also indicated that there are several kinds of noise processes in the network, all of them representing different kinds of low-frequency fluctuations. In the simulations, the power of noise decreased on all frequencies when the system volume was increased. Conclusions: We concluded that basic frequency domain techniques can be applied to the analysis of simulation results produced by the Gillespie stochastic simulation algorithm. This approach is suited not only to the study of fluctuations but also to the study of pure noise processes. Noise seems to have an important role in biochemical systems and its properties can be numerically studied by simulating the reacting system in different cellular volumes. Parallel computing techniques make it possible to run massive simulations in hundreds of volumes and, as a result, accurate statistics can be obtained from computational studies.
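
    For readers unfamiliar with the exact Gillespie stochastic simulation algorithm mentioned here, the following Python sketch applies its direct method to a toy birth-death process and shows how noise statistics shrink as the system volume grows. It is not the signal-transduction network of the record; reaction rates and volumes are illustrative.

      import numpy as np

      def gillespie(volume, t_end=200.0, k_prod=10.0, k_deg=0.1, seed=0):
          """Gillespie direct method for 0 -> X (rate k_prod*volume) and X -> 0 (rate k_deg*X)."""
          rng = np.random.default_rng(seed)
          t, x = 0.0, 0
          times, counts = [0.0], [0]
          while t < t_end:
              a1 = k_prod * volume            # propensity of production
              a2 = k_deg * x                  # propensity of degradation
              a0 = a1 + a2
              t += rng.exponential(1.0 / a0)  # waiting time to the next reaction
              if rng.random() < a1 / a0:
                  x += 1
              else:
                  x -= 1
              times.append(t)
              counts.append(x)
          return np.array(times), np.array(counts)

      for vol in (1, 10, 100):
          _, counts = gillespie(vol)
          tail = counts[len(counts) // 2:]    # discard the initial transient
          print(f"volume {vol:4d}: mean = {tail.mean():8.1f}, CV = {tail.std() / tail.mean():.3f}")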

  17. Large Scale Experiments of Multihop Networks in Mobile Scenarios

    Directory of Open Access Journals (Sweden)

    Yacine Benchaïb

    2016-03-01

    Full Text Available This paper presents the latest advances in our research work focused on VIRMANEL and SILUMOD, a couple of tools developed for research in wireless mobile multihop networks. SILUMOD is a domain specific language dedicated to the definition of mobility models. This language contains keywords and special operators that make it easy to define a mobility model and calculate the positions of a trajectory. These positions are sent to VIRMANEL, a tool that manages virtual machines corresponding to mobile nodes, emulates their movements and the resulting connections and disconnections, and displays the network evolution to the user, thanks to its graphical user interface. The virtualization approach we take here allows us to run real code and to test real protocol implementations without deploying an important experimental platform. For the experimentation of a large number of virtual mobile nodes, we defined and implemented a new algorithm for the nearest neighbor search to find the nodes that are within communication range. We then carried out a considerable measurement campaign in order to evaluate the performance of this algorithm. The results show that even with an experiment using a large number of mobile nodes, our algorithm makes it possible to evaluate the state of connectivity between mobile nodes within a reasonable time and number of operations.
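
    The record does not spell out the nearest-neighbor algorithm, so the following Python sketch shows one common way to do such a range search efficiently: a uniform grid (cell list) whose cell size equals the communication range, so each node is compared only against nodes in its own and adjacent cells. Positions and range are toy values, and this is only an illustrative variant, not the VIRMANEL implementation.

      from collections import defaultdict
      from itertools import product

      def nodes_in_range(positions, comm_range):
          """Return pairs of node ids whose distance is at most comm_range."""
          grid = defaultdict(list)
          for node, (x, y) in positions.items():
              grid[(int(x // comm_range), int(y // comm_range))].append(node)

          links, r2 = set(), comm_range ** 2
          for (cx, cy), members in grid.items():
              for dx, dy in product((-1, 0, 1), repeat=2):      # own cell + 8 neighbors
                  for other in grid.get((cx + dx, cy + dy), []):
                      for node in members:
                          if node < other:                      # avoid duplicate pairs
                              x1, y1 = positions[node]
                              x2, y2 = positions[other]
                              if (x1 - x2) ** 2 + (y1 - y2) ** 2 <= r2:
                                  links.add((node, other))
          return links

      positions = {1: (0.0, 0.0), 2: (5.0, 1.0), 3: (30.0, 30.0), 4: (6.0, 6.0)}
      print(nodes_in_range(positions, comm_range=8.0))          # {(1, 2), (2, 4)} expected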

  18. Full-Duplex Communications in Large-Scale Cellular Networks

    KAUST Repository

    AlAmmouri, Ahmad

    2016-04-01

    In-band full-duplex (FD) communications have been optimistically promoted to improve the spectrum utilization and efficiency. However, the penetration of FD communications to the cellular networks domain is challenging due to the imposed uplink/downlink interference. This thesis presents a tractable framework, based on stochastic geometry, to study FD communications in multi-tier cellular networks. Particularly, we assess the FD communications effect on the network performance and quantify the associated gains. The study proves the vulnerability of the uplink to the downlink interference and shows that the improved FD rate gains harvested in the downlink (up to 97%) comes at the expense of a significant degradation in the uplink rate (up to 94%). Therefore, we propose a novel fine-grained duplexing scheme, denoted as α-duplex scheme, which allows a partial overlap between the uplink and the downlink frequency bands. We derive the required conditions to harvest rate gains from the α-duplex scheme and show its superiority to both the FD and half-duplex (HD) schemes. In particular, we show that the α-duplex scheme provides a simultaneous improvement of 28% for the downlink rate and 56% for the uplink rate. We also show that the amount of the overlap can be optimized based on the network design objective. Moreover, backward compatibility is an essential ingredient for the success of new technologies. In the context of in-band FD communication, FD base stations (BSs) should support HD users' equipment (UEs) without sacrificing the foreseen FD gains. The results show that FD-UEs are not necessarily required to harvest rate gains from FD-BSs. In particular, the results show that adding FD-UEs to FD-BSs offers a maximum of 5% rate gain over FD-BSs and HD-UEs case, which is a marginal gain compared to the burden required to implement FD transceivers at the UEs' side. To this end, we shed light on practical scenarios where HD-UEs operation with FD-BSs outperforms the

  19. GPP Webinar: Solar Utilization in Higher Education Networking & Information Sharing Group: Financing Issues Discussion

    Science.gov (United States)

    This presentation from a Solar Utilization in Higher Education Networking and Information webinar covers financing and project economics issues related to solar project development in the higher education sector.

  20. An inkjet-printed UWB antenna on paper substrate utilizing a novel fractal matching network

    KAUST Repository

    Cook, Benjamin Stassen; Shamim, Atif

    2012-01-01

    In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate

  1. A Technical Approach on Large Data Distributed Over a Network

    Directory of Open Access Journals (Sweden)

    Suhasini G

    2011-12-01

    Full Text Available Data mining is the nontrivial extraction of implicit, previously unknown and potentially useful information from data. For a database with a number of records and a set of classes such that each record belongs to one of the given classes, the problem of classification is to decide the class to which a given record belongs. The classification problem is also to generate a model for each class from a given data set. We make use of supervised classification, in which we have a training dataset of records and, for each record, the class to which it belongs is known. There are many approaches to supervised classification. Decision trees are attractive in a data mining environment as they represent rules. Rules can readily be expressed in natural language and can even be mapped to database access languages. Nowadays classification based on decision trees is one of the important problems in data mining, with applications in many areas. Database systems have become highly distributed, and many paradigms are in use. We consider the problem of inducing decision trees in a large distributed network of highly distributed databases. Classification based on decision trees is motivated by the existence of distributed databases in healthcare, bioinformatics and human-computer interaction, and by the view that these databases will soon contain large amounts of data characterized by high dimensionality. Current decision tree algorithms would require high communication bandwidth and memory, and they become less efficient and less scalable when executed on such large volumes of data. Approaches are therefore being developed to improve scalability and to analyse data distributed over a network. [keywords: data mining, decision tree, decision tree induction, distributed data, classification]
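
    Since the record centers on decision-tree classification over record sets, a minimal scikit-learn example of inducing a tree and reading back its rules may be useful. It runs on a bundled toy dataset on a single machine; it does not address the distributed-induction problem the record raises.

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier, export_text

      X, y = load_iris(return_X_y=True)
      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
      print("test accuracy:", tree.score(X_test, y_test))
      # The induced tree can be rendered as human-readable rules.
      print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))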

  2. Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xing Li

    2015-07-01

    Full Text Available With the rapid development of wireless communication technology, sensor technology, information acquisition and processing technology, sensor networks will finally have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation represents an important method to remove redundancy as well as unnecessary data transmission and hence cut down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of the sensitive information such as confidentiality and integrity should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), this scheme can also verify data integrity during data aggregation and forwarding processes so that false data can be detected as early as possible. Although FHE increases the computation overhead due to its large public key size, simulation results show that it is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme.

  3. Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks.

    Science.gov (United States)

    Li, Xing; Chen, Dexin; Li, Chunyan; Wang, Liangmin

    2015-07-03

    With the rapid development of wireless communication technology, sensor technology, information acquisition and processing technology, sensor networks will finally have a deep influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation represents an important method to remove redundancy as well as unnecessary data transmission and hence cut down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of the sensitive information such as confidentiality and integrity should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), this scheme can also verify data integrity during data aggregation and forwarding processes so that false data can be detected as early as possible. Although FHE increases the computation overhead due to its large public key size, simulation results show that it is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme.
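    The core idea of aggregating over ciphertexts can be illustrated with a toy additively homomorphic key-stream scheme; this is only a sketch of the principle, not the FHE-plus-MAC construction of FESA, and the modulus, readings and key handling are assumptions made for the example.

    ```python
    import random

    M = 2**32  # modulus large enough for the aggregate sum (assumption)

    def encrypt(m, key):
        """Additively homomorphic encryption: ciphertexts can be summed."""
        return (m + key) % M

    def aggregate(ciphertexts):
        """In-network aggregation works directly on ciphertexts."""
        return sum(ciphertexts) % M

    def decrypt_sum(agg, keys):
        """The sink removes the sum of all node keys to recover the plaintext sum."""
        return (agg - sum(keys)) % M

    readings = [21, 23, 19, 22]                       # sensor plaintexts
    keys = [random.randrange(M) for _ in readings]    # per-node keys shared with the sink
    agg = aggregate(encrypt(m, k) for m, k in zip(readings, keys))
    assert decrypt_sum(agg, keys) == sum(readings)
    ```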

  4. Streaming-aware channel utilization improvement for wireless home networks

    NARCIS (Netherlands)

    Aslam, W.; Lukkien, J.J.

    2012-01-01

    A wireless network of consumer electronic (CE) devices in a modern home, is typically running streaming services with heterogeneous bandwidth demands. Satisfying these demands offers the challenge of mapping them efficiently onto scarce wireless channel bandwidth. This mapping is supported by the

  5. Capacity utilization in resilient wavelength-routed optical networks using link restoration

    DEFF Research Database (Denmark)

    Limal, Emmanuel; Danielsen, Søren Lykke; Stubkjær, Kristian

    1998-01-01

    The construction of resilient wavelength-routed optical networks has attracted much interest. Many network topologies, path and wavelength assignment strategies have been proposed. The assessment of network strategies is very complex and comparison is difficult. Here, we take a novel analytical approach in estimating the maximum capacity utilization that is possible in wavelength-division multiplexing (WDM) networks that are resilient against single link failures. The results apply to general network topologies and can therefore be used to evaluate the performance of more specific wavelength...

  6. Analysis of Utilization of Fecal Resources in Large-scale Livestock and Poultry Breeding in China

    Directory of Open Access Journals (Sweden)

    XUAN Meng

    2018-02-01

    Full Text Available The purpose of this paper is to develop a systematic investigation of the serious problems of livestock and poultry breeding in China and the technical demand for promoting the utilization of manure. Based on the status quo of large-scale livestock and poultry farming in typical areas of China, the modes and proportions of manure resource utilization were surveyed and analysed. The statistics cover the country-identified large-scale farms for which the total reduction of pollutants was in accordance with the "12th Five-Year Plan" standards. The results showed that there were differences in the modes of resource utilization of livestock and poultry manure at different scales and for different types: (1) hogs, dairy cattle and beef cattle together accounted for more than 75% of the agricultural manure storage; (2) laying hens and broiler chickens accounted for about 65% of the total production of organic manure derived from feces. The major modes of resource utilization of dung and urine were related to the natural characteristics, agricultural production methods, farming scale and economic development level of the area. It was concluded that unreasonable planning, lack of cleansing during breeding and false selection of manure utilization modes are the major problems in China's large-scale livestock and poultry fecal resource utilization.

  7. Natural language acquisition in large scale neural semantic networks

    Science.gov (United States)

    Ealey, Douglas

    This thesis puts forward the view that a purely signal-based approach to natural language processing is both plausible and desirable. By questioning the veracity of symbolic representations of meaning, it argues for a unified, non-symbolic model of knowledge representation that is both biologically plausible and, potentially, highly efficient. Processes to generate a grounded, neural form of this model, dubbed the semantic filter, are discussed. The combined effects of local neural organisation, coincident with perceptual maturation, are used to hypothesise its nature. This theoretical model is then validated in light of a number of fundamental neurological constraints and milestones. The mechanisms of semantic and episodic development that the model predicts are then used to explain linguistic properties, such as propositions and verbs, syntax and scripting. To mimic the growth of locally densely connected structures upon an unbounded neural substrate, a system is developed that can grow arbitrarily large, data-dependent structures composed of individual self-organising neural networks. The maturational nature of the data used results in a structure in which the perception of concepts is refined by the networks, but demarcated by subsequent structure. As a consequence, the overall structure shows significant memory and computational benefits, as predicted by the cognitive and neural models. Furthermore, the localised nature of the neural architecture also avoids the increasing error sensitivity and redundancy of traditional systems as the training domain grows. The semantic and episodic filters have been demonstrated to perform as well, or better, than more specialist networks, whilst using significantly larger vocabularies, more complex sentence forms and more natural corpora.

  8. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Science.gov (United States)

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data have become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.
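    To make the recurrent part of such a predictor concrete, the sketch below runs a tiny vanilla RNN forward pass over link-speed observations and flags links predicted to be congested. It is a structural illustration only, with untrained random weights and invented data and thresholds; it is not the paper's RBM-plus-RNN model or its GPU training procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical input: average speed on 4 road links over 12 five-minute
    # intervals, normalised to [0, 1]; values near 0 indicate congestion.
    speeds = rng.uniform(0.2, 1.0, size=(12, 4))

    n_links, n_hidden = speeds.shape[1], 16
    Wxh = rng.normal(0, 0.1, (n_hidden, n_links))   # input-to-hidden weights
    Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # recurrent weights
    Why = rng.normal(0, 0.1, (n_links, n_hidden))   # hidden-to-output weights

    h = np.zeros(n_hidden)
    for x in speeds:                                # unroll over observed time steps
        h = np.tanh(Wxh @ x + Whh @ h)

    next_speeds = 1 / (1 + np.exp(-(Why @ h)))      # predicted next-interval speeds
    congested = next_speeds < 0.4                   # hypothetical congestion threshold
    print(next_speeds.round(2), congested)
    ```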

  9. Large-scale transportation network congestion evolution prediction using deep learning theory.

    Directory of Open Access Journals (Sweden)

    Xiaolei Ma

    Full Text Available Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data have become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.

  10. Social Networks and High Healthcare Utilization: Building Resilience Through Analysis

    Science.gov (United States)

    2016-09-01

    attributes, such as gender and race. Focusing on individual members’ attributes in a social network seeks to identify common nodes and links, but may fail...response varied by individual paramedic assessment. Table 6 shows that requests for EMS services vary by complaint, that the dominant gender ...of disease and increased life expectancy across much of the globe; however, “noninfectious disease” and “social inequities of health” remain

  11. Novel methods of utilizing Jitter for Network Congestion Control

    Directory of Open Access Journals (Sweden)

    Ivan

    2013-12-01

    Full Text Available This paper proposes a novel paradigm for network congestion control. Instead of perpetual conflict as in TCP, a proof-of-concept, first-of-its-kind protocol is presented that enables inter-flow communication without infrastructure support through a side channel constructed on generic FIFO queue behaviour. This enables independent flows passing through the same bottleneck queue to communicate and achieve fair capacity sharing and a stable equilibrium state in a rapid fashion.

  12. Just-in-time connectivity for large spiking networks.

    Science.gov (United States)

    Lytton, William W; Omurtag, Ahmet; Neymotin, Samuel A; Hines, Michael L

    2008-11-01

    The scale of large neuronal network simulations is memory limited due to the need to store connectivity information: connectivity storage grows as the square of neuron number up to anatomically relevant limits. Using the NEURON simulator as a discrete-event simulator (no integration), we explored the consequences of avoiding the space costs of connectivity through regenerating connectivity parameters when needed: just in time after a presynaptic cell fires. We explored various strategies for automated generation of one or more of the basic static connectivity parameters: delays, postsynaptic cell identities, and weights, as well as run-time connectivity state: the event queue. Comparison of the JitCon implementation to NEURON's standard NetCon connectivity method showed substantial space savings, with associated run-time penalty. Although JitCon saved space by eliminating connectivity parameters, larger simulations were still memory limited due to growth of the synaptic event queue. We therefore designed a JitEvent algorithm that added items to the queue only when required: instead of alerting multiple postsynaptic cells, a spiking presynaptic cell posted a callback event at the shortest synaptic delay time. At the time of the callback, this same presynaptic cell directly notified the first postsynaptic cell and generated another self-callback for the next delay time. The JitEvent implementation yielded substantial additional time and space savings. We conclude that just-in-time strategies are necessary for very large network simulations but that a variety of alternative strategies should be considered whose optimality will depend on the characteristics of the simulation to be run.
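    The space saving described above comes from recomputing, rather than storing, each cell's outgoing connections. A minimal sketch of that idea (not the NEURON JitCon/JitEvent implementation itself) derives targets, delays and weights from a deterministic hash of the presynaptic cell id, so they can be regenerated identically every time the cell fires; the network size, fan-out and parameter ranges below are hypothetical.

    ```python
    import hashlib
    import random

    N_CELLS = 100_000          # hypothetical network size
    FAN_OUT = 100              # outgoing synapses per cell

    def outgoing_synapses(pre_id):
        """Regenerate this cell's connectivity on demand instead of storing it.

        Seeding a PRNG with a hash of the presynaptic id makes the result
        reproducible, so nothing needs to be kept in memory between spikes.
        """
        seed = int.from_bytes(hashlib.sha256(str(pre_id).encode()).digest()[:8], "big")
        rng = random.Random(seed)
        for _ in range(FAN_OUT):
            post_id = rng.randrange(N_CELLS)       # postsynaptic target
            delay = rng.uniform(0.5, 5.0)          # synaptic delay in ms
            weight = rng.gauss(0.001, 0.0002)      # synaptic weight
            yield post_id, delay, weight

    # When cell 42 fires, its synapses are recreated just in time:
    events = list(outgoing_synapses(42))
    assert events == list(outgoing_synapses(42))   # identical on every regeneration
    ```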

  13. Social network utilization (Facebook) & e-Professionalism among medical students.

    Science.gov (United States)

    Jawaid, Masood; Khan, Muhammad Hassaan; Bhutto, Shahzadi Nisar

    2015-01-01

    To find out the frequency and contents of online social networking (Facebook) among medical students of Dow University of Health Sciences. The sample of the study comprised final year students of two medical colleges of Dow University of Health Sciences, Karachi. A systematic search for the Facebook profiles of the students was carried out with a new Facebook account. In the initial phase of the search, it was determined whether each student had a Facebook account, and the status of the account as "private", "intermediate" or "public" was also recorded. In the second phase of the study, objective information including gender, education, personal views, likes, tagged pictures etc. was recorded for the publicly available accounts. An in-depth qualitative content analysis was conducted of the public profiles of ten medical students selected with a random number generator. Social networking with Facebook is common among medical students, with 66.9% having an account out of a total of 535 students. One fifth of profiles (18.9%) were publicly open, 36.6% were private and 56.9% were identified as having an intermediate privacy setting, with customized settings for the profile information. In-depth analysis of some public profiles showed that potentially unprofessional material, mostly related to violence and politics, was posted by medical students. The usage of the social network (Facebook) is very common among students of the university. Some unprofessional posts were also found on students' profiles, mostly related to violence and politics.

  14. A Methodology for a Sustainable CO2 Capture and Utilization Network

    DEFF Research Database (Denmark)

    Frauzem, Rebecca; Fjellerup, Kasper; Gani, Rafiqul

    2015-01-01

    hydrogenation highlights the application. This case study illustrates the utility of the utilization network and elements of the methodology being developed. In addition, the conversion process is linked with carbon capture to evaluate the overall sustainability. Finally, the production of the other raw...

  15. A Utility-Based Downlink Radio Resource Allocation for Multiservice Cellular DS-CDMA Networks

    Directory of Open Access Journals (Sweden)

    Mahdi Shabany

    2007-03-01

    Full Text Available A novel framework is proposed to model the downlink resource allocation problem in multiservice direct-sequence code division multiple-access (DS-CDMA) cellular networks. This framework is based on a defined utility function, which leads to utilizing the network resources in a more efficient way. This utility function quantifies the degree of utilization of resources. As a matter of fact, using the defined utility function, users' channel fluctuations and their delay constraints along with the load conditions of all BSs are all taken into consideration. Unlike previous works, we solve the problem with the general objective of maximizing the total network utility instead of maximizing the achieved utility of each base station (BS). It is shown that this problem is equivalent to finding the optimum BS assignment throughout the network, which is mapped to a multidimensional multiple-choice knapsack problem (MMKP). Since MMKP is NP-hard, a polynomial-time suboptimal algorithm is then proposed to develop an efficient base-station assignment. Simulation results indicate a significant performance improvement in terms of achieved utility and packet drop ratio.
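    In the spirit of the polynomial-time suboptimal assignment mentioned above (though not the paper's actual algorithm), the toy sketch below greedily assigns users to base stations to maximise total utility subject to per-BS capacity; all utilities, costs and capacities are invented numbers standing in for channel- and delay-derived quantities.

    ```python
    # Hypothetical per-user utility of being served by each base station and
    # the resource cost each user would consume at that BS.
    utility = {("u1", "A"): 0.9, ("u1", "B"): 0.6,
               ("u2", "A"): 0.8, ("u2", "B"): 0.7,
               ("u3", "A"): 0.4, ("u3", "B"): 0.5}
    cost = {("u1", "A"): 3, ("u1", "B"): 2,
            ("u2", "A"): 3, ("u2", "B"): 2,
            ("u3", "A"): 2, ("u3", "B"): 2}
    capacity = {"A": 5, "B": 4}
    users = ["u1", "u2", "u3"]

    # Greedy heuristic: handle users in order of decreasing best-case utility,
    # assigning each to the feasible BS that adds the most network utility.
    load = {bs: 0 for bs in capacity}
    assignment = {}
    for user in sorted(users, key=lambda u: -max(utility[u, b] for b in capacity)):
        feasible = [b for b in capacity if load[b] + cost[user, b] <= capacity[b]]
        if feasible:
            best = max(feasible, key=lambda b: utility[user, b])
            assignment[user] = best
            load[best] += cost[user, best]

    print(assignment, sum(utility[u, b] for u, b in assignment.items()))
    ```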

  16. Stock price change rate prediction by utilizing social network activities.

    Science.gov (United States)

    Deng, Shangkun; Mitsubuchi, Takashi; Sakurai, Akito

    2014-01-01

    Predicting stock price change rates for providing valuable information to investors is a challenging task. Individual participants may express their opinions in social network service (SNS) before or after their transactions in the market; we hypothesize that stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of just stock market activities. The hypothesis is tested by accuracy of predictions as well as performance of simulated trading because success or failure of prediction is better measured by profits or losses the investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models that are expressed in a multiple kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and values of three well-known overbought and oversold technical indicators. Accumulated return and Sharpe ratio were used to test the goodness of performance of the simulated trading. Experimental results show that our proposed model performed better than other models including ones using state of the art techniques.
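    The multiple-kernel idea of combining features from different sources can be illustrated with a small kernel ridge regression on a weighted sum of two kernels. This is a sketch under simplifying assumptions: the kernel weights are fixed rather than learned by MKL, the GA trading-rule stage is omitted, and the features and targets are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical features from two sources: technical indicators and daily
    # SNS activity counts, plus next-day price change rates as targets.
    X_tech, X_sns = rng.normal(size=(60, 5)), rng.poisson(3.0, size=(60, 2)).astype(float)
    y = rng.normal(0, 0.01, size=60)

    def rbf(A, B, gamma=0.1):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)

    def linear(A, B):
        return A @ B.T

    # Fixed kernel weights stand in for the MKL optimisation step.
    beta = {"tech": 0.7, "sns": 0.3}
    K = beta["tech"] * rbf(X_tech, X_tech) + beta["sns"] * linear(X_sns, X_sns)

    # Kernel ridge regression on the combined kernel.
    alpha = np.linalg.solve(K + 1e-2 * np.eye(len(y)), y)
    pred = K @ alpha
    print(float(np.corrcoef(pred, y)[0, 1]))   # in-sample fit on the toy data
    ```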

  17. Stock Price Change Rate Prediction by Utilizing Social Network Activities

    Directory of Open Access Journals (Sweden)

    Shangkun Deng

    2014-01-01

    Full Text Available Predicting stock price change rates for providing valuable information to investors is a challenging task. Individual participants may express their opinions in social network service (SNS) before or after their transactions in the market; we hypothesize that stock price change rate is better predicted by a function of social network service activities and technical indicators than by a function of just stock market activities. The hypothesis is tested by accuracy of predictions as well as performance of simulated trading because success or failure of prediction is better measured by profits or losses the investors gain or suffer. In this paper, we propose a hybrid model that combines multiple kernel learning (MKL) and genetic algorithm (GA). MKL is adopted to optimize the stock price change rate prediction models that are expressed in a multiple kernel linear function of different types of features extracted from different sources. GA is used to optimize the trading rules used in the simulated trading by fusing the return predictions and values of three well-known overbought and oversold technical indicators. Accumulated return and Sharpe ratio were used to test the goodness of performance of the simulated trading. Experimental results show that our proposed model performed better than other models including ones using state of the art techniques.

  18. Utilization of arterial blood gas measurements in a large tertiary care hospital.

    Science.gov (United States)

    Melanson, Stacy E F; Szymanski, Trevor; Rogers, Selwyn O; Jarolim, Petr; Frendl, Gyorgy; Rawn, James D; Cooper, Zara; Ferrigno, Massimo

    2007-04-01

    We describe the patterns of utilization of arterial blood gas (ABG) tests in a large tertiary care hospital. To our knowledge, no hospital-wide analysis of ABG test utilization has been published. We analyzed 491 ABG tests performed during 24 two-hour intervals, representative of different staff shifts throughout the 7-day week. The clinician ordering each ABG test was asked to fill out a utilization survey. The most common reasons for requesting an ABG test were changes in ventilator settings (27.6%), respiratory events (26.4%), and routine (25.7%). Of the results, approximately 79% were expected, and a change in patient management (eg, a change in ventilator settings) occurred in 42% of cases. Many ABG tests were ordered as part of a clinical routine or to monitor parameters that can be assessed clinically or through less invasive testing. Implementation of practice guidelines may prove useful in controlling test utilization and in decreasing costs.

  19. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunyang Lei

    2016-07-01

    Full Text Available Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly-dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utility efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2 % in large scale WSNs and in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.

  20. A Low Collision and High Throughput Data Collection Mechanism for Large-Scale Super Dense Wireless Sensor Networks.

    Science.gov (United States)

    Lei, Chunyang; Bie, Hongxia; Fang, Gengfa; Gaura, Elena; Brusey, James; Zhang, Xuekun; Dutkiewicz, Eryk

    2016-07-18

    Super dense wireless sensor networks (WSNs) have become popular with the development of Internet of Things (IoT), Machine-to-Machine (M2M) communications and Vehicular-to-Vehicular (V2V) networks. While highly-dense wireless networks provide efficient and sustainable solutions to collect precise environmental information, a new channel access scheme is needed to solve the channel collision problem caused by the large number of competing nodes accessing the channel simultaneously. In this paper, we propose a space-time random access method based on a directional data transmission strategy, by which collisions in the wireless channel are significantly decreased and channel utility efficiency is greatly enhanced. Simulation results show that our proposed method can decrease the packet loss rate to less than 2 % in large scale WSNs and in comparison with other channel access schemes for WSNs, the average network throughput can be doubled.

  1. Soft network materials with isotropic negative Poisson's ratios over large strains.

    Science.gov (United States)

    Liu, Jianxing; Zhang, Yihui

    2018-01-31

    Auxetic materials with negative Poisson's ratios have important applications across a broad range of engineering areas, such as biomedical devices, aerospace engineering and automotive engineering. A variety of design strategies have been developed to achieve artificial auxetic materials with controllable responses in the Poisson's ratio. The development of designs that can offer isotropic negative Poisson's ratios over large strains can open up new opportunities in emerging biomedical applications, which, however, remains a challenge. Here, we introduce deterministic routes to soft architected materials that can be tailored precisely to yield the values of Poisson's ratio in the range from -1 to 1, in an isotropic manner, with a tunable strain range from 0% to ∼90%. The designs rely on a network construction in a periodic lattice topology, which incorporates zigzag microstructures as building blocks to connect lattice nodes. Combined experimental and theoretical studies on broad classes of network topologies illustrate the wide-ranging utility of these concepts. Quantitative mechanics modeling under both infinitesimal and finite deformations allows the development of a rigorous design algorithm that determines the necessary network geometries to yield target Poisson ratios over desired strain ranges. Demonstrative examples in artificial skin with both the negative Poisson's ratio and the nonlinear stress-strain curve precisely matching those of the cat's skin and in unusual cylindrical structures with engineered Poisson effect and shape memory effect suggest potential applications of these network materials.

  2. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional as these are reflecting the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources to a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
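    For orientation, eta squared is the fraction of variance in the target explained by a grouping variable, SS_between / SS_total. The sketch below computes a one-way variant on invented data; the paper's score comes from a two-way analysis of variance and uses its own discretisation of regulator activity, so this is an illustration of the statistic, not of COGERE.

    ```python
    import numpy as np

    def eta_squared(x, y):
        """Correlation ratio eta^2 of y explained by the categorical variable x.

        eta^2 = SS_between / SS_total: 0 means no dependency, 1 means y is
        fully determined by the groups of x.
        """
        y = np.asarray(y, dtype=float)
        grand_mean = y.mean()
        ss_total = ((y - grand_mean) ** 2).sum()
        ss_between = sum(
            len(g) * (g.mean() - grand_mean) ** 2
            for g in (y[np.asarray(x) == level] for level in set(x))
        )
        return ss_between / ss_total

    # Hypothetical discretised regulator activity vs. target gene expression.
    regulator = ["low", "low", "mid", "mid", "high", "high"]
    target = [1.1, 0.9, 2.0, 2.2, 3.9, 4.1]
    print(round(eta_squared(regulator, target), 3))
    ```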

  3. Full-Duplex Communications in Large-Scale Cellular Networks

    KAUST Repository

    Alammouri, Ahmad

    2016-01-01

    /downlink interference. This thesis presents a tractable framework, based on stochastic geometry, to study FD communications in multi-tier cellular networks. Particularly, we assess the FD communications effect on the network performance and quantify the associated gains

  4. ASH : Tackling node mobility in large-scale networks

    NARCIS (Netherlands)

    Pruteanu, A.; Dulman, S.

    2012-01-01

    With the increased adoption of technologies likewireless sensor networks by real-world applications, dynamic network topologies are becoming the rule rather than the exception. Node mobility, however, introduces a range of problems (communication interference, path uncertainty, low quality of

  5. Utilizing HPC Network Technologies in High Energy Physics Experiments

    CERN Document Server

    AUTHOR|(CDS)2088631; The ATLAS collaboration

    2017-01-01

    Because of their performance characteristics, high-performance fabrics like InfiniBand or OmniPath are interesting technologies for many local area network applications, including data acquisition systems for high-energy physics experiments like the ATLAS experiment at CERN. This paper analyzes existing APIs for high-performance fabrics and evaluates their suitability for data acquisition systems in terms of performance and domain applicability. The study finds that existing software APIs for high-performance interconnects are focused on applications in high-performance computing with specific workloads and are not compatible with the requirements of data acquisition systems. To evaluate the use of high-performance interconnects in data acquisition systems a custom library, NetIO, is presented and compared against existing technologies. NetIO has a message queue-like interface which matches the ATLAS use case better than traditional HPC APIs like MPI. The architecture of NetIO is based on an interchangeable bac...

  6. On the rejection-based algorithm for simulation and analysis of large-scale reaction networks

    Energy Technology Data Exchange (ETDEWEB)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)

    2015-06-28

    Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
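    The rejection idea can be shown in isolation: candidates are drawn cheaply from propensity upper bounds and accepted with probability equal to the ratio of exact to bounded propensity, so exact propensities are evaluated lazily. This sketch covers only that selection step, not the full RSSA (which also maintains fluctuation intervals for the state and handles simulation time exactly); the two-reaction system and its bounds are hypothetical.

    ```python
    import random

    def propensity(state, reaction):
        """Exact mass-action propensity; evaluated lazily in rejection schemes."""
        rate, reactants = reaction
        a = rate
        for species, order in reactants.items():
            for k in range(order):
                a *= max(state[species] - k, 0)
        return a

    def select_reaction(state, reactions, upper):
        """Pick the next reaction by rejection against propensity upper bounds."""
        total_upper = sum(upper)
        while True:
            # candidate chosen proportionally to its upper bound (cheap)
            r = random.uniform(0, total_upper)
            j, acc = 0, upper[0]
            while acc < r and j < len(upper) - 1:
                j += 1
                acc += upper[j]
            # accept with probability a_j / upper_j, computing a_j only now
            if random.random() * upper[j] <= propensity(state, reactions[j]):
                return j

    # Hypothetical two-reaction system: A + B -> C and C -> A + B.
    state = {"A": 100, "B": 80, "C": 5}
    reactions = [(0.001, {"A": 1, "B": 1}), (0.05, {"C": 1})]
    upper = [0.001 * 110 * 90, 0.05 * 20]   # bounds valid while counts stay in range
    print(select_reaction(state, reactions, upper))
    ```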

  7. Large deviations for Gaussian queues modelling communication networks

    CERN Document Server

    Mandjes, Michel

    2007-01-01

    Michel Mandjes, Centre for Mathematics and Computer Science (CWI) Amsterdam, The Netherlands, and Professor, Faculty of Engineering, University of Twente. At CWI Mandjes is a senior researcher and Director of the Advanced Communications Network group. He has published some 60 papers on queuing theory, networks, scheduling, and pricing of networks.

  8. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating a network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, non-stability of algorithms on real (artificial) data, that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  9. Enabling parallel simulation of large-scale HPC network systems

    International Nuclear Information System (INIS)

    Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip

    2016-01-01

    Here, with the increasing complexity of today’s high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems—in particular, networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today’s IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at a flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today’s high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations

  10. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-01

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.
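    The classic result this abstract alludes to is that, for uncorrelated networks in the heterogeneous mean-field approximation, the epidemic threshold scales as the ratio of the first to the second moment of the degree distribution, so broad degree distributions push it towards zero. The sketch below computes that estimate for a narrow and a heavy-tailed degree sequence; it does not reproduce the paper's adaptive-network model with heterogeneous susceptibility, and the distributions are illustrative.

    ```python
    import numpy as np

    def epidemic_threshold(degrees):
        """Heterogeneous mean-field estimate of the epidemic threshold, <k>/<k^2>."""
        k = np.asarray(degrees, dtype=float)
        return k.mean() / (k ** 2).mean()

    rng = np.random.default_rng(0)
    homogeneous = rng.poisson(10, size=10_000) + 1   # narrow degree distribution
    heterogeneous = rng.zipf(2.5, size=10_000)       # heavy-tailed degree distribution
    print(epidemic_threshold(homogeneous), epidemic_threshold(heterogeneous))
    ```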

  11. Large epidemic thresholds emerge in heterogeneous networks of heterogeneous nodes.

    Science.gov (United States)

    Yang, Hui; Tang, Ming; Gross, Thilo

    2015-08-21

    One of the famous results of network science states that networks with heterogeneous connectivity are more susceptible to epidemic spreading than their more homogeneous counterparts. In particular, in networks of identical nodes it has been shown that network heterogeneity, i.e. a broad degree distribution, can lower the epidemic threshold at which epidemics can invade the system. Network heterogeneity can thus allow diseases with lower transmission probabilities to persist and spread. However, it has been pointed out that networks in which the properties of nodes are intrinsically heterogeneous can be very resilient to disease spreading. Heterogeneity in structure can enhance or diminish the resilience of networks with heterogeneous nodes, depending on the correlations between the topological and intrinsic properties. Here, we consider a plausible scenario where people have intrinsic differences in susceptibility and adapt their social network structure to the presence of the disease. We show that the resilience of networks with heterogeneous connectivity can surpass those of networks with homogeneous connectivity. For epidemiology, this implies that network heterogeneity should not be studied in isolation, it is instead the heterogeneity of infection risk that determines the likelihood of outbreaks.

  12. Electrolarynx Voice Recognition Utilizing Pulse Coupled Neural Network

    Directory of Open Access Journals (Sweden)

    Fatchul Arifin

    2010-08-01

    Full Text Available Laryngectomy patients have no ability to speak normally because their vocal cords have been removed. The easiest option for the patient to speak again is by using electrolarynx speech. This device is placed on the lower chin, and vibration of the neck while speaking is used to produce sound. Meanwhile, voice recognition technology has been growing very rapidly, and it is expected that it can also be used by laryngectomy patients who use an electrolarynx. This paper describes a system for electrolarynx speech recognition. The two main parts of the system are feature extraction and pattern recognition. A Pulse Coupled Neural Network (PCNN) is used to extract the features and characteristics of electrolarynx speech; varying β (one of the PCNN parameters) was also investigated. A multilayer perceptron is used to recognize the sound patterns. Two kinds of recognition are conducted in this paper: speech recognition and speaker recognition. Speech recognition recognizes specific speech from any person, whereas speaker recognition recognizes specific speech from a specific person. The system ran well: electrolarynx speech recognition was tested by recognizing "A" and "not A" voices, with a validation rate of 94.4%, while electrolarynx speaker recognition was tested by recognizing the word "saya" spoken by different speakers, with a validation rate of 92.2%. The best β parameter of the PCNN for electrolarynx recognition is 3.

  13. Large-scale utilization of wind power in China: Obstacles of conflict between market and planning

    International Nuclear Information System (INIS)

    Zhao Xiaoli; Wang Feng; Wang Mei

    2012-01-01

    The traditional strict planning system that regulates China's power market dominates power industry operations. However, a series of market-oriented reforms since 1997 call for more decentralized decision-making by individual market participants. Moreover, with the rapid growth of wind power in China, the strict planning system has become one of the significant factors that has curtailed the generation of wind power, which contradicts with the original purpose of using the government's strong control abilities to promote wind power development. In this paper, we first present the reasons why market mechanisms are important for large-scale utilization of wind power by using a case analysis of the Northeast Grid, and then we illustrate the impact of conflicts between strict planning and market mechanisms on large-scale wind power utilization. Last, we explore how to promote coordination between markets and planning to realize large-scale wind power utilization in China. We argue that important measures include implementing flexible power pricing mechanisms instead of the current fixed pricing approach, formulating a more reasonable mechanism for distributing benefits and costs, and designing an appropriate market structure for large-scale wind power utilization to promote market liquidity and to send clear market equilibrium signals. - Highlights: ► We present the reasons why market is important for utilization of wind power. ► We discuss the current situation of the conflict between planning and market. ► We study the impact of conflict between planning and market on wind power output. ► We argue how to promote coordination between market and planning.

  14. Modeling a Large Data Acquisition Network in a Simulation Framework

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer

    2015-01-01

    The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several 10 GB/s. It is a distributed software system executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...

  15. Utilization of extended bayesian networks in decision making under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Van Eeckhout, Edward M [Los Alamos National Laboratory; Leishman, Deborah A [Los Alamos National Laboratory; Gibson, William L [Los Alamos National Laboratory

    2009-01-01

    A Bayesian network tool (called IKE, for Integrated Knowledge Engine) has been developed to assess the probability of undesirable events. The tool allows indications and observables from sensors and/or intelligence to feed directly into hypotheses of interest, thus allowing one to quantify the probability and uncertainty of these events resulting from very disparate evidence. For example, the probability that a facility is processing nuclear fuel or assembling a weapon can be assessed by examining the processes required, establishing the observables that should be present, then assembling information from intelligence, sensors and other information sources related to the observables. IKE also has the capability to determine tasking plans, that is, prioritize which observable should be collected next to most quickly ascertain the 'true' state and drive the probability toward 'zero' or 'one.' This optimization capability is called 'evidence marshaling.' One example to be discussed is a denied facility monitoring situation; there is concern that certain processes are being executed at the site (due to some intelligence or other data). We will show how additional pieces of evidence establish, with some degree of certainty, the likelihood of these processes as each piece is obtained. This example shows how both intelligence and sensor data can be incorporated into the analysis. A second example involves real-time perimeter security. For this demonstration we used seismic, acoustic, and optical sensors linked back to IKE. We show how these sensors identified and assessed the likelihood of 'intruder' versus friendly vehicles.
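    To make the updating and evidence-marshaling ideas concrete, the toy sketch below performs a naive-Bayes style posterior update for one binary hypothesis and picks the next observable by lowest expected posterior entropy. IKE's actual network structure and numbers are not described in the record, so all hypothesis names, observables and likelihoods here are hypothetical.

    ```python
    from math import log2

    # Hypothetical model: one binary hypothesis H and conditionally
    # independent binary observables with known likelihoods.
    prior = 0.2
    likelihood = {                      # P(observable | H), P(observable | not H)
        "special_shipments": (0.8, 0.1),
        "heat_signature":    (0.7, 0.3),
        "security_upgrade":  (0.6, 0.4),
    }

    def posterior(p_h, evidence):
        """Naive-Bayes style update of P(H) given observed evidence values."""
        odds = p_h / (1 - p_h)
        for obs, present in evidence.items():
            p1, p0 = likelihood[obs]
            odds *= (p1 / p0) if present else ((1 - p1) / (1 - p0))
        return odds / (1 + odds)

    def entropy(p):
        return 0.0 if p in (0.0, 1.0) else -(p * log2(p) + (1 - p) * log2(1 - p))

    def next_observable(p_h, remaining):
        """Evidence-marshaling heuristic: collect the observable whose expected
        posterior entropy is lowest, i.e. the one expected to be most decisive."""
        def expected_entropy(obs):
            p1, p0 = likelihood[obs]
            p_obs = p1 * p_h + p0 * (1 - p_h)
            return (p_obs * entropy(posterior(p_h, {obs: True}))
                    + (1 - p_obs) * entropy(posterior(p_h, {obs: False})))
        return min(remaining, key=expected_entropy)

    print(posterior(prior, {"special_shipments": True}))
    print(next_observable(prior, ["heat_signature", "security_upgrade"]))
    ```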

  16. Customer-oriented risk assessment in network utilities

    International Nuclear Information System (INIS)

    Gómez Fernández, Juan F.; Márquez, Adolfo Crespo; López-Campos, Mónica A.

    2016-01-01

    For companies that distribute services such as telecommunications, water, energy, gas, etc., quality perceived by the customers has a strong impact on the fulfillment of financial goals, positively increasing the demand and negatively increasing the risk of customer churn (loss of customers). Failures by these companies may cause customer affection in a massive way, augmenting the intention to leave the company. Therefore, maintenance performance and specifically service reliability has a strong influence on financial goals. This paper proposes a methodology to evaluate the contribution of the maintenance department in economic terms, based on service unreliability by network failures. The developed methodology aims to provide an analysis of failures to facilitate decision making about maintenance (preventive/predictive and corrective) costs versus negative impacts in end-customer invoicing based on the probability of losing customers. Survival analysis of recurrent failures with the General Renewal Process distribution is used for this novel purpose with the intention to be applied as a standard procedure to calculate the expected maintenance financial impact, for a given period of time. Also, geographical areas of coverage are distinguished, enabling the comparison of different technical or management alternatives. Two case studies in a telecommunications services company are presented in order to illustrate the applicability of the methodology. - Highlights: • Reliability and reparability impact the rate of abandonment of customers. • Expected reliability and interruptions must be contemplated in services contracts. • Preventive maintenance reduces the risk of abandonment, besides corrective costs. • Analysis of investment in service reliability vs. impact on customer retention. • Reliability of services has a positive impact in business financial situation.

  17. Composition and structure of a large online social network in The Netherlands.

    Directory of Open Access Journals (Sweden)

    Rense Corten

    Full Text Available Limitations in data collection have long been an obstacle in research on friendship networks. Most earlier studies use either a sample of ego-networks, or complete network data on a relatively small group (e.g., a single organization). The rise of online social networking services such as Friendster and Facebook, however, provides researchers with opportunities to study friendship networks on a much larger scale. This study uses complete network data from Hyves, a popular online social networking service in The Netherlands, comprising over eight million members and over 400 million online friendship relations. In the first study of its kind for The Netherlands, I examine the structure of this network in terms of the degree distribution, characteristic path length, clustering, and degree assortativity. Results indicate that this network shares features of other large complex networks, but also deviates in other respects. In addition, a comparison with other online social networks shows that these networks show remarkable similarities.

  18. Composition and structure of a large online social network in The Netherlands.

    Science.gov (United States)

    Corten, Rense

    2012-01-01

    Limitations in data collection have long been an obstacle in research on friendship networks. Most earlier studies use either a sample of ego-networks, or complete network data on a relatively small group (e.g., a single organization). The rise of online social networking services such as Friendster and Facebook, however, provides researchers with opportunities to study friendship networks on a much larger scale. This study uses complete network data from Hyves, a popular online social networking service in The Netherlands, comprising over eight million members and over 400 million online friendship relations. In the first study of its kind for The Netherlands, I examine the structure of this network in terms of the degree distribution, characteristic path length, clustering, and degree assortativity. Results indicate that this network shares features of other large complex networks, but also deviates in other respects. In addition, a comparison with other online social networks shows that these networks show remarkable similarities.

  19. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers for high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  20. Hierarchical and Matrix Structures in a Large Organizational Email Network: Visualization and Modeling Approaches

    OpenAIRE

    Sims, Benjamin H.; Sinitsyn, Nikolai; Eidenbenz, Stephan J.

    2014-01-01

    This paper presents findings from a study of the email network of a large scientific research organization, focusing on methods for visualizing and modeling organizational hierarchies within large, complex network datasets. In the first part of the paper, we find that visualization and interpretation of complex organizational network data is facilitated by integration of network data with information on formal organizational divisions and levels. By aggregating and visualizing email traffic b...

  1. An inkjet-printed UWB antenna on paper substrate utilizing a novel fractal matching network

    KAUST Repository

    Cook, Benjamin Stassen

    2012-07-01

    In this work, the smallest reported inkjet-printed UWB antenna is proposed that utilizes a fractal matching network to increase the performance of a UWB microstrip monopole. The antenna is inkjet-printed on a paper substrate to demonstrate the ability to produce small and low-cost UWB antennas with inkjet-printing technology, which can enable compact, low-cost, and environmentally friendly wireless sensor networks. © 2012 IEEE.

  2. Gross domestic product estimation based on electricity utilization by artificial neural network

    Science.gov (United States)

    Stevanović, Mirjana; Vujičić, Slađana; Gajić, Aleksandar M.

    2018-01-01

    The main goal of the paper was to estimate gross domestic product (GDP) from electricity utilization using an artificial neural network (ANN). Electricity utilization was analyzed by source, including renewable, coal and nuclear sources. The ANN was trained with two training algorithms, namely the extreme learning method and the back-propagation algorithm, in order to produce the best GDP prediction results. According to the results, it can be concluded that the ANN model trained with the extreme learning method can produce an acceptable prediction of GDP based on electricity utilization.
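    The extreme learning approach mentioned above keeps a random, fixed hidden layer and fits only the output weights by least squares. The sketch below shows that structure on synthetic data; the features, target relationship and network size are invented, and this is not the authors' trained model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: yearly electricity use by source (renewable, coal,
    # nuclear), scaled, with GDP as the regression target.
    X = rng.uniform(0, 1, size=(40, 3))
    gdp = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 0.05, 40)

    # Extreme learning machine: random, fixed hidden layer; only the output
    # weights are fitted, in closed form by ordinary least squares.
    n_hidden = 20
    W = rng.normal(size=(3, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, gdp, rcond=None)  # output weights

    pred = H @ beta
    print(float(np.sqrt(np.mean((pred - gdp) ** 2))))   # training RMSE
    ```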

  3. Experimental demonstration of large capacity WSDM optical access network with multicore fibers and advanced modulation formats.

    Science.gov (United States)

    Li, Borui; Feng, Zhenhua; Tang, Ming; Xu, Zhilin; Fu, Songnian; Wu, Qiong; Deng, Lei; Tong, Weijun; Liu, Shuang; Shum, Perry Ping

    2015-05-04

    Towards the next generation optical access network supporting large capacity data transmission to enormous number of users covering a wider area, we proposed a hybrid wavelength-space division multiplexing (WSDM) optical access network architecture utilizing multicore fibers with advanced modulation formats. As a proof of concept, we experimentally demonstrated a WSDM optical access network with duplex transmission using our developed and fabricated multicore (7-core) fibers with 58.7 km distance. As a cost-effective modulation scheme for access network, the optical OFDM-QPSK signal has been intensity modulated on the downstream transmission in the optical line terminal (OLT) and it was directly detected in the optical network unit (ONU) after MCF transmission. 10 wavelengths with 25 GHz channel spacing from an optical comb generator are employed and each wavelength is loaded with 5 Gb/s OFDM-QPSK signal. After amplification, power splitting, and fan-in multiplexer, 10-wavelength downstream signal was injected into six outer layer cores simultaneously and the aggregation downstream capacity reaches 300 Gb/s. -16 dBm sensitivity has been achieved for 3.8 × 10⁻³ bit error ratio (BER) with 7% Forward Error Correction (FEC) limit for all wavelengths in every core. Upstream signal from ONU side has also been generated and the bidirectional transmission in the same core causes negligible performance degradation to the downstream signal. As a universal platform for wired/wireless data access, our proposed architecture provides additional dimension for high speed mobile signal transmission and we hence demonstrated an upstream delivery of 20 Gb/s per wavelength with QPSK modulation formats using the inner core of MCF emulating a mobile backhaul service. The IQ modulated data was coherently detected in the OLT side. -19 dBm sensitivity has been achieved under the FEC limit and more than 18 dB power budget is guaranteed.

  4. Wireless multi-hop networks with stealing : large buffer asymptotics

    NARCIS (Netherlands)

    Guillemin, F.; Knessl, C.; Leeuwaarden, van J.S.H.

    2010-01-01

    Wireless networks equipped with CSMA are scheduled in a fully distributed manner. A disadvantage of such distributed control in multi-hop networks is the hidden node problem that causes the effect of stealing, in which a downstream node steals the channel from an upstream node with probability p.

  5. Network Partitioning Domain Knowledge Multiobjective Application Mapping for Large-Scale Network-on-Chip

    Directory of Open Access Journals (Sweden)

    Yin Zhen Tei

    2014-01-01

    Full Text Available This paper proposes a multiobjective application mapping technique targeted for large-scale network-on-chip (NoC. As the number of intellectual property (IP cores in multiprocessor system-on-chip (MPSoC increases, NoC application mapping to find optimum core-to-topology mapping becomes more challenging. Besides, the conflicting cost and performance trade-off makes multiobjective application mapping techniques even more complex. This paper proposes an application mapping technique that incorporates domain knowledge into genetic algorithm (GA. The initial population of GA is initialized with network partitioning (NP while the crossover operator is guided with knowledge on communication demands. NP reduces the large-scale application mapping complexity and provides GA with a potential mapping search space. The proposed genetic operator is compared with state-of-the-art genetic operators in terms of solution quality. In this work, multiobjective optimization of energy and thermal-balance is considered. Through simulation, knowledge-based initial mapping shows significant improvement in Pareto front compared to random initial mapping that is widely used. The proposed knowledge-based crossover also shows better Pareto front compared to state-of-the-art knowledge-based crossover.
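
    A minimal sketch of the partition-seeded initialisation idea (not the paper's algorithm): a toy task graph is bisected and each half is placed on one half of a 4x4 mesh, so heavily communicating cores start out physically close. networkx's Kernighan-Lin bisection stands in here for the network-partitioning step.

```python
# Partition-seeded initial placement for mesh NoC mapping (illustrative assumptions).
import networkx as nx

tasks = nx.gnm_random_graph(16, 40, seed=1)             # hypothetical core communication graph
left, right = nx.algorithms.community.kernighan_lin_bisection(tasks, seed=1)

mesh = [(x, y) for y in range(4) for x in range(4)]     # 4x4 mesh tile coordinates
left_tiles = [t for t in mesh if t[0] < 2]              # left half of the mesh
right_tiles = [t for t in mesh if t[0] >= 2]            # right half of the mesh

# Seed mapping: cores in the same partition (heavy mutual traffic) stay physically close.
mapping = {c: t for c, t in zip(sorted(left), left_tiles)}
mapping.update({c: t for c, t in zip(sorted(right), right_tiles)})
print(mapping)                                          # one initial individual for the GA population
```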

  6. Approximating spectral impact of structural perturbations in large networks

    CERN Document Server

    Milanese, A; Nishikawa, Takashi; Sun, Jie

    2010-01-01

    Determining the effect of structural perturbations on the eigenvalue spectra of networks is an important problem because the spectra characterize not only their topological structures, but also their dynamical behavior, such as synchronization and cascading processes on networks. Here we develop a theory for estimating the change of the largest eigenvalue of the adjacency matrix or the extreme eigenvalues of the graph Laplacian when a small but arbitrary set of links is added to or removed from the network. We demonstrate the effectiveness of our approximation schemes using both real and artificial networks, showing in particular that we can accurately obtain the spectral ranking of small subgraphs. We also propose a local iterative scheme which computes the relative ranking of a subgraph using only the connectivity information of its neighbors within a few links. Our results may not only contribute to our theoretical understanding of dynamical processes on networks, but also lead to practical applications in ran...
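
    For an undirected network, the standard first-order perturbation estimate for the shift of the largest adjacency eigenvalue is delta_lambda ≈ (v' dA v) / (v' v), where v is the leading eigenvector and dA the perturbation. The sketch below compares this estimate with the exact recomputed value on a random test graph; it illustrates the general idea only, not the authors' specific scheme.

```python
# First-order estimate of the change in the largest adjacency eigenvalue after adding one edge.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(200, 0.05, seed=2)
A = nx.to_numpy_array(G)
eigvals, eigvecs = np.linalg.eigh(A)                  # symmetric matrix, so eigh applies
lam, v = eigvals[-1], eigvecs[:, -1]                  # largest eigenvalue and its eigenvector

# Pick some currently non-adjacent node pair to connect.
i, j = next((a, b) for a in G for b in G if a < b and not G.has_edge(a, b))
dA = np.zeros_like(A)
dA[i, j] = dA[j, i] = 1.0

d_lam_est = v @ dA @ v / (v @ v)                      # equals 2*v[i]*v[j] for a unit-norm v
d_lam_exact = np.linalg.eigvalsh(A + dA)[-1] - lam
print(d_lam_est, d_lam_exact)
```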

  7. Explicit integration with GPU acceleration for large kinetic networks

    International Nuclear Information System (INIS)

    Brock, Benjamin; Belt, Andrew; Billings, Jay Jay; Guidry, Mike

    2015-01-01

    We demonstrate the first implementation of recently-developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in computation time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
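
    The flavour of such fast explicit schemes can be seen in the single-species asymptotic update below, a toy sketch assuming one stiff species with constant production; the real networks couple hundreds of species and are evaluated on the GPU.

```python
# Explicit asymptotic update for dy/dt = F_plus - k*y:
#   y_{n+1} = (y_n + dt*F_plus) / (1 + dt*k)
# Unlike forward Euler, this remains stable even when dt >> 1/k. Values are illustrative.
def asymptotic_step(y, F_plus, k, dt):
    return (y + dt * F_plus) / (1.0 + dt * k)

y, F_plus, k, dt = 1.0, 0.5, 1.0e6, 1.0e-3    # strongly stiff case: k*dt = 1000
for _ in range(20):
    y = asymptotic_step(y, F_plus, k, dt)
print(y, F_plus / k)                           # y relaxes toward the equilibrium F_plus/k
```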
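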

  8. Impact of Large Scale Energy Efficiency Programs On Consumer Tariffs and Utility Finances in India

    Energy Technology Data Exchange (ETDEWEB)

    Abhyankar, Nikit; Phadke, Amol

    2011-01-20

    Large-scale EE programs would modestly increase tariffs but reduce consumers' electricity bills significantly. However, the primary benefit of EE programs is a significant reduction in power shortages, which might make these programs politically acceptable even if tariffs increase. To increase political support, utilities could pursue programs that would result in minimal tariff increases. This can be achieved in four ways: (a) focus only on low-cost programs (such as replacing electric water heaters with gas water heaters); (b) sell power conserved through the EE program to the market at a price higher than the cost of peak power purchase; (c) focus on programs where a partial utility subsidy of incremental capital cost might work and (d) increase the number of participant consumers by offering a basket of EE programs to fit all consumer subcategories and tariff tiers. Large scale EE programs can result in consistently negative cash flows and significantly erode the utility's overall profitability. In case the utility is facing shortages, the cash flow is very sensitive to the marginal tariff of the unmet demand. This will have an important bearing on the choice of EE programs in Indian states where low-paying rural and agricultural consumers form the majority of the unmet demand. These findings clearly call for a flexible, sustainable solution to the cash-flow management issue. One option is to include a mechanism like FAC in the utility incentive mechanism. Another sustainable solution might be to have the net program cost and revenue loss built into utility's revenue requirement and thus into consumer tariffs up front. However, the latter approach requires institutionalization of EE as a resource. The utility incentive mechanisms would be able to address the utility disincentive of forgone long-run return but have a minor impact on consumer benefits. Fundamentally, providing incentives for EE programs to make them comparable to supply

  9. Decentralized State-Observer-Based Traffic Density Estimation of Large-Scale Urban Freeway Network by Dynamic Model

    Directory of Open Access Journals (Sweden)

    Yuqi Guo

    2017-08-01

    Full Text Available In order to estimate traffic densities in a large-scale urban freeway network in an accurate and timely fashion when traffic sensors do not cover the freeway network completely and thus only local measurement data can be utilized, this paper proposes a decentralized state observer approach based on a macroscopic traffic flow model. Firstly, by using the well-known cell transmission model (CTM), the urban freeway network is modeled in the way of distributed systems. Secondly, based on the model, a decentralized observer is designed. With the help of the Lyapunov function and S-procedure theory, the observer gains are computed by using the linear matrix inequality (LMI) technique. Thus, the traffic densities of the whole road network can be estimated by the designed observer. Finally, this method is applied to the outer ring of Beijing's second ring road and experimental results demonstrate the effectiveness and applicability of the proposed approach.
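
    For readers unfamiliar with the cell transmission model used as the process model here, one density-update step over a short cell chain looks roughly like the sketch below; all parameters and boundary flows are invented for illustration.

```python
# One cell-transmission-model (CTM) density update over a 4-cell freeway stretch.
import numpy as np

v_f, w, rho_jam, q_max = 100.0, 20.0, 160.0, 2000.0     # km/h, km/h, veh/km, veh/h
dx, dt = 0.5, 10.0 / 3600.0                              # cell length (km), time step (h)

def interface_flow(rho_up, rho_down):
    demand = min(v_f * rho_up, q_max)                    # what the upstream cell can send
    supply = min(w * (rho_jam - rho_down), q_max)        # what the downstream cell can take
    return min(demand, supply)

rho = np.array([30.0, 60.0, 120.0, 40.0])                # current cell densities (veh/km)
q_in = 1500.0                                            # upstream boundary demand (veh/h)
q = [min(q_in, w * (rho_jam - rho[0]), q_max)]           # flow entering the first cell
q += [interface_flow(rho[i], rho[i + 1]) for i in range(len(rho) - 1)]
q.append(min(v_f * rho[-1], q_max))                      # unrestricted outflow at the end
rho = rho + (dt / dx) * (np.array(q[:-1]) - np.array(q[1:]))  # conservation update
print(rho)
```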

  10. Clustering in large networks does not promote upstream reciprocity.

    Directory of Open Access Journals (Sweden)

    Naoki Masuda

    Full Text Available Upstream reciprocity (also called generalized reciprocity) is a putative mechanism for cooperation in social dilemma situations with which players help others when they are helped by somebody else. It is a type of indirect reciprocity. Although upstream reciprocity is often observed in experiments, most theories suggest that it is operative only when players form short cycles such as triangles, implying a small population size, or when it is combined with other mechanisms that promote cooperation on their own. An expectation is that real social networks, which are known to be full of triangles and other short cycles, may accommodate upstream reciprocity. In this study, I extend the upstream reciprocity game proposed for a directed cycle by Boyd and Richerson to the case of general networks. The model is not evolutionary and concerns the conditions under which the unanimity of cooperative players is a Nash equilibrium. I show that an abundance of triangles or other short cycles in a network does little to promote upstream reciprocity. Cooperation is less likely for a larger population size even if triangles are abundant in the network. In addition, in contrast to the results for evolutionary social dilemma games on networks, scale-free networks lead to less cooperation than networks with a homogeneous degree distribution.

  11. Clustering in large networks does not promote upstream reciprocity.

    Science.gov (United States)

    Masuda, Naoki

    2011-01-01

    Upstream reciprocity (also called generalized reciprocity) is a putative mechanism for cooperation in social dilemma situations with which players help others when they are helped by somebody else. It is a type of indirect reciprocity. Although upstream reciprocity is often observed in experiments, most theories suggest that it is operative only when players form short cycles such as triangles, implying a small population size, or when it is combined with other mechanisms that promote cooperation on their own. An expectation is that real social networks, which are known to be full of triangles and other short cycles, may accommodate upstream reciprocity. In this study, I extend the upstream reciprocity game proposed for a directed cycle by Boyd and Richerson to the case of general networks. The model is not evolutionary and concerns the conditions under which the unanimity of cooperative players is a Nash equilibrium. I show that an abundance of triangles or other short cycles in a network does little to promote upstream reciprocity. Cooperation is less likely for a larger population size even if triangles are abundant in the network. In addition, in contrast to the results for evolutionary social dilemma games on networks, scale-free networks lead to less cooperation than networks with a homogeneous degree distribution.

  12. Utilizing social networking sites to promote adolescents' health: a pragmatic review of the literature.

    Science.gov (United States)

    Francomano, Jesse A; Harpin, Scott B

    2015-01-01

    Social networking site use has exploded among youth in the last few years and is being adapted as an important tool for healthcare interventions and serving as a platform for adolescents to gain access to health information. The aim of this study was to examine the strengths, weaknesses, and best practices of utilizing Facebook in adolescent health promotion and research via pragmatic literature review. We also examine how sites can facilitate ethically sound healthcare for adolescents, particularly at-risk youth. We conducted a literature review of health and social sciences literature from the past 5 years related to adolescent health and social network site use. Publications were grouped by shared content then categorized by themes. Five themes emerged: access to healthcare information, peer support and networking, risk and benefits of social network site use in care delivery, overcoming technological barriers, and social network site interventions. More research is needed to better understand how such Web sites can be better utilized to provide access to adolescents seeking healthcare. Given the broad reach of social network sites, all health information must be closely monitored for accurate, safe distribution. Finally, consent and privacy issues are omnipresent in social network sites, which calls for standards of ethical use.

  13. Utilizing social networks, blogging and YouTube in allergy and immunology practices.

    Science.gov (United States)

    Dimov, Ves; Eidelman, Frank

    2015-01-01

    Online social networks are used to connect with friends and family members, and increasingly, to stay up-to-date with the latest news and developments in allergy and immunology. As communication is a central part of healthcare delivery, the utilization of such networking channels in allergy and immunology will continue to grow. There are inherent risks to online social networks related to breaches of patient confidentiality, professionalism and privacy. Malpractice and liability risks should also be considered. There is a paucity of information in the literature on how social network interventions affect patient outcomes. The allergy and immunology community should direct future studies towards investigating how the use of social networks and other technology tools and services can improve patient care.

  14. Selective vulnerability related to aging in large-scale resting brain networks.

    Science.gov (United States)

    Zhang, Hong-Ying; Chen, Wen-Xin; Jiao, Yun; Xu, Yao; Zhang, Xiang-Rong; Wu, Jing-Tao

    2014-01-01

    Normal aging is associated with cognitive decline. Evidence indicates that large-scale brain networks are affected by aging; however, it has not been established whether aging has equivalent effects on specific large-scale networks. In the present study, 40 healthy subjects including 22 older (aged 60-80 years) and 18 younger (aged 22-33 years) adults underwent resting-state functional MRI scanning. Four canonical resting-state networks, including the default mode network (DMN), executive control network (ECN), dorsal attention network (DAN) and salience network, were extracted, and the functional connectivities in these canonical networks were compared between the younger and older groups. We found distinct, disruptive alterations present in the large-scale aging-related resting brain networks: the ECN was affected the most, followed by the DAN. However, the DMN and salience networks showed limited functional connectivity disruption. The visual network served as a control and was similarly preserved in both groups. Our findings suggest that the aged brain is characterized by selective vulnerability in large-scale brain networks. These results could help improve our understanding of the mechanism of degeneration in the aging brain. Additional work is warranted to determine whether selective alterations in the intrinsic networks are related to impairments in behavioral performance.

  15. Range-Free Localization Schemes for Large Scale Sensor Networks

    National Research Council Canada - National Science Library

    He, Tian; Huang, Chengdu; Blum, Brain M; Stankovic, John A; Abdelzaher, Tarek

    2003-01-01

    .... Because coarse accuracy is sufficient for most sensor network applications, solutions in range-free localization are being pursued as a cost-effective alternative to more expensive range-based approaches...

  16. Managing Virtual Networks on Large-Scale Projects

    National Research Council Canada - National Science Library

    Noll, David

    2006-01-01

    The complexity of Boeing's 787 Program is too great for the formal planned information and communication network structure to fully meet the needs of companies, managers, and employees located throughout the world...

  17. Implementation of Cyberinfrastructure and Data Management Workflow for a Large-Scale Sensor Network

    Science.gov (United States)

    Jones, A. S.; Horsburgh, J. S.

    2014-12-01

    Monitoring with in situ environmental sensors and other forms of field-based observation presents many challenges for data management, particularly for large-scale networks consisting of multiple sites, sensors, and personnel. The availability and utility of these data in addressing scientific questions relies on effective cyberinfrastructure that facilitates transformation of raw sensor data into functional data products. It also depends on the ability of researchers to share and access the data in useable formats. In addition to addressing the challenges presented by the quantity of data, monitoring networks need practices to ensure high data quality, including procedures and tools for post processing. Data quality is further enhanced if practitioners are able to track equipment, deployments, calibrations, and other events related to site maintenance and associate these details with observational data. In this presentation we will describe the overall workflow that we have developed for research groups and sites conducting long term monitoring using in situ sensors. Features of the workflow include: software tools to automate the transfer of data from field sites to databases, a Python-based program for data quality control post-processing, a web-based application for online discovery and visualization of data, and a data model and web interface for managing physical infrastructure. By automating the data management workflow, the time from collection to analysis is reduced and sharing and publication is facilitated. The incorporation of metadata standards and descriptions and the use of open-source tools enhances the sustainability and reusability of the data. We will describe the workflow and tools that we have developed in the context of the iUTAH (innovative Urban Transitions and Aridregion Hydrosustainability) monitoring network. The iUTAH network consists of aquatic and climate sensors deployed in three watersheds to monitor Gradients Along Mountain to Urban
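
    As an illustration of the kind of automated quality-control pass performed by the Python-based post-processing step mentioned above, the sketch below applies simple missing-value, range, and spike checks to a toy sensor series; column names, thresholds, and flag labels are assumptions, not the project's actual rules.

```python
# Toy sensor-data quality-control pass with pandas (illustrative thresholds only).
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.date_range("2014-07-01", periods=6, freq="30min"),
    "water_temp_C": [14.2, 14.3, 45.0, 14.1, np.nan, 13.9],   # 45.0 is a bad reading
})

df["qc_flag"] = "ok"
df.loc[df["water_temp_C"].isna(), "qc_flag"] = "missing"
df.loc[(df["water_temp_C"] < -1) | (df["water_temp_C"] > 35), "qc_flag"] = "out_of_range"
df.loc[df["water_temp_C"].diff().abs() > 5, "qc_flag"] = "spike"   # sudden jumps are suspect
print(df)
```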

  18. Environmental versatility promotes modularity in large scale metabolic networks

    OpenAIRE

    Samal A.; Wagner Andreas; Martin O.C.

    2011-01-01

    Abstract Background The ubiquity of modules in biological networks may result from an evolutionary benefit of a modular organization. For instance, modularity may increase the rate of adaptive evolution, because modules can be easily combined into new arrangements that may benefit their carrier. Conversely, modularity may emerge as a by-product of some trait. We here ask whether this last scenario may play a role in genome-scale metabolic networks that need to sustain life in one or more chem...

  19. Allocating service parts in two-echelon networks at a utility company

    NARCIS (Netherlands)

    van den Berg, D.; van der Heijden, Matthijs C.; Schuur, Peter

    2014-01-01

    We study a multi-item, two-echelon, continuous-review inventory problem at a Dutch utility company, Liander. We develop a model that optimizes the quantities of service parts and their allocation in the two-echelon network under an aggregate waiting time restriction. Specific aspects that we address

  20. A relative rate utility based distributed power allocation algorithm for Cognitive Radio Networks

    DEFF Research Database (Denmark)

    Mahmood, Nurul Huda; Øien, G.E.; Lundheim, L.

    2012-01-01

    In an underlay Cognitive Radio Network, multiple secondary users coexist geographically and spectrally with multiple primary users under a constraint on the maximum received interference power at the primary receivers. Given such a setting, one may ask "how to achieve maximum utility benefit...

  1. How an existing telecommunications network can support the deployment of smart meters in a water utility?

    Directory of Open Access Journals (Sweden)

    Samuel de Barros Moraes

    2015-12-01

    Full Text Available This case study, based on interviews and technical analysis of a Brazilian water utility with more than 10 million clients, aims to understand what adjustments a telecommunications network developed for operational and corporate use demands in order to support a smart metering system, identifying the associated synergies and challenges.

  2. Architecture and design of optical path networks utilizing waveband virtual links

    Science.gov (United States)

    Ito, Yusaku; Mori, Yojiro; Hasegawa, Hiroshi; Sato, Ken-ichi

    2016-02-01

    We propose a novel optical network architecture that uses waveband virtual links, each of which can carry several optical paths, to directly bridge distant node pairs. Future photonic networks should not only transparently cover extended areas but also expand fiber capacity. However, the traversal of many ROADM nodes impairs the optical signal due to spectrum narrowing. To suppress the degradation, the bandwidth of guard bands needs to be increased, which degrades fiber frequency utilization. Waveband granular switching allows us to apply broader pass-band filtering at ROADMs and to insert sufficient guard bands between wavebands with minimum frequency utilization offset. The scheme resolves the severe spectrum narrowing effect. Moreover, the guard band between optical channels in a waveband can be minimized, which increases the number of paths that can be accommodated per fiber. In the network, wavelength path granular routing is done without utilizing waveband virtual links, and it still suffers from spectrum narrowing. A novel network design algorithm that can bound the spectrum narrowing effect by limiting the number of hops (traversed nodes that need wavelength path level routing) is proposed in this paper. This algorithm dynamically changes the waveband virtual link configuration according to the traffic distribution variation, where optical paths that need many node hops are effectively carried by virtual links. Numerical experiments demonstrate that the number of necessary fibers is reduced by 23% compared with conventional optical path networks.

  3. Status of Utilizing Social Media Networks in the Teaching-Learning Process at Public Jordanian Universities

    Directory of Open Access Journals (Sweden)

    Muneera Abdalkareem Alshdefait

    2018-03-01

    Full Text Available This study aimed at finding out the status of utilizing social media networks in the teaching-learning process at public Jordanian Universities. To achieve the goal of the study, the descriptive developmental method was used and a questionnaire was developed, consisting of (35) statements. The questionnaire was checked for its validity and reliability. Then it was distributed to a sample of (382) male and female students from the undergraduate and graduate levels. The study results showed that the participants gave a low score to the status of utilizing social media networks in the teaching-learning process at public Jordanian universities. The results also showed that there were statistically significant differences between the participants of the study according to the academic rank attributed to the graduate students, and according to gender attributed to male students at the instrument macro level and on all dimensions of the two variables. In light of these results, the study recommended that public universities should utilize modern technology in the educational process, urge and encourage the teaching staff members to use the social media networks in the teaching-learning process and raise the students' awareness about the benefits of using social media networks. Keywords: Social media networks, Teaching-learning process, Public Jordanian Universities

  4. Flash flood prediction in large dams using neural networks

    Science.gov (United States)

    Múnera Estrada, J. C.; García Bartual, R.

    2009-04-01

    A flow forecasting methodology is presented as a support tool for flood management in large dams. The practical and efficient use of hydrological real-time measurements is necessary to operate early warning systems for flood disaster prevention, either in natural catchments or in those regulated with reservoirs. In this latter case, the optimal dam operation during flood scenarios should reduce the downstream risks, and at the same time achieve a compromise between different goals: structural security, minimization of prediction uncertainty, and water resources system management objectives. Downstream constraints depend basically on the geomorphology of the valley, the critical flow thresholds for flooding, the land use and vulnerability associated with human settlements and their economic activities. A dam operation during a flood event thus requires appropriate strategies depending on the flood magnitude and the initial freeboard at the reservoir. The most important difficulty arises from the inherently stochastic character of peak rainfall intensities, their strong spatial and temporal variability, and the highly nonlinear response of semiarid catchments resulting from initial soil moisture condition and the dominant flow mechanisms. The practical integration of a flow prediction model in a real-time system should include combined techniques of pre-processing, data verification and completion, assimilation of information and implementation of real time filters depending on the system characteristics. This work explores the behaviour of real-time flood forecast algorithms based on artificial neural network (ANN) techniques, in the River Meca catchment (Huelva, Spain), regulated by El Sancho dam. The dam is equipped with three Taintor gates of 12x6 meters. The hydrological data network includes five high-resolution automatic pluviometers (dt=10 min) and three high precision water level sensors in the reservoir. A cross correlation analysis between precipitation data
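
    One common way to turn such a cross-correlation analysis into ANN inputs is to select the rainfall lags most correlated with the flow to be forecast, as in this sketch on synthetic 10-minute series (not the Meca catchment data).

```python
# Pick rainfall input lags for a forecast model by lagged cross-correlation (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
rain = rng.gamma(0.3, 2.0, size=500)                            # 10-min rainfall intensities
flow = np.convolve(rain, np.exp(-np.arange(12) / 4.0))[:500]    # delayed catchment response

def lagged_corr(a, b, lag):
    return np.corrcoef(a[:len(a) - lag], b[lag:])[0, 1]

scores = [(lagged_corr(rain, flow, k), k) for k in range(1, 12)]
print(max(scores))   # the best-correlated lags become ANN input features
```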

  5. Mobile Virtual Network Operator Information Systems for Increased Sustainability in Utilities

    DEFF Research Database (Denmark)

    Joensen, Hallur Leivsgard; Tambo, Torben

    2011-01-01

    ..., sales and buying processes are separated from physical networks and energy production. This study aims to characterise and evaluate information systems supporting the transformation of the free market-orientation of energy and provision of utilities in a cross-sectorial proposition known as Mobile Virtual Network Operator (MVNO). Emphasis is particularly on standardised information systems for automatically linking consumers, sellers and integration of network infrastructure actors. The method used is a feasibility study assessing business and information processes of a forthcoming utilities market ... sales from efficiency of business processes, underlying information systems, and the ability to make the link from consumption to cost visual and transparent to consumers. The conclusion is that the energy sector should look into other sectors and learn from information systems which ease up business ...

  6. Unscheduled load flow effect due to large variation in the distributed generation in a subtransmission network

    Science.gov (United States)

    Islam, Mujahidul

    A sustainable energy delivery infrastructure implies the safe and reliable accommodation of large scale penetration of renewable sources in the power grid. In this dissertation it is assumed there will be no significant change in the power transmission and distribution structure currently in place; except in the operating strategy and regulatory policy. That is to say, with the same old structure, the path towards unveiling a high penetration of switching power converters in the power system will be challenging. Some of the dimensions of this challenge are power quality degradation, frequent false trips due to power system imbalance, and losses due to a large neutral current. The ultimate result is the reduced life of many power distribution components - transformers, switches and sophisticated loads. Numerous ancillary services are being developed and offered by the utility operators to mitigate these problems. These services will likely raise the system's operational cost, not only from the utility operators' end, but also reflected on the Independent System Operators and by the Regional Transmission Operators (RTO) due to an unforeseen backlash of frequent variation in the load-side generation or distributed generation. The North American transmission grid is an interconnected system similar to a large electrical circuit. This circuit was not planned but designed over 100 years. The natural laws of physics govern the power flow among loads and generators except where control mechanisms are installed. The control mechanism has not matured enough to withstand the high penetration of variable generators at uncontrolled distribution ends. Unlike a radial distribution system, mesh or loop networks can alleviate complex channels for real and reactive power flow. Significant variation in real power injection and absorption on the distribution side can emerge as a bias signal on the routing reactive power in some physical links or channels that are not distinguishable

  7. Spatial dependencies between large-scale brain networks.

    Directory of Open Access Journals (Sweden)

    Robert Leech

    Full Text Available Functional neuroimaging reveals both increases (task-positive) and decreases (task-negative) in neural activation with many tasks. Many studies show a temporal relationship between task positive and task negative networks that is important for efficient cognitive functioning. Here we provide evidence for a spatial relationship between task positive and negative networks. There are strong spatial similarities between many reported task negative brain networks, termed the default mode network, which is typically assumed to be a spatially fixed network. However, this is not the case. The spatial structure of the DMN varies depending on what specific task is being performed. We test whether there is a fundamental spatial relationship between task positive and negative networks. Specifically, we hypothesize that the distance between task positive and negative voxels is consistent despite different spatial patterns of activation and deactivation evoked by different cognitive tasks. We show significantly reduced variability in the distance between within-condition task positive and task negative voxels than across-condition distances for four different sensory, motor and cognitive tasks--implying that deactivation patterns are spatially dependent on activation patterns (and vice versa), and that both are modulated by specific task demands. We also show a similar relationship between positively and negatively correlated networks from a third 'rest' dataset, in the absence of a specific task. We propose that this spatial relationship may be the macroscopic analogue of microscopic neuronal organization reported in sensory cortical systems, and that this organization may reflect homeostatic plasticity necessary for efficient brain function.

  8. A hybridised variable neighbourhood tabu search heuristic to increase security in a utility network

    International Nuclear Information System (INIS)

    Janssens, Jochen; Talarico, Luca; Sörensen, Kenneth

    2016-01-01

    We propose a decision model aimed at increasing security in a utility network (e.g., electricity, gas, water or communication network). The network is modelled as a graph, the edges of which are unreliable. We assume that all edges (e.g., pipes, cables) have a certain, not necessarily equal, probability of failure, which can be reduced by selecting edge-specific security strategies. We develop a mathematical programming model and a metaheuristic approach that uses a greedy random adaptive search procedure to find an initial solution and uses tabu search hybridised with iterated local search and a variable neighbourhood descent heuristic to improve this solution. The main goal is to reduce the risk of service failure between an origin and a destination node by selecting the right combination of security measures for each network edge given a limited security budget. - Highlights: • A decision model aimed at increasing security in a utility network is proposed. • The goal is to reduce the risk of service failure given a limited security budget. • An exact approach and a variable neighbourhood tabu search heuristic are developed. • A generator for realistic networks is built and used to test the solution methods. • The hybridised heuristic reduces the total risk on average by 32%.
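
    The greedy randomized construction phase of such a heuristic can be sketched as follows: repeatedly pick, from a restricted candidate list of budget-feasible upgrades, a security measure with a good risk-reduction-per-cost ratio. The edge options, costs, and failure probabilities below are invented; the actual model evaluates origin-destination service risk over the whole graph.

```python
# Greedy randomized construction of security measures under a budget (illustrative data).
import random

random.seed(4)
# Per edge: list of (cost, failure probability) options; option 0 means "no measure".
edges = {e: [(0, 0.10), (2, 0.05), (5, 0.01)] for e in ["e1", "e2", "e3", "e4"]}
budget, alpha = 10, 0.5
choice = {e: 0 for e in edges}           # currently selected option per edge
spent = 0

while True:
    cands = []                           # feasible upgrades: (risk drop per unit cost, edge, option)
    for e, opts in edges.items():
        c0, p0 = opts[choice[e]]
        for k in range(choice[e] + 1, len(opts)):
            dc = opts[k][0] - c0
            if dc > 0 and spent + dc <= budget:
                cands.append(((p0 - opts[k][1]) / dc, e, k))
    if not cands:
        break
    cands.sort(reverse=True)
    rcl = cands[:max(1, int(alpha * len(cands)))]   # restricted candidate list
    _, e, k = random.choice(rcl)
    spent += edges[e][k][0] - edges[e][choice[e]][0]
    choice[e] = k

print(choice, spent)
```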

  9. Large-Scale Functional Brain Network Reorganization During Taoist Meditation.

    Science.gov (United States)

    Jao, Tun; Li, Chia-Wei; Vértes, Petra E; Wu, Changwei Wesley; Achard, Sophie; Hsieh, Chao-Hsien; Liou, Chien-Hui; Chen, Jyh-Horng; Bullmore, Edward T

    2016-02-01

    Meditation induces a distinct and reversible mental state that provides insights into brain correlates of consciousness. We explored brain network changes related to meditation by graph theoretical analysis of resting-state functional magnetic resonance imaging data. Eighteen Taoist meditators with varying levels of expertise were scanned using a within-subjects counterbalanced design during resting and meditation states. State-related differences in network topology were measured globally and at the level of individual nodes and edges. Although measures of global network topology, such as small-worldness, were unchanged, meditation was characterized by an extensive and expertise-dependent reorganization of the hubs (highly connected nodes) and edges (functional connections). Areas of sensory cortex, especially the bilateral primary visual and auditory cortices, and the bilateral temporopolar areas, which had the highest degree (or connectivity) during the resting state, showed the biggest decrease during meditation. Conversely, bilateral thalamus and components of the default mode network, mainly the bilateral precuneus and posterior cingulate cortex, had low degree in the resting state but increased degree during meditation. Additionally, these changes in nodal degree were accompanied by reorganization of anatomical orientation of the edges. During meditation, long-distance longitudinal (antero-posterior) edges increased proportionally, whereas orthogonal long-distance transverse (right-left) edges connecting bilaterally homologous cortices decreased. Our findings suggest that transient changes in consciousness associated with meditation introduce convergent changes in the topological and spatial properties of brain functional networks, and the anatomical pattern of integration might be as important as the global level of integration when considering the network basis for human consciousness.

  10. Multi-year expansion planning of large transmission networks

    Energy Technology Data Exchange (ETDEWEB)

    Binato, S; Oliveira, G C [Centro de Pesquisas de Energia Eletrica (CEPEL), Rio de Janeiro, RJ (Brazil)

    1994-12-31

    This paper describes a model for multi-year transmission network expansion to be used in long-term system planning. The network is represented by a linearized (DC) power flow and, for each year, operation costs are evaluated by a linear programming (LP) based algorithm that provides sensitivity indices for circuit reinforcements. A backward/forward approach is proposed to devise an expansion plan over the study period. A case study with the southeastern Brazilian system is presented and discussed. (author) 18 refs., 5 figs., 1 tab.
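
    The linearized (DC) power-flow evaluation at the core of such planning models reduces to one linear solve per network state, as in this 3-bus sketch (toy data, not the southeastern Brazilian system).

```python
# DC power flow on a 3-bus toy system: solve B' * theta = P with a slack bus removed.
import numpy as np

lines = [(0, 1, 10.0), (0, 2, 5.0), (1, 2, 8.0)]   # (from bus, to bus, susceptance in p.u.)
n = 3
B = np.zeros((n, n))
for i, j, b in lines:
    B[i, i] += b; B[j, j] += b
    B[i, j] -= b; B[j, i] -= b

P = np.array([0.9, -0.4, -0.5])                    # net injections (p.u.), summing to zero
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])      # bus 0 is the slack/reference bus
flows = [(i, j, b * (theta[i] - theta[j])) for i, j, b in lines]
print(flows)                                        # line flows, checked against circuit limits
```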

  11. Default network modulation and large-scale network interactivity in healthy young and old adults.

    Science.gov (United States)

    Spreng, R Nathan; Schacter, Daniel L

    2012-11-01

    We investigated age-related changes in default, attention, and control network activity and their interactions in young and old adults. Brain activity during autobiographical and visuospatial planning was assessed using multivariate analysis and with intrinsic connectivity networks as regions of interest. In both groups, autobiographical planning engaged the default network while visuospatial planning engaged the attention network, consistent with a competition between the domains of internalized and externalized cognition. The control network was engaged for both planning tasks. In young subjects, the control network coupled with the default network during autobiographical planning and with the attention network during visuospatial planning. In old subjects, default-to-control network coupling was observed during both planning tasks, and old adults failed to deactivate the default network during visuospatial planning. This failure is not indicative of default network dysfunction per se, evidenced by default network engagement during autobiographical planning. Rather, a failure to modulate the default network in old adults is indicative of a lower degree of flexible network interactivity and reduced dynamic range of network modulation to changing task demands.

  12. Error rate degradation due to switch crosstalk in large modular switched optical networks

    DEFF Research Database (Denmark)

    Saxtoft, Christian; Chidgey, P.

    1993-01-01

    A theoretical model of an optical network incorporating wavelength selective elements, amplifiers, couplers and switches is presented. The model is used to evaluate a large modular switched optical network that provides the capability of adapting easily to changes in network traffic requirements. The network dimensions are shown to be limited by the optical crosstalk in the switch matrices and by the polarization dependent loss in the optical components.

  13. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for analysis of large size (g-kg scale) samples using internal mono standard neutron activation analysis (IM-NAA). The reactor position was characterized by the cadmium ratio method, using an In monitor for the total flux and the sub-cadmium to epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. Radioactive assay was carried out using high resolution gamma ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. Concentrations of Au and Ag were determined in three large, not so homogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and not so homogeneous samples. (author)

  14. In-House Communication Support System Based on the Information Propagation Model Utilizes Social Network

    Science.gov (United States)

    Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji

    Almost all companies are now utilizing computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communications only within the same department. Therefore, in our research, we propose an in-house communication support system which is based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network, and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and such exchanges of information could help to build new relationships between the users who are apart on the social network.

  15. The Design of a Large Scale Airline Network

    NARCIS (Netherlands)

    Carmona Benitez, R.B.

    2012-01-01

    Airlines invest a lot of money before opening new passenger transportation services; for this reason, airlines have to analyze whether their profits will overcome the amount of money they have to invest to open new services. The design and analysis of the feasibility of airline networks can be done by using

  16. Reverse engineering large-scale genetic networks: synthetic versus

    Indian Academy of Sciences (India)

    Development of microarray technology has resulted in an exponential rise in gene expression data. Linear computational methods are of great assistance in identifying molecular interactions, and elucidating the functional properties of gene networks. It overcomes the weaknesses of in vivo experiments including high cost, ...

  17. An efficient grid layout algorithm for biological networks utilizing various biological attributes

    Directory of Open Access Journals (Sweden)

    Kato Mitsuru

    2007-03-01

    Full Text Available Abstract Background Clearly visualized biopathways provide a great help in understanding biological systems. However, manual drawing of large-scale biopathways is time consuming. We proposed a grid layout algorithm that can handle gene-regulatory networks and signal transduction pathways by considering edge-edge crossing, node-edge crossing, distance measure between nodes, and subcellular localization information from Gene Ontology. Consequently, the layout algorithm succeeded in drastically reducing these crossings in the apoptosis model. However, for larger-scale networks, we encountered three problems: (i) the initial layout is often very far from any local optimum because nodes are initially placed at random, (ii) from a biological viewpoint, human layouts still exceed automatic layouts in understanding because, except for subcellular localization, the algorithm does not fully utilize biological information of pathways, and (iii) it employs a local search strategy in which the neighborhood is obtained by moving one node at each step, and automatic layouts suggest that simultaneous movements of multiple nodes are necessary for better layouts, while such an extension may worsen the time complexity. Results We propose a new grid layout algorithm. To address problem (i), we devised a new force-directed algorithm whose output is suitable as the initial layout. For (ii), we considered that an appropriate alignment of nodes having the same biological attribute is one of the most important factors for comprehension, and we defined a new score function that gives an advantage to such configurations. For solving problem (iii), we developed a search strategy that considers swapping nodes as well as moving a node, while keeping the order of the time complexity. Though a naïve implementation would increase the time complexity by one order, we solved this difficulty by devising a method that caches differences between scores of a layout and its possible updates

  18. An effective fractal-tree closure model for simulating blood flow in large arterial networks.

    Science.gov (United States)

    Perdikaris, Paris; Grinberg, Leopold; Karniadakis, George Em

    2015-06-01

    The aim of the present work is to address the closure problem for hemodynamic simulations by developing a flexible and effective model that accurately distributes flow in the downstream vasculature and can stably provide a physiological pressure outflow boundary condition. To achieve this goal, we model blood flow in the sub-pixel vasculature by using a non-linear 1D model in self-similar networks of compliant arteries that mimic the structure and hierarchy of vessels in the meso-vascular regime (radii [Formula: see text]). We introduce a variable vessel length-to-radius ratio for small arteries and arterioles, while also addressing non-Newtonian blood rheology and arterial wall viscoelasticity effects in small arteries and arterioles. This methodology aims to overcome substantial cut-off radius sensitivities, typically arising in structured tree and linearized impedance models. The proposed model is not sensitive to outflow boundary conditions applied at the end points of the fractal network, and thus does not require calibration of resistance/capacitance parameters typically required for outflow conditions. The proposed model converges to a periodic state in two cardiac cycles even when started from zero-flow initial conditions. The resulting fractal-trees typically consist of thousands to millions of arteries, posing the need for efficient parallel algorithms. To this end, we have scaled up a Discontinuous Galerkin solver that utilizes the MPI/OpenMP hybrid programming paradigm to thousands of computer cores, and can simulate blood flow in networks of millions of arterial segments at the rate of one cycle per 5 min. The proposed model has been extensively tested on a large and complex cranial network with 50 parent, patient-specific arteries and 21 outlets to which fractal trees were attached, resulting in a network of up to 4,392,484 vessels in total, and a detailed network of the arm with 276 parent arteries and 103 outlets (a total of 702,188 vessels

  19. A P2P Query Algorithm for Opportunistic Networks Utilizing betweenness Centrality Forwarding

    Directory of Open Access Journals (Sweden)

    Jianwei Niu

    2013-01-01

    Full Text Available With the proliferation of high-end mobile devices that feature wireless interfaces, many promising applications are enabled in opportunistic networks. In contrast to traditional networks, opportunistic networks utilize the mobility of nodes to relay messages in a store-carry-forward paradigm. Thus, the relay process in opportunistic networks faces several practical challenges in terms of delay and delivery rate. In this paper, we propose a novel P2P query algorithm, namely Betweenness Centrality Forwarding (PQBCF), for opportunistic networking. PQBCF adopts a forwarding metric called betweenness centrality (BC), borrowed from social network analysis, to quantify the activity level of nodes in the network. In PQBCF, nodes with a higher BC are preferable to serve as relays, leading to a higher query success rate and lower query delay. A comparison with the state-of-the-art algorithms reveals that PQBCF can provide better performance on both the query success ratio and query delay, and approaches the performance of Epidemic Routing (ER) with much less resource consumption.
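
    The forwarding rule amounts to ranking candidate relays by betweenness centrality, as in the minimal networkx sketch below on a synthetic contact graph; in a real opportunistic network the metric would be computed from locally observed contact history rather than a global snapshot.

```python
# Rank relay candidates by betweenness centrality (illustrative contact graph).
import networkx as nx

G = nx.barabasi_albert_graph(100, 2, seed=5)     # hypothetical aggregated contact graph
bc = nx.betweenness_centrality(G)

def prefer_relay(candidates):
    # Hand the query copy to the neighbour with the highest betweenness centrality.
    return max(candidates, key=lambda n: bc[n])

print(prefer_relay(list(G.neighbors(0))))
```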

  20. Cross Layer Design for Localization in Large-Scale Underwater Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yuanfeng ZHANG

    2014-02-01

    Full Text Available There are many technical challenges in designing large-scale underwater sensor networks, especially sensor node localization. Although many papers have studied large-scale sensor node localization, previous studies mainly study the localization algorithm without a cross layer design for localization. In this paper, by utilizing the hierarchical network structure of underwater sensor networks, we propose a new large-scale underwater acoustic localization scheme based on cross layer design. In this scheme, localization is performed in a hierarchical way, and the whole localization process spans the physical layer, data link layer and application layer. We add pipeline parameters matched to the acoustic channel into the MAC protocol to increase the authenticity of the large-scale underwater sensor network simulation, and analyze different localization algorithms. We conduct extensive simulations, and our results show that both the MAC layer protocol and the localization algorithm affect the localization results, which allows balancing the trade-off between localization accuracy, localization coverage, and communication cost.

  1. Distributed processing and network of data acquisition and diagnostics control for Large Helical Device (LHD)

    International Nuclear Information System (INIS)

    Nakanishi, H.; Kojima, M.; Hidekuma, S.

    1997-11-01

    The LHD (Large Helical Device) data processing system has been designed to deal with the huge amount of diagnostics data, 600-900 MB per 10-second short-pulse experiment, in preparation for the first plasma experiment in March 1998. The recent increase in data volume made it necessary to adopt a fully distributed system structure that uses multiple data transfer paths in parallel and separates all computer functions into clients and servers. The fundamental element installed for every diagnostic device consists of two kinds of server computers: the data acquisition PC/Windows NT and the real-time diagnostics control VME/VxWorks. To cope with the diversified kinds of both device control channels and diagnostics data, the object-oriented method is utilized throughout the development of this system. It not only reduces the development burden, but also widens software portability and flexibility. 100 Mbps FDDI-based fast networks will re-integrate the distributed server computers so that they behave as one virtual macro-machine for users. The network methods applied in the LHD data processing system are based entirely on TCP/IP internet technology, giving remote collaborators the same accessibility as local participants. (author)

  2. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    Science.gov (United States)

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
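
    A minimal PyTorch sketch of the "traffic as images" idea, mapping a time-space speed matrix to multi-step, network-wide predictions; the layer sizes, horizon, and link count are placeholders rather than the paper's architecture.

```python
# CNN over a time-space speed matrix (batch, 1, past steps, links) -> (batch, horizon, links).
import torch
import torch.nn as nn

class SpeedCNN(nn.Module):
    def __init__(self, n_links=100, horizon=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.head = nn.Linear(32 * 8 * 8, n_links * horizon)
        self.n_links, self.horizon = n_links, horizon

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z).view(-1, self.horizon, self.n_links)

x = torch.randn(4, 1, 60, 100)            # 4 samples, 60 past time steps, 100 links
print(SpeedCNN()(x).shape)                # torch.Size([4, 10, 100])
```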

  3. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    Science.gov (United States)

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process.

  4. Information Extraction from Large-Multi-Layer Social Networks

    Science.gov (United States)

    2015-08-06

    Methods that fall into this category include spectral algorithms, modularity methods, and methods that rely on statistical inference ...

  5. The moderating role of social networks in the relationship between alcohol consumption and treatment utilization for alcohol-related problems

    Science.gov (United States)

    Mowbray, Orion

    2014-01-01

    Many individuals wait until alcohol use becomes severe before treatment is sought. However, social networks, or the number of social groups an individual belongs to, may play a moderating role in this relationship. Logistic regression examined the interaction of alcohol consumption and social networks as a predictor of treatment utilization while adjusting for sociodemographic and clinical variables among 1,433 lifetime alcohol-dependent respondents from wave 2 of the National Epidemiologic Survey on Alcohol Related Conditions (NESARC). Results showed that social networks moderate the relationship between alcohol consumption and treatment utilization such that for individuals with few network ties, the relationship between alcohol consumption and treatment utilization was diminished, compared to the relationship between alcohol consumption and treatment utilization for individuals with many network ties. Findings offer insight into how social networks, at times, can influence individuals to pursue treatment, while at other times, influence individuals to stay out of treatment, or seek treatment substitutes. PMID:24462223

  6. A wireless sensor network design and evaluation for large structural strain field monitoring

    International Nuclear Information System (INIS)

    Qiu, Zixue; Wu, Jian; Yuan, Shenfang

    2011-01-01

    Structural strain changes under external environmental or mechanical loads are the main monitoring parameters in structural health monitoring or mechanical property tests. This paper presents a wireless sensor network designed for monitoring large structural strain field variation. First of all, a precision strain sensor node is designed for multi-channel strain gauge signal conditioning and wireless monitoring. In order to establish a synchronous strain data acquisition network, the cluster-star network synchronization method is designed in detail. To verify the functionality of the designed wireless network for strain field monitoring capability, a multi-point network evaluation system is developed for an experimental aluminum plate structure for load variation monitoring. Based on the precision wireless strain nodes, the wireless data acquisition network is deployed to synchronously gather, process and transmit strain gauge signals and monitor results under concentrated loads. This paper shows the efficiency of the wireless sensor network for large structural strain field monitoring

  7. Distinct enlargement of network size or measurement speed for serial FBG sensor networks utilizing SIK-DS-CDMA

    Energy Technology Data Exchange (ETDEWEB)

    Abbenseth, S; Lochmann, S I [Hochschule Wismar, Univ. of Technology, Business and Design, Dept. of Electrical Engineering and Informatics, Philipp-Mueller-Strasse, 23952, Wismar (Germany)

    2005-01-01

    Owing to their spectral selectivity and adjustable reflectivity, FBGs are predestined for serial networking. Presently, addressing is realised by time division multiplex (TDM) or wavelength division multiplex (WDM). However, these technologies have significant disadvantages regarding the effective use of the prevailing resources, time and wavelength, respectively. In this paper a new scheme capable of addressing a large number of FBGs in a single serial network is proposed and compared to TDM and WDM. Using all-optical sequence inversion keyed (SIK) direct sequence (DS) code division multiplex (CDM), it overcomes the restrictions in handling the resources of time and wavelength without losing accuracy.

  8. Distinct enlargement of network size or measurement speed for serial FBG sensor networks utilizing SIK-DS-CDMA

    International Nuclear Information System (INIS)

    Abbenseth, S; Lochmann, S I

    2005-01-01

    Owing to their spectral selectivity and adjustable reflectivity, FBGs are predestined for serial networking. Presently, addressing is realised by time division multiplex (TDM) or wavelength division multiplex (WDM). However, these technologies have significant disadvantages regarding the effective use of the prevailing resources, time and wavelength, respectively. In this paper a new scheme capable of addressing a large number of FBGs in a single serial network is proposed and compared to TDM and WDM. Using all-optical sequence inversion keyed (SIK) direct sequence (DS) code division multiplex (CDM), it overcomes the restrictions in handling the resources of time and wavelength without losing accuracy

  9. Autonomous management of a recursive area hierarchy for large scale wireless sensor networks using multiple parents

    Energy Technology Data Exchange (ETDEWEB)

    Cree, Johnathan Vee [Washington State Univ., Pullman, WA (United States); Delgado-Frias, Jose [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-03-01

    Large-scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three- to five-year network lifetimes. In order to support these requirements, large-scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis present significant benefits to wireless sensor networks and should configure the network in a way such that these higher-level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher-level functions with significant inherent benefits such as, but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.

  10. A Topology Visualization Early Warning Distribution Algorithm for Large-Scale Network Security Incidents

    Directory of Open Access Journals (Sweden)

    Hui He

    2013-01-01

    It is of great significance to research the early warning system for large-scale network security incidents. It can improve the network system’s emergency response capabilities, alleviate the cyber attacks’ damage, and strengthen the system’s counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithm and technology of the system are mainly discussed. The large-scale network system’s plane visualization is realized based on the divide-and-conquer approach. First, the topology of the large-scale network is divided into some small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks’ topologies are combined into the overall topology based on the automatic distribution algorithm of force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topology.
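
    The divide-and-conquer layout idea can be sketched in a few lines: partition the topology, lay out each part independently, then place the parts relative to each other. The sketch below uses the Python networkx library with a generic community detection step and a simple grid placement, which are stand-ins for the MLkP/CR partitioner and the force-analysis distribution algorithm described in the record.

      import networkx as nx

      G = nx.barabasi_albert_graph(3000, 2)             # toy stand-in for a large topology
      parts = nx.community.louvain_communities(G, seed=1)

      pos, grid = {}, int(len(parts) ** 0.5) + 1
      for i, nodes in enumerate(parts):
          # Lay out each small-scale subnetwork independently (the parallelizable step).
          sub_pos = nx.spring_layout(G.subgraph(nodes), seed=1)
          dx, dy = 3.0 * (i % grid), 3.0 * (i // grid)  # coarse placement of each block
          pos.update({n: (x + dx, y + dy) for n, (x, y) in sub_pos.items()})

      # pos now holds plane coordinates for the whole topology, assembled from the parts.
      print(len(pos))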

  11. Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks

    Science.gov (United States)

    Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.

    Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
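
    The core property used above, that any maximal independent set is also a dominating set, is easy to check computationally: a node outside a maximal independent set must have a neighbor inside it, otherwise the set would not be maximal. A minimal sketch with the Python networkx library (using its built-in random-greedy MIS routine rather than the authors' construction) is:

      import networkx as nx

      G = nx.barabasi_albert_graph(10000, 3)          # synthetic scale-free network
      mis = nx.maximal_independent_set(G, seed=42)

      # Every maximal independent set dominates the graph; verify and report its relative size.
      print(nx.is_dominating_set(G, mis), len(mis) / G.number_of_nodes())
      # Degree-aware selection orders (an assumption, not the paper's exact scheme) typically
      # shrink the set further on scale-free topologies.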

  12. Long-term fish monitoring in large rivers: Utility of “benchmarking” across basins

    Science.gov (United States)

    Ward, David L.; Casper, Andrew F.; Counihan, Timothy D.; Bayer, Jennifer M.; Waite, Ian R.; Kosovich, John J.; Chapman, Colin; Irwin, Elise R.; Sauer, Jennifer S.; Ickes, Brian; McKerrow, Alexa

    2017-01-01

    In business, benchmarking is a widely used practice of comparing your own business processes to those of other comparable companies and incorporating identified best practices to improve performance. Biologists and resource managers designing and conducting monitoring programs for fish in large river systems tend to focus on single river basins or segments of large rivers, missing opportunities to learn from those conducting fish monitoring in other rivers. We briefly examine five long-term fish monitoring programs in large rivers in the United States (Colorado, Columbia, Mississippi, Illinois, and Tallapoosa rivers) and identify opportunities for learning across programs by detailing best monitoring practices and why these practices were chosen. Although monitoring objectives, methods, and program maturity differ between each river system, examples from these five case studies illustrate the important role that long-term monitoring programs play in interpreting temporal and spatial shifts in fish populations for both established objectives and newly emerging questions. We suggest that deliberate efforts to develop a broader collaborative network through benchmarking will facilitate sharing of ideas and development of more effective monitoring programs.

  13. Evolution of the large Deep Space Network antennas

    Science.gov (United States)

    Imbriale, William A.

    1991-12-01

    The evolution of the largest antenna of the US NASA Deep Space Network (DSN) is described. The design, performance analysis, and measurement techniques, beginning with its initial 64-m operation at S-band (2295 MHz) in 1966 and continuing through the present Ka-band (32-GHz) operation at 70 m, are described. Although their diameters and mountings differ, these parabolic antennas all employ a Cassegrainian feed system, and each antenna dish surface is constructed of precision-shaped perforated-aluminum panels that are secured to an open steel framework

  14. Status of Utilizing Social Media Networks in the Teaching-Learning Process at Public Jordanian Universities

    OpenAIRE

    Muneera Abdalkareem Alshdefait; Mohammad . S. Alzboon

    2018-01-01

    This study aimed at finding out the status of utilizing social media networks in the teaching-learning process at public Jordanian Universities. To achieve the goal of the study, the descriptive developmental method was used and a questionnaire was developed, consisting of (35) statements. The questionnaire was checked for its validity and reliability. Then it was distributed to a sample of (382) male and female students from the undergraduate and graduate levels. The study results showed tha...

  15. Thermodynamically based constraints for rate coefficients of large biochemical networks.

    Science.gov (United States)

    Vlad, Marcel O; Ross, John

    2009-01-01

    Wegscheider cyclicity conditions are relationships among the rate coefficients of a complex reaction network, which ensure the compatibility of kinetic equations with the conditions for thermodynamic equilibrium. The detailed balance at equilibrium, that is the equilibration of forward and backward rates for each elementary reaction, leads to compatibility between the conditions of kinetic and thermodynamic equilibrium. Therefore, Wegscheider cyclicity conditions can be derived by eliminating the equilibrium concentrations from the conditions of detailed balance. We develop matrix algebra tools needed to carry out this elimination, reexamine an old derivation of the general form of Wegscheider cyclicity condition, and develop new derivations which lead to more compact and easier-to-use formulas. We derive scaling laws for the nonequilibrium rates of a complex reaction network, which include Wegscheider conditions as a particular case. The scaling laws for the rates are used for clarifying the kinetic and thermodynamic meaning of Wegscheider cyclicity conditions. Finally, we discuss different ways of using Wegscheider cyclicity conditions for kinetic computations in systems biology.
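
    As a minimal illustration (a standard textbook case, not taken from the paper), consider a closed loop of three reversible reactions A ⇌ B ⇌ C ⇌ A with forward rate coefficients k_1, k_2, k_3 and backward coefficients k_{-1}, k_{-2}, k_{-3}. In LaTeX notation, detailed balance at equilibrium reads

      \[
        k_1 [A]_e = k_{-1} [B]_e, \qquad
        k_2 [B]_e = k_{-2} [C]_e, \qquad
        k_3 [C]_e = k_{-3} [A]_e ,
      \]

    and multiplying the three relations eliminates the equilibrium concentrations, leaving the Wegscheider cyclicity condition

      \[
        k_1 \, k_2 \, k_3 = k_{-1} \, k_{-2} \, k_{-3} ,
      \]

    i.e. around any closed reaction loop the product of forward rate coefficients must equal the product of backward rate coefficients, independently of the equilibrium concentrations.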

  16. Joint Utility-Based Power Control and Receive Beamforming in Decentralized Wireless Networks

    Directory of Open Access Journals (Sweden)

    Angela Feistel

    2010-01-01

    This paper addresses the problem of joint resource allocation in general wireless networks and its practical implementation aspects. The objective is to allocate transmit powers and receive beamformers to the users in order to maximize a network-wide utility that represents the attained QoS and is a function of the signal-to-interference ratios. This problem is much more intricate than the corresponding QoS-based power control problem. In particular, it is not known which class of utility functions allows for a convex formulation of this problem. In case of perfect synchronization, the joint power and receiver control problem can be reformulated as a power control problem under optimal receivers. Standard gradient projection methods can be applied to solve this problem. However, these algorithms are not applicable in decentralized wireless networks. Therefore, we decompose the problem and propose a convergent alternate optimization that is amenable to distributed implementation. In addition, in real-world networks noisy measurements and estimations occur. Thus, the proposed algorithm has to be investigated in the framework of stochastic approximation. We discuss practical implementation aspects of the proposed stochastic algorithm and investigate its convergence properties by simulations.

  17. Utility of large spot binocular indirect laser delivery for peripheral photocoagulation therapy in children.

    Science.gov (United States)

    Balasubramaniam, Saranya C; Mohney, Brian G; Bang, Genie M; Link, Thomas P; Pulido, Jose S

    2012-09-01

    The purpose of this article is to demonstrate the utility of the large spot size (LSS) setting using a binocular laser indirect delivery system for peripheral ablation in children. One patient with bilateral retinopathy of prematurity received photocoagulation with standard spot size burns placed adjacent to LSS burns. Using a pixel analysis program called Image J on the Retcam picture, the areas of each retinal spot size were determined in units of pixels, giving a standard spot range of 805 to 1294 pixels and an LSS range of 1699 to 2311 pixels. Additionally, fluence was calculated using the theoretical retinal areas produced by each spot size: the standard spot setting gave 462 mJ/mm2 and the LSS setting 104 mJ/mm2. For eyes with retinopathy of prematurity, our study shows that LSS laser indirect delivery halves the number of spots required for treatment and reduces fluence to roughly one-quarter of the standard-spot value, producing more uniform spots.

  18. Effect of large aspect ratio of biomass particles on carbon burnout in a utility boiler

    Energy Technology Data Exchange (ETDEWEB)

    D. Gera; M.P. Mathur; M.C. Freeman; Allen Robinson [Fluent, Inc./NETL, Morgantown, WV (United States)

    2002-12-01

    This paper reports on the development and validation of comprehensive combustion sub-models that include the effect of the large aspect ratio of biomass (switchgrass) particles on carbon burnout and temperature distribution inside the particles. Temperature and carbon burnout data are compared from two different models that are formulated by assuming (i) the particles are cylindrical and conduct heat internally, and (ii) the particles are spherical without internal heat conduction, i.e., no temperature gradient exists inside the particle. It was inferred that the latter model significantly underpredicted the temperature of the particle and, consequently, the burnout. Additionally, some results from cofiring biomass (10% heat input) with pulverized coal (90% heat input) are compared with the pulverized coal (100% heat input) simulations and coal experiments in a tangentially fired 150 MWe utility boiler. 26 refs., 7 figs., 4 tabs.

  19. Quantitative utilization of prior biological knowledge in the Bayesian network modeling of gene expression data

    Directory of Open Access Journals (Sweden)

    Gao Shouguo

    2011-08-01

    Background Bayesian Network (BN) is a powerful approach to reconstructing genetic regulatory networks from gene expression data. However, expression data by itself suffers from high noise and lack of power. Incorporating prior biological knowledge can improve the performance. As each type of prior knowledge on its own may be incomplete or limited by quality issues, integrating multiple sources of prior knowledge to utilize their consensus is desirable. Results We introduce a new method to incorporate the quantitative information from multiple sources of prior knowledge. It first uses the Naïve Bayesian classifier to assess the likelihood of functional linkage between gene pairs based on prior knowledge. In this study we included co-citation in PubMed and semantic similarity in Gene Ontology annotation. A candidate network edge reservoir is then created in which the copy number of each edge is proportional to the estimated likelihood of linkage between the two corresponding genes. In the network simulation the Markov chain Monte Carlo sampling algorithm is adopted, which samples from this reservoir at each iteration to generate new candidate networks. We evaluated the new algorithm using both simulated and real gene expression data, including that from a yeast cell cycle and a mouse pancreas development/growth study. Incorporating prior knowledge led to a ~2-fold increase in the number of known transcription regulations recovered, without significant change in false positive rate. In contrast, without the prior knowledge BN modeling is not always better than a random selection, demonstrating the necessity in network modeling to supplement the gene expression data with additional information. Conclusion Our new development provides a statistical means to utilize the quantitative information in prior biological knowledge in the BN modeling of gene expression data, which significantly improves the performance.
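
    The reservoir idea described above can be sketched briefly: prior-knowledge scores are turned into edge copy numbers, and MCMC proposals draw candidate edges from that reservoir so that prior-supported edges are proposed more often. The scores, copy-number scaling and gene names below are hypothetical; the record does not give the actual parameter values.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical prior likelihoods of functional linkage for candidate gene pairs,
      # e.g. combined from PubMed co-citation and GO semantic similarity.
      edges = [("g1", "g2"), ("g1", "g3"), ("g2", "g4"), ("g3", "g4")]
      linkage_prob = np.array([0.9, 0.2, 0.5, 0.7])

      # Edge reservoir: copy number proportional to the estimated likelihood of linkage.
      copies = np.maximum(1, np.round(10 * linkage_prob)).astype(int)
      reservoir = [e for e, c in zip(edges, copies) for _ in range(c)]

      def propose_edge():
          # Each MCMC iteration draws a candidate edge to add or remove from the reservoir.
          return reservoir[rng.integers(len(reservoir))]

      print(propose_edge())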

  20. Survey and analysis of selected jointly owned large-scale electric utility storage projects

    Energy Technology Data Exchange (ETDEWEB)

    1982-05-01

    The objective of this study was to examine and document the issues surrounding the curtailment in commercialization of large-scale electric storage projects. It was sensed that if these issues could be uncovered, then efforts might be directed toward clearing away these barriers and allowing these technologies to penetrate the market to their maximum potential. Joint-ownership of these projects was seen as a possible solution to overcoming the major barriers, particularly economic barriers, of commercialization. Therefore, discussions with partners involved in four pumped storage projects took place to identify the difficulties and advantages of joint-ownership agreements. The four plants surveyed included Yards Creek (Public Service Electric and Gas and Jersey Central Power and Light); Seneca (Pennsylvania Electric and Cleveland Electric Illuminating Company); Ludington (Consumers Power and Detroit Edison); and Bath County (Virginia Electric Power Company and Allegheny Power System, Inc.). Also investigated were several pumped storage projects which were never completed. These included Blue Ridge (American Electric Power); Cornwall (Consolidated Edison); Davis (Allegheny Power System, Inc.) and Kittatinny Mountain (General Public Utilities). Institutional, regulatory, technical, environmental, economic, and special issues at each project were investigated, and the conclusions relative to each issue are presented. The major barriers preventing the growth of energy storage are the high cost of these systems in times of extremely high cost of capital, diminishing load growth and regulatory influences which will not allow the building of large-scale storage systems due to environmental objections or other reasons. However, the future for energy storage looks viable despite difficult economic times for the utility industry. Joint-ownership can ease some of the economic hardships for utilities which demonstrate a need for energy storage.

  1. Generative Adversarial Networks Based Heterogeneous Data Integration and Its Application for Intelligent Power Distribution and Utilization

    Directory of Open Access Journals (Sweden)

    Yuanpeng Tan

    2018-01-01

    Heterogeneous characteristics of a big data system for intelligent power distribution and utilization have already become more and more prominent, which brings new challenges for the traditional data analysis technologies and restricts the comprehensive management of distribution network assets. In order to solve the problem that heterogeneous data resources of power distribution systems are difficult to utilize effectively, a novel generative adversarial networks (GANs) based heterogeneous data integration method for intelligent power distribution and utilization is proposed. In the proposed method, GANs theory is introduced to expand the distribution of completed data samples. Then, a so-called peak clustering algorithm is proposed to realize the finite open coverage of the expanded sample space, and repair those incomplete samples to eliminate the heterogeneous characteristics. Finally, in order to realize the integration of the heterogeneous data for intelligent power distribution and utilization, the well-trained discriminator model of GANs is employed to check the restored data samples. The simulation experiments verified the validity and stability of the proposed heterogeneous data integration method, which provides a novel perspective for the further data quality management of power distribution systems.

  2. Research on the Prediction Model of CPU Utilization Based on ARIMA-BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wang Jina

    2016-01-01

    The dynamic deployment technology of virtual machines is one of the current research focuses in cloud computing. Traditional methods mainly act after the degradation of service performance and therefore usually lag. To solve this problem, a new prediction model based on CPU utilization is constructed in this paper. The model provides a reference for the VM dynamic deployment process, so that deployment can be completed before service performance degrades. In this way the method not only ensures the quality of service but also improves server performance and resource utilization. The new CPU utilization prediction method based on the ARIMA-BP neural network mainly includes four parts: preprocessing the collected data, building the ARIMA-BP neural network prediction model, correcting the nonlinear residuals of the time series with the BP prediction algorithm, and obtaining the prediction results by analyzing the above data comprehensively.
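
    A minimal sketch of such a hybrid forecaster, assuming an ARIMA model for the linear part and a small multilayer perceptron (a BP network) for the residuals, is shown below. The file name, ARIMA order, lag window and network size are assumptions for illustration.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from sklearn.neural_network import MLPRegressor

      cpu = np.loadtxt("cpu_utilization.csv")        # assumed one utilization value per interval
      train = cpu[:-50]                              # hold out the last 50 samples

      arima = ARIMA(train, order=(2, 1, 2)).fit()
      linear_fit = arima.predict(start=1, end=len(train) - 1)
      resid = train[1:] - linear_fit                 # nonlinear residuals left by ARIMA

      # BP network (multilayer perceptron) learns each residual from its previous lags.
      lags = 5
      X = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
      y = resid[lags:]
      bp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X, y)

      # Combined one-step forecast = ARIMA forecast + BP-predicted residual.
      next_value = arima.forecast(1)[0] + bp.predict(resid[-lags:].reshape(1, -1))[0]
      print(next_value)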

  3. Identifying demand effects in a large network of product categories

    NARCIS (Netherlands)

    Gelper, S.E.C.; Wilms, I.; Croux, C.

    2016-01-01

    Planning marketing mix strategies requires retailers to understand within- as well as cross-category demand effects. Most retailers carry products in a large variety of categories, leading to a high number of such demand effects to be estimated. At the same time, we do not expect cross-category

  4. Risk-based optimization of pipe inspections in large underground networks with imprecise information

    International Nuclear Information System (INIS)

    Mancuso, A.; Compare, M.; Salo, A.; Zio, E.; Laakso, T.

    2016-01-01

    In this paper, we present a novel risk-based methodology for optimizing the inspections of large underground infrastructure networks in the presence of incomplete information about the network features and parameters. The methodology employs Multi Attribute Value Theory to assess the risk of each pipe in the network, whereafter the optimal inspection campaign is built with Portfolio Decision Analysis (PDA). Specifically, Robust Portfolio Modeling (RPM) is employed to identify Pareto-optimal portfolios of pipe inspections. The proposed methodology is illustrated by reporting a real case study on the large-scale maintenance optimization of the sewerage network in Espoo, Finland. - Highlights: • Risk-based approach to optimize pipe inspections on large underground networks. • Reasonable computational effort to select efficient inspection portfolios. • Possibility to accommodate imprecise expert information. • Feasibility of the approach shown by Espoo water system case study.

  5. Integrating large-scale functional genomics data to dissect metabolic networks for hydrogen production

    Energy Technology Data Exchange (ETDEWEB)

    Harwood, Caroline S

    2012-12-17

    The goal of this project is to identify gene networks that are critical for efficient biohydrogen production by leveraging variation in gene content and gene expression in independently isolated Rhodopseudomonas palustris strains. Coexpression methods were applied to large data sets that we have collected to define probabilistic causal gene networks. To our knowledge this is the first systems-level approach that takes advantage of strain-to-strain variability to computationally define networks critical for a particular bacterial phenotypic trait.

  6. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demands for wireless technologies necessitate the evolution of next generation wireless networks that fulfill the diverse wireless users requirements. However, upscaling existing wireless networks implies upscaling an intrinsic component in the wireless domain; the aggregate network interference. Being the main performance limiting factor, it becomes crucial to develop a rigorous analytical framework to accurately characterize the out-of-cell interference, to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on Stochastic Geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers signals in the large-scale network. Second, we show that the information about the interferers symbols can be approximated via the Gaussian signaling approach. The developed mathematical model presents twofold analysis unification for uplink and downlink cellular networks literature. It aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis. Furthermore, it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored

  7. Fabrication of large-scale one-dimensional Au nanochain and nanowire networks by interfacial self-assembly

    International Nuclear Information System (INIS)

    Wang Minhua; Li Yongjun; Xie Zhaoxiong; Liu Cai; Yeung, Edward S.

    2010-01-01

    By utilizing the strong capillary attraction between interfacial nanoparticles, large-scale one-dimensional Au nanochain networks were fabricated at the n-butanol/water interface, and could be conveniently transferred onto hydrophilic substrates. Furthermore, the length of the nanochains could be adjusted simply by controlling the density of Au nanoparticles (AuNPs) at the n-butanol/water interface. Surprisingly, the resultant Au nanochains could further transform into smooth nanowires by increasing the aging time, forming a nanowire network. Combined characterization by HRTEM and UV-vis spectroscopy indicates that the formation of Au nanochains stemmed from a stochastic assembly of interfacial AuNPs due to strong capillary attraction, and the evolution of nanochains to nanowires follows an Ostwald ripening mechanism rather than an oriented attachment. This method could be utilized to fabricate large-area nanochain or nanowire networks more uniformly on solid substrates than that of evaporating a solution of nanochain colloid, since it eliminates the three-dimensional aggregation behavior.

  8. Spectral and Energy Efficiencies in mmWave Cellular Networks for Optimal Utilization

    Directory of Open Access Journals (Sweden)

    Abdulbaset M. Hamed

    2018-01-01

    Millimeter wave (mmWave) spectrum has been proposed for use in commercial cellular networks to relieve the already severely congested microwave spectrum. Thus, the design of an efficient mmWave cellular network has gained considerable importance and has to take into account regulations imposed by government agencies with regard to global warming and sustainable development. In this paper, a dense mmWave hexagonal cellular network with each cell consisting of a number of smaller cells with their own Base Stations (BSs) is presented as a solution to meet the increasing demand for a variety of high data rate services and the growing number of users of cellular networks. Since spectrum and power are critical resources in the design of such a network, a framework is presented that addresses efficient utilization of these resources in mmWave cellular networks in the 28 and 73 GHz bands. These bands are already an integral part of well-known standards such as IEEE 802.15.3c, IEEE 802.11ad, and IEEE 802.16.1. In the analysis, a well-known accurate mmWave channel model for Line of Sight (LOS) and Non-Line of Sight (NLOS) links is used. The cellular network is analyzed in terms of spectral efficiency, bit/s, energy efficiency, bit/J, area spectral efficiency, bit/s/m2, area energy efficiency, bit/J/m2, and network latency, s/bit. These efficiency metrics are illustrated, using Monte Carlo simulation, as a function of Signal-to-Noise Ratio (SNR), channel model parameters, user distance from the BS, and BS transmission power. The efficiency metrics for optimum deployment of cellular networks in the 28 and 73 GHz bands are identified. Results show that the 73 GHz band achieves better spectral efficiency and the 28 GHz band is superior in terms of energy efficiency. It is observed that while the latter band is expedient for indoor networks, the former band is appropriate for outdoor networks.
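
    The efficiency metrics listed above can be estimated by Monte Carlo simulation. The generic sketch below uses Shannon capacity with a simple log-distance path-loss model; the bandwidth, powers and path-loss exponent are assumptions and do not reproduce the paper's mmWave channel model for the 28 and 73 GHz bands.

      import numpy as np

      rng = np.random.default_rng(0)
      bandwidth_hz, tx_power_w, noise_w, path_loss_exp = 1e9, 1.0, 1e-10, 3.0  # assumed values

      d = rng.uniform(10.0, 200.0, size=100_000)       # user distances from the BS in metres
      snr = tx_power_w * d ** (-path_loss_exp) / noise_w

      spectral_eff = np.log2(1.0 + snr)                # bit/s/Hz
      rate = bandwidth_hz * spectral_eff               # bit/s
      energy_eff = rate / tx_power_w                   # bit/J
      latency = 1.0 / rate                             # s/bit

      print(spectral_eff.mean(), energy_eff.mean(), latency.mean())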

  9. Multi-layer network utilizing rewarded spike time dependent plasticity to learn a foraging task.

    Directory of Open Access Journals (Sweden)

    Pavel Sanda

    2017-09-01

    Neural networks with a single plastic layer employing reward-modulated spike time dependent plasticity (STDP) are capable of learning simple foraging tasks. Here we demonstrate advanced pattern discrimination and continuous learning in a network of spiking neurons with multiple plastic layers. The network utilized both reward-modulated and non-reward-modulated STDP and implemented multiple mechanisms for homeostatic regulation of synaptic efficacy, including heterosynaptic plasticity, gain control, output balancing, activity normalization of rewarded STDP and hard limits on synaptic strength. We found that the addition of a hidden layer of neurons employing non-rewarded STDP created neurons that responded to specific combinations of inputs and thus performed basic classification of the input patterns. When combined with a following layer of neurons implementing rewarded STDP, the network was able to learn, despite the absence of labeled training data, discrimination between rewarding patterns and the patterns designated as punishing. Synaptic noise allowed for trial-and-error learning that helped to identify the goal-oriented strategies which were effective in task solving. The study predicts a critical set of properties of the spiking neuronal network with STDP that was sufficient to solve a complex foraging task involving pattern classification and decision making.

  10. Integration and segregation of large-scale brain networks during short-term task automatization.

    Science.gov (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-11-03

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  11. A comparative analysis of the statistical properties of large mobile phone calling networks.

    Science.gov (United States)

    Li, Ming-Xia; Jiang, Zhi-Qiang; Xie, Wen-Jie; Miccichè, Salvatore; Tumminello, Michele; Zhou, Wei-Xing; Mantegna, Rosario N

    2014-05-30

    Mobile phone calling is one of the most widely used communication methods in modern society. The records of calls among mobile phone users provide us with a valuable proxy for the understanding of human communication patterns embedded in social networks. Mobile phone users call each other, forming a directed calling network. If only reciprocal calls are considered, we obtain an undirected mutual calling network. The preferential communication behavior between two connected users can be statistically tested and it results in two Bonferroni networks with statistically validated edges. We perform a comparative analysis of the statistical properties of these four networks, which are constructed from the calling records of more than nine million individuals in Shanghai over a period of 110 days. We find that these networks share many common structural properties and also exhibit idiosyncratic features when compared with previously studied large mobile calling networks. The empirical findings provide an intriguing picture of a representative large social network that might shed new light on the modelling of large social networks.
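
    The distinction between the directed calling network and the undirected mutual calling network can be illustrated with a small sketch using the Python networkx library; the toy call list stands in for real call detail records.

      import networkx as nx

      # Each tuple is (caller, callee); real input would be millions of call records.
      calls = [("a", "b"), ("b", "a"), ("a", "c"), ("c", "d"), ("d", "c"), ("b", "c")]

      directed = nx.DiGraph()
      directed.add_edges_from(calls)

      # Mutual calling network: keep an undirected edge only if calls went in both directions.
      mutual = nx.Graph()
      mutual.add_edges_from((u, v) for u, v in directed.edges() if directed.has_edge(v, u))

      print(directed.number_of_edges(), mutual.number_of_edges())
      # Reciprocity, clustering and degree distributions can then be compared across the two.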

  12. Output regulation of large-scale hydraulic networks with minimal steady state power consumption

    NARCIS (Netherlands)

    Jensen, Tom Nørgaard; Wisniewski, Rafał; De Persis, Claudio; Kallesøe, Carsten Skovmose

    2014-01-01

    An industrial case study involving a large-scale hydraulic network is examined. The hydraulic network underlies a district heating system, with an arbitrary number of end-users. The problem of output regulation is addressed along with an optimization criterion for the control. The fact that the

  13. 77 FR 58416 - Large Scale Networking (LSN); Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2012-09-20

    ..., Grid, and cloud projects. The MAGIC Team reports to the Large Scale Networking (LSN) Coordinating Group... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD.... Dates/Location: The MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00pm, at...

  14. 78 FR 70076 - Large Scale Networking (LSN)-Middleware and Grid Interagency Coordination (MAGIC) Team

    Science.gov (United States)

    2013-11-22

    ... projects. The MAGIC Team reports to the Large Scale Networking (LSN) Coordinating Group (CG). Public... Coordination (MAGIC) Team AGENCY: The Networking and Information Technology Research and Development (NITRD... MAGIC Team meetings are held on the first Wednesday of each month, 2:00-4:00 p.m., at the National...

  15. A Logically Centralized Approach for Control and Management of Large Computer Networks

    Science.gov (United States)

    Iqbal, Hammad A.

    2012-01-01

    Management of large enterprise and Internet service provider networks is a complex, error-prone, and costly challenge. It is widely accepted that the key contributors to this complexity are the bundling of control and data forwarding in traditional routers and the use of fully distributed protocols for network control. To address these…

  16. Direction of information flow in large-scale resting-state networks is frequency-dependent

    NARCIS (Netherlands)

    Hillebrand, Arjan; Tewarie, Prejaas; Van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A.; Van Straaten, Elisabeth C W; Stam, Cornelis J.

    2016-01-01

    Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these

  17. Utility Evaluation Based on One-To-N Mapping in the Prisoner's Dilemma Game for Interdependent Networks.

    Directory of Open Access Journals (Sweden)

    Juan Wang

    In the field of evolutionary game theory, network reciprocity has become an important means to promote the level of cooperation within the population system. Recently, interdependency has provided a novel perspective to understand the widespread cooperation behavior in many real-world systems. In previous works, interdependency is often built from the direct or indirect connections between two networks through the one-to-one mapping mode. However, under many realistic scenarios, players may need much more information from many neighboring agents so as to make a more rational decision. Thus, beyond the one-to-one mapping mode, we investigate the cooperation behavior on two interdependent lattices, in which the utility evaluation of a focal player on one lattice may not only concern himself, but also integrate the payoff information of several corresponding players on the other lattice. Extensive simulations indicate that cooperation can be substantially promoted when compared to the traditional spatial lattices. The cluster formation and phase transition are also analyzed in order to explore the role of interdependent utility coupling in the collective cooperation. The current results are beneficial for a deeper understanding of the various mechanisms that foster cooperation in natural, social and engineering systems.

  18. Investigations on the sensitivity of a stepped-frequency radar utilizing a vector network analyzer for Ground Penetrating Radar

    Science.gov (United States)

    Seyfried, Daniel; Schubert, Karsten; Schoebel, Joerg

    2014-12-01

    Employing a continuous-wave radar system, with the stepped-frequency radar being one type of this class, all reflections from the environment are present continuously and simultaneously at the receiver. Utilizing such a radar system for Ground Penetrating Radar purposes, antenna cross-talk and ground bounce reflection form an overall dominant signal contribution while reflections from objects buried in the ground are of quite weak amplitude due to attenuation in the ground. This requires a large dynamic range of the receiver which in turn requires high sensitivity of the radar system. In this paper we analyze the sensitivity of our vector network analyzer utilized as stepped-frequency radar system for GPR pipe detection. We furthermore investigate the performance of increasing the sensitivity of the radar by means of appropriate averaging and low-noise pre-amplification of the received signal. It turns out that the improvement in sensitivity actually achievable may differ significantly from theoretical expectations. In addition, we give a descriptive explanation why our appropriate experiments demonstrate that the sensitivity of the receiver is independent of the distance between the target object and the source of dominant signal contribution. Finally, our investigations presented in this paper lead to a preferred setting of operation for our vector network analyzer in order to achieve best detection capability for weak reflection amplitudes, hence making the radar system applicable for Ground Penetrating Radar purposes.

  19. Large photonic band gaps and strong attenuations of two-segment-connected Peano derivative networks

    International Nuclear Information System (INIS)

    Lu, Jian; Yang, Xiangbo; Zhang, Guogang; Cai, Lianzhang

    2011-01-01

    In this Letter, based on ancient Peano curves we construct four kinds of interesting Peano derivative networks composed of one-dimensional (1D) waveguides and investigate the optical transmission spectra and photonic attenuation behavior of electromagnetic (EM) waves in one- and two-segment-connected networks. It is found that for some two-segment-connected networks large photonic band gaps (PBGs) can be created and the widths of large PBGs can be controlled by adjusting the matching ratio of waveguide length and are insensitive to generation number. Diamond- and hexagon-Peano networks are good selectable structures for the designing of optical devices with large PBG(s) and strong attenuation(s). -- Highlights: → Peano and Peano derivative networks composed of 1D waveguides are designed. → Large PBGs with strong attenuations have been created by these fractal networks. → New approach for designing optical devices with large PBGs is proposed. → Diamond- and hexagon-Peano networks with d2:d1=2:1 are good selectable structures.

  20. Density Estimation and Anomaly Detection in Large Social Networks

    Science.gov (United States)

    2014-07-15

    [No abstract text is indexed for this record; the retrieved snippet consists of figure-caption and reference fragments, e.g. loss curves comparing the proposed dynamic mirror descent (DMD) method with mirror descent (MD) over single and repeated trials (Figure 2.2, Section 2.4.1).]

  1. Research on Fault Prediction of Distribution Network Based on Large Data

    Directory of Open Access Journals (Sweden)

    Jinglong Zhou

    2017-01-01

    With the continuous development of information technology and the improvement of distribution automation, the amount of on-line monitoring and statistical data in distribution systems keeps increasing. This paper describes the application of big data to the distribution system, covering the technologies used to collect, analyse and process distribution system data. An artificial neural network mining algorithm combined with big data is then investigated for fault diagnosis and prediction in the distribution network.

  2. Probability of islanding in utility networks due to grid connected photovoltaic power systems

    Energy Technology Data Exchange (ETDEWEB)

    Verhoeven, B.

    2002-09-15

    This report for the International Energy Agency (IEA) made by Task 5 of the Photovoltaic Power Systems (PVPS) programme takes a look at the probability of islanding in utility networks due to grid-connected photovoltaic power systems. The mission of the Photovoltaic Power Systems Programme is to enhance the international collaboration efforts which accelerate the development and deployment of photovoltaic solar energy. Task 5 deals with issues concerning grid-interconnection and distributed PV power systems. This report summarises the results on a study on the probability of islanding in power networks with a high penetration level of grid connected PV-systems. The results are based on measurements performed during one year in a Dutch utility network. The measurements of active and reactive power were taken every second for two years and stored in a computer for off-line analysis. The area examined and its characteristics are described, as are the test set-up and the equipment used. The ratios between load and PV-power are discussed. The general conclusion is that the probability of islanding is virtually zero for low, medium and high penetration levels of PV-systems.

  3. Power flow modelling in electric networks with renewable energy sources in large areas

    International Nuclear Information System (INIS)

    Buhawa, Z. M.; Dvorsky, E.

    2012-01-01

    In many world regions there is great potential for utilizing renewable power generating systems connected to the domestic grid, with capacities of thousands of MW. The optimal utilization of these sources depends on the power flow possibilities through the power network to which they have to be connected. It is necessary to take into account the long distances between high-output electric power sources and the centres of power consumption, as well as the uneven distribution of the power sources. The article gives solution possibilities for the Libya region utilizing wind energy sources in northern inshore regions. (Authors)

  4. The key network communication technology in large radiation image cooperative process system

    International Nuclear Information System (INIS)

    Li Zheng; Kang Kejun; Gao Wenhuan; Wang Jingjin

    1998-01-01

    Large container inspection system (LCIS) based on radiation imaging technology is a powerful tool for the customs to check the contents inside a large container without opening it. An image distributed network system is composed of operation manager station, image acquisition station, environment control station, inspection processing station, check-in station, check-out station, database station by using advanced network technology. Mass data, such as container image data, container general information, manifest scanning data, commands and status, must be on-line transferred between different stations. Advanced network communication technology is presented

  5. Application and study of advanced network technology in large container inspection system

    International Nuclear Information System (INIS)

    Li Zheng; Kang Kejun; Gao Wenhuan; Wang Jingjin

    1996-01-01

    Large Container Inspection System (LCIS) based on radiation imaging technology is a powerful tool for the customs to check the contents inside a large container without opening it. An image distributed network system is composed of center manager station, image acquisition station, environment control station, inspection processing station, check-in station, check-out station, database station by using advanced network technology. Mass data, such as container image data, container general information, manifest scanning data, commands and status, must be on-line transferred between different stations. Advanced network technology and software programming technique are presented

  6. A Unified Network Security Architecture for Large, Distributed Networks, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In typical, multi-organizational networking environments, it is difficult to define and maintain a uniform authentication scheme that provides users with easy access...

  7. Multirelational organization of large-scale social networks in an online world.

    Science.gov (United States)

    Szell, Michael; Lambiotte, Renaud; Thurner, Stefan

    2010-08-03

    The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions, however, the multidimensional nature of these interactions has largely been ignored, mostly because of lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000 odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tail degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations.

  8. Multilevel compression of random walks on networks reveals hierarchical organization in large integrated systems.

    Directory of Open Access Journals (Sweden)

    Martin Rosvall

    To comprehend the hierarchical organization of large integrated systems, we introduce the hierarchical map equation, which reveals multilevel structures in networks. In this information-theoretic approach, we exploit the duality between compression and pattern detection; by compressing a description of a random walker as a proxy for real flow on a network, we find regularities in the network that induce this system-wide flow. Finding the shortest multilevel description of the random walker therefore gives us the best hierarchical clustering of the network--the optimal number of levels and modular partition at each level--with respect to the dynamics on the network. With a novel search algorithm, we extract and illustrate the rich multilevel organization of several large social and biological networks. For example, from the global air traffic network we uncover countries and continents, and from the pattern of scientific communication we reveal more than 100 scientific fields organized in four major disciplines: life sciences, physical sciences, ecology and earth sciences, and social sciences. In general, we find shallow hierarchical structures in globally interconnected systems, such as neural networks, and rich multilevel organizations in systems with highly separated regions, such as road networks.

  9. Delay/Disruption Tolerance Networking (DTN) Implementation and Utilization Options on the International Space Station

    Science.gov (United States)

    Holbrook, Mark; Pitts, Robert Lee; Gifford, Kevin K.; Jenkins, Andrew; Kuzminsky, Sebastian

    2010-01-01

    The International Space Station (ISS) is in an operational configuration and nearing final assembly. With its maturity and diverse payloads onboard, the opportunity exists to extend the orbital lab into a facility to exercise and demonstrate Delay/Disruption Tolerant Networking (DTN). DTN is an end-to-end network service providing communications through environments characterized by intermittent connectivity, variable delays, high bit error rates, asymmetric links and simplex links. The DTN protocols, also known as bundle protocols, provide a store-and-forward capability to accommodate end-to-end network services. Key capabilities of the bundling protocols include: the Ability to cope with intermittent connectivity, the Ability to take advantage of scheduled and opportunistic connectivity (in addition to always up connectivity), Custody Transfer, and end-to-end security. Colorado University at Boulder and the Huntsville Operational Support Center (HOSC) have been developing a DTN capability utilizing the Commercial Generic Bioprocessing Apparatus (CGBA) payload resources onboard the ISS, at the Boulder Payload Operations Center (POC) and at the HOSC. The DTN capability is in parallel with and is designed to augment current capabilities. The architecture consists of DTN endpoint nodes on the ISS and at the Boulder POC, and a DTN node at the HOSC. The DTN network is composed of two implementations; the Interplanetary Overlay Network (ION) and the open source DTN2 implementation. This paper presents the architecture, implementation, and lessons learned. By being able to handle the types of environments described above, the DTN technology will be instrumental in extending networks into deep space to support future missions to other planets and other solar system points of interest. Thus, this paper also discusses how this technology will be applicable to these types of deep space exploration missions.

  10. GPP Webinar: Solar Utilization in Higher Education Networking & Information Sharing Group: RFP, Contract, and Administrative Issues Discussion

    Science.gov (United States)

    This presentation from a Solar Utilization in Higher Education Networking and Information webinar covers contracts, Request for Proposals (RFPs), and administrative issues related to solar project development in the higher education sector.

  11. Protein complex prediction in large ontology attributed protein-protein interaction networks.

    Science.gov (United States)

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Li, Yanpeng; Xu, Bo

    2013-01-01

    Protein complexes are important for unraveling the secrets of cellular organization and function. Many computational approaches have been developed to predict protein complexes in protein-protein interaction (PPI) networks. However, most existing approaches focus mainly on the topological structure of PPI networks, and largely ignore the gene ontology (GO) annotation information. In this paper, we constructed ontology attributed PPI networks with PPI data and GO resource. After constructing ontology attributed networks, we proposed a novel approach called CSO (clustering based on network structure and ontology attribute similarity). Structural information and GO attribute information are complementary in ontology attributed networks. CSO can effectively take advantage of the correlation between frequent GO annotation sets and the dense subgraph for protein complex prediction. Our proposed CSO approach was applied to four different yeast PPI data sets and predicted many well-known protein complexes. The experimental results showed that CSO was valuable in predicting protein complexes and achieved state-of-the-art performance.

  12. A Steam Utility Network Model for the Evaluation of Heat Integration Retrofits – A Case Study of an Oil Refinery

    Directory of Open Access Journals (Sweden)

    Sofie Marton

    2017-12-01

    This paper presents a real industrial example in which the steam utility network of a refinery is modelled in order to evaluate potential Heat Integration retrofits proposed for the site. A refinery, typically, has flexibility to optimize the operating strategy for the steam system depending on the operation of the main processes. This paper presents a few examples of Heat Integration retrofit measures from a case study of a large oil refinery. In order to evaluate expected changes in fuel and electricity imports to the refinery after implementation of the proposed retrofits, a steam system model has been developed. The steam system model has been tested and validated with steady state data from three different operating scenarios and can be used to evaluate how changes to steam balances at different pressure levels would affect overall steam balances, generation of shaft power in turbines, and the consumption of fuel gas.

  13. Maximal planar networks with large clustering coefficient and power-law degree distribution

    International Nuclear Information System (INIS)

    Zhou Tao; Yan Gang; Wang Binghong

    2005-01-01

    In this article, we propose a simple rule that generates scale-free networks with very large clustering coefficient and very small average distance. These networks are called random Apollonian networks (RANs) as they can be considered as a variation of Apollonian networks. We obtain the analytic results of power-law exponent γ=3 and clustering coefficient C=(46/3)-36 ln (3/2)≅0.74, which agree with the simulation results very well. We prove that the increasing tendency of the average distance of RANs is a little slower than the logarithm of the number of nodes in RANs. Since most real-life networks are both scale-free and small-world networks, RANs may perform well in mimicking the reality. The RANs possess hierarchical structure as C(k)∼k^-1, which is in accord with the observations of many real-life networks. In addition, we prove that RANs are maximal planar networks, which are of particular practicability for the layout of printed circuits and so on. The percolation and epidemic spreading process are also studied and the comparisons between RANs and Barabasi-Albert (BA) as well as Newman-Watts (NW) networks are shown. We find that, when the network order N (the total number of nodes) is relatively small (as N∼10^4), the performance of RANs under intentional attack is not sensitive to N, while that of BA networks is much affected by N. Diseases also spread more slowly in RANs than in BA networks in the early stage of the susceptible-infected process, indicating that the large clustering coefficient may slow the spreading velocity, especially in the outbreaks
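
    The growth rule for a random Apollonian network is simple enough to sketch directly: start from a triangle, repeatedly pick a random triangular face, insert a new node connected to its three corners, and replace the chosen face by the three new ones. The implementation below is a minimal illustration of that rule, not the authors' code.

      import random

      def random_apollonian_network(n_nodes, seed=0):
          random.seed(seed)
          edges = {(0, 1), (0, 2), (1, 2)}
          faces = [(0, 1, 2)]                      # current triangular faces
          for new in range(3, n_nodes):
              a, b, c = faces.pop(random.randrange(len(faces)))
              edges |= {(a, new), (b, new), (c, new)}
              # The chosen face is subdivided into three new faces by the inserted node.
              faces += [(a, b, new), (a, c, new), (b, c, new)]
          return edges

      # A maximal planar graph on n nodes has 3n - 6 edges; for n = 1000 that is 2994.
      print(len(random_apollonian_network(1000)))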

  14. A visual analytics system for optimizing the performance of large-scale networks in supercomputing systems

    Directory of Open Access Journals (Sweden)

    Takanori Fujiwara

    2018-03-01

    The overall efficiency of an extreme-scale supercomputer largely relies on the performance of its network interconnects. Several of the state-of-the-art supercomputers use networks based on the increasingly popular Dragonfly topology. It is crucial to study the behavior and performance of different parallel applications running on Dragonfly networks in order to make optimal system configurations and design choices, such as job scheduling and routing strategies. However, in order to study this temporal network behavior, we would need a tool to analyze and correlate numerous sets of multivariate time-series data collected from the Dragonfly's multi-level hierarchies. This paper presents such a tool, a visual analytics system, that uses the Dragonfly network to investigate the temporal behavior and optimize the communication performance of a supercomputer. We coupled interactive visualization with time-series analysis methods to help reveal hidden patterns in the network behavior with respect to different parallel applications and system configurations. Our system also provides multiple coordinated views for connecting behaviors observed at different levels of the network hierarchies, which effectively helps visual analysis tasks. We demonstrate the effectiveness of the system with a set of case studies. Our system and findings can not only help improve the communication performance of supercomputing applications, but also the network performance of next-generation supercomputers. Keywords: Supercomputing, Parallel communication network, Dragonfly networks, Time-series data, Performance analysis, Visual analytics

  15. Utility of temporary aftershock warning system in the immediate aftermath of large damaging earthquakes

    International Nuclear Information System (INIS)

    Harben, P.E.; Jarpe, S.P.; Hunter, S.; Johnston, C.A.

    1993-01-01

    An aftershock warning system (AWS) is a real-time warning system that is deployed immediately after a large damaging earthquake in the epicentral region of the main shock. The primary purpose of such a system is to warn rescue teams and workers within damaged structures of imminent destructive shaking. The authors have examined the utility of such a system (1) by evaluating historical data, and (2) by developing and testing a prototype system during the 1992 Landers, California, aftershock sequence. Analyzing historical data is important in determining when and where damaging aftershocks are likely to occur and the probable usefulness of an AWS in a particular region. As part of this study, they analyzed the spatial and temporal distribution of large (magnitude >5.0) aftershocks from earthquakes with magnitudes >6.0 that took place between 1942 and 1991 in California and Nevada. They found that one-quarter of these large aftershocks occurred from 2 days-2 months after the main event, nearly one-half occurred within the first two days of the main event, and greater than one-half occurred within 20 km of the main shock's epicenter. They also reviewed a case study of the 1985 Mexico City earthquake, which showed that an AWS could have given Mexico City a warning of ∼60 sec before the magnitude 7.6 aftershock that occurred 36 hr. after the main event. They deployed a four-station prototype AWS near Landers after a magnitude 7.4 earthquake occurred on June 28, 1992. The aftershock data, collected from July 3-10, showed that the aftershocks in the vicinity of the four stations varied in magnitude from 3.0-4.4. Using a two-station detection criterion to minimize false alarms, this AWS reliably discriminated between smaller and larger aftershocks within 3 sec of the origin time of the events. This prototype could have provided 6 sec of warning to Palm Springs and 20 sec of warning to San Bernardino of aftershocks occurring in the main-shock epicentral region

  16. Sum Utilization of Spectrum with Spectrum Handoff and Imperfect Sensing in Interweave Multi-Channel Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Waqas Khalid

    2018-05-01

    Full Text Available Fifth-generation (5G) heterogeneous network deployment poses new challenges for 5G-based cognitive radio networks (5G-CRNs), as the primary user (PU) is required to be more active because of the small cells, random user arrival, and spectrum handoff. Interweave CRNs (I-CRNs) improve spectrum utilization by allowing opportunistic spectrum access (OSA) for secondary users (SUs). The sum utilization of spectrum, i.e., joint utilization of spectrum by the SU and PU, depends on the spatial and temporal variations of PU activities, sensing outcomes, transmitting conditions, and spectrum handoff. In this study, we formulate and analyze the sum utilization of spectrum with different sets of channels under different PU and SU co-existing network topologies. We consider realistic multi-channel scenarios for the SU, with each channel licensed to a PU. The SU, aided by spectrum handoff, is authorized to utilize the channels on the basis of sensing outcomes and PU interruptions. The numerical evaluation of the proposed work is presented under different network and sensing parameters. Moreover, the sum utilization gain is investigated to analyze the sensitivities of different sensing parameters. It is demonstrated that different sets of channels, PU activities, and sensing outcomes have a significant impact on the sum utilization of spectrum associated with a specific network topology.

  17. Simulation of emergency response operations for a static chemical spill within a building using an opportunistic resource utilization network

    NARCIS (Netherlands)

    Lilien, L.T.; Elbes, M.W.; Ben Othmane, L.; Salih, R.M.

    2013-01-01

    We investigate supporting emergency response operations with opportunistic resource utilization networks ("oppnets"), based on a network paradigm for inviting and integrating diverse devices and systems available in the environment. We simulate a chemical spill on a single floor of a building and

  18. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities.

    Science.gov (United States)

    Santangelo, Valerio

    2018-01-01

    Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlights a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among brain networks

  19. Large-Scale Brain Networks Supporting Divided Attention across Spatial Locations and Sensory Modalities

    Directory of Open Access Journals (Sweden)

    Valerio Santangelo

    2018-02-01

    Full Text Available Higher-order cognitive processes were shown to rely on the interplay between large-scale neural networks. However, brain networks involved with the capability to split attentional resources over multiple spatial locations and multiple stimuli or sensory modalities have been largely unexplored to date. Here I re-analyzed data from Santangelo et al. (2010) to explore the causal interactions between large-scale brain networks during divided attention. During fMRI scanning, participants monitored streams of visual and/or auditory stimuli in one or two spatial locations for detection of occasional targets. This design allowed comparing a condition in which participants monitored one stimulus/modality (either visual or auditory) in two spatial locations vs. a condition in which participants monitored two stimuli/modalities (both visual and auditory) in one spatial location. The analysis of the independent components (ICs) revealed that dividing attentional resources across two spatial locations necessitated a brain network involving the left ventro- and dorso-lateral prefrontal cortex plus the posterior parietal cortex, including the intraparietal sulcus (IPS) and the angular gyrus, bilaterally. The analysis of Granger causality highlighted that the activity of lateral prefrontal regions was predictive of the activity of all of the posterior parietal nodes. By contrast, dividing attention across two sensory modalities necessitated a brain network including nodes belonging to the dorsal frontoparietal network, i.e., the bilateral frontal eye-fields (FEF) and IPS, plus nodes belonging to the salience network, i.e., the anterior cingulate cortex and the left and right anterior insular cortex (aIC). The analysis of Granger causality highlights a tight interdependence between the dorsal frontoparietal and salience nodes in trials requiring divided attention between different sensory modalities. The current findings therefore highlighted a dissociation among

  20. Base Station Placement Algorithm for Large-Scale LTE Heterogeneous Networks.

    Science.gov (United States)

    Lee, Seungseob; Lee, SuKyoung; Kim, Kyungsoo; Kim, Yoon Hyuk

    2015-01-01

    Data traffic demands in cellular networks today are increasing at an exponential rate, giving rise to the development of heterogeneous networks (HetNets), in which small cells complement traditional macro cells by extending coverage to indoor areas. However, the deployment of small cells as part of HetNets makes careful network planning a key challenge for operators. In particular, massive and unplanned deployment of base stations can cause high interference, resulting in severely degraded network performance. Although different mathematical modeling and optimization methods have been used to approach various problems related to this issue, most traditional network planning models are ill-equipped to deal with HetNet-specific characteristics due to their focus on classical cellular network designs. Furthermore, increased wireless data demands have driven mobile operators to roll out large-scale networks of small long term evolution (LTE) cells. Therefore, in this paper, we aim to derive an optimum network planning algorithm for large-scale LTE HetNets. Recently, attempts have been made to apply evolutionary algorithms (EAs) to the field of radio network planning, since they are characterized as global optimization methods. Yet, EA performance often deteriorates rapidly with the growth of search space dimensionality. To overcome this limitation when designing optimum network deployments for large-scale LTE HetNets, we attempt to decompose the problem and tackle its subcomponents individually. Particularly noting that some HetNet cells have strong correlations due to inter-cell interference, we propose a correlation grouping approach in which cells are grouped together according to their mutual interference. Both the simulation and analytical results indicate that the proposed solution outperforms the random-grouping based EA as well as an EA that detects interacting variables by monitoring changes in the objective function, in terms of system
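
    The correlation-grouping step described above can be illustrated with a small sketch under simplifying assumptions: cells whose mutual interference exceeds a threshold are placed in the same group by taking connected components of a thresholded interference matrix, and each group can then be optimized separately by the EA. The interference values and threshold are made up for illustration and are not taken from the paper.

```python
import numpy as np

def correlation_groups(interference, threshold):
    """Group cells whose pairwise interference exceeds `threshold`
    by finding connected components of the thresholded matrix."""
    n = interference.shape[0]
    strong = interference >= threshold
    groups, visited = [], set()
    for start in range(n):
        if start in visited:
            continue
        stack, comp = [start], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(n)
                         if v != u and strong[u, v] and v not in comp)
        visited |= comp
        groups.append(sorted(comp))
    return groups

# Illustrative symmetric interference matrix for 5 small cells.
I = np.array([[0, .9, .1, 0, 0],
              [.9, 0, .2, 0, 0],
              [.1, .2, 0, .8, .7],
              [0, 0, .8, 0, .6],
              [0, 0, .7, .6, 0]])
print(correlation_groups(I, threshold=0.5))   # -> [[0, 1], [2, 3, 4]]
```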

  1. Large-scale functional networks connect differently for processing words and symbol strings.

    Science.gov (United States)

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.

  2. Expectation propagation for large scale Bayesian inference of non-linear molecular networks from perturbation data.

    Science.gov (United States)

    Narimani, Zahra; Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger

    2017-01-01

    Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods.

  3. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    Science.gov (United States)

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994

  4. Localization Algorithm Based on a Spring Model (LASM for Large Scale Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Shuai Li

    2008-03-01

    Full Text Available A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are treated as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set initial positions toward the true node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
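
    A minimal sketch of the spring-model idea follows (an illustration of the general principle, not the authors' exact formulation): each blind node is connected to its neighbors by virtual springs whose rest lengths are the measured distances, and the node estimate is repeatedly moved along the net spring force until it settles. All names and parameter values are assumptions.

```python
import numpy as np

def spring_localize(anchors, blind_init, dist, neighbors,
                    k=0.1, iters=500):
    """Estimate blind-node positions with a virtual spring relaxation.

    anchors    : dict node -> fixed (x, y) position
    blind_init : dict node -> initial (x, y) guess
    dist       : dict (u, v) sorted tuple -> measured distance (rest length)
    neighbors  : dict node -> list of neighbor nodes
    """
    pos = {n: np.array(p, float) for n, p in anchors.items()}
    pos.update({n: np.array(p, float) for n, p in blind_init.items()})
    for _ in range(iters):
        for node in blind_init:                  # only blind nodes move
            force = np.zeros(2)
            for nb in neighbors[node]:
                d = pos[nb] - pos[node]
                length = np.linalg.norm(d) + 1e-9
                rest = dist[tuple(sorted((node, nb)))]
                force += k * (length - rest) * d / length
            pos[node] = pos[node] + force        # damped relaxation step
    return pos

anchors = {"A": (0, 0), "B": (10, 0), "C": (0, 10)}
blind = {"X": (3, 3)}                            # true position near (5, 5)
d = {("A", "X"): 7.07, ("B", "X"): 7.07, ("C", "X"): 7.07}
nbrs = {"X": ["A", "B", "C"]}
print(spring_localize(anchors, blind, d, nbrs)["X"])
```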

  5. Impact of large-scale energy efficiency programs on utility finances and consumer tariffs in India

    International Nuclear Information System (INIS)

    Abhyankar, Nikit; Phadke, Amol

    2012-01-01

    The objective of this paper is to analyze the effect on utility finances and consumer tariffs of implementing utility-funded demand-side energy efficiency (EE) programs in India. We use the state of Delhi as a case study. We estimate that by 2015, the electric utilities in Delhi can potentially save nearly 14% of total sales. We examine the impacts on utility finances and consumer tariffs by developing scenarios that account for variations in the following factors: (a) incentive mechanisms for mitigating the financial risk of utilities, (b) whether utilities fund the EE programs only partially, (c) whether utilities sell the conserved electricity into spot markets, and (d) the level of power shortages utilities are facing. We find that the average consumer tariff would increase by 2.2%, although consumers participating in EE programs benefit from a reduction in their electricity consumption. While utility incentive mechanisms can mitigate utilities’ risk of losing long-run returns, they cannot address the risk of consistently negative cash flow. In the case of power shortages, the cash flow risk is amplified (reaching up to 57% of utilities’ annual returns) and is very sensitive to the marginal tariffs of consumers facing power shortages. We conclude by proposing solutions to mitigate utility risks. - Highlights: ► We model implementation of energy efficiency (EE) programs in Delhi, India. ► We examine the impact on utility finances and consumer tariffs from 2012 to 2015. ► We find that average consumer tariffs increase but participating consumers benefit. ► Existing regulatory mechanisms cannot address utilities’ risk of negative cash flow. ► Frequent true-ups or ex-ante revenue adjustment is required to address such risk.

  6. On the network protocol performance evaluation for large scale communication system of nuclear plant

    International Nuclear Information System (INIS)

    Song, K. S.; Lee, T. H.; Kim, H. R.; Kim, D. H.; Ku, I. S.

    1998-01-01

    Computer technology has advanced dramatically, and it is now natural to apply digital network technology in nuclear plants. The communication architecture for a nuclear plant defines the coordination of reactor safety control, balance of plant, subsystem utilities, and plant monitoring functions, how they are connected, and their user interfaces, in order to guarantee plant performance and meet safety requirements. Therefore, implementing a digital network for the control and monitoring systems of an advanced nuclear plant requires systematic design and evaluation procedures because of the responsive, hard real-time process characteristics of a nuclear plant. In this paper, we evaluate several digital network protocols in terms of network delay and the effects of link failures on hard real-time requirements under full-scale traffic

  7. Network Dynamics: Modeling And Generation Of Very Large Heterogeneous Social Networks

    Science.gov (United States)

    2015-11-23

  8. Addressing practical challenges in utility optimization of mobile wireless sensor networks

    Science.gov (United States)

    Eswaran, Sharanya; Misra, Archan; La Porta, Thomas; Leung, Kin

    2008-04-01

    This paper examines the practical challenges in the application of the distributed network utility maximization (NUM) framework to the problem of resource allocation and sensor device adaptation in a mission-centric wireless sensor network (WSN) environment. By providing rich (multi-modal), real-time information about a variety of (often inaccessible or hostile) operating environments, sensors such as video, acoustic and short-aperture radar enhance the situational awareness of many battlefield missions. Prior work on the applicability of the NUM framework to mission-centric WSNs has focused on tackling the challenges introduced by i) the definition of an individual mission's utility as a collective function of multiple sensor flows and ii) the dissemination of an individual sensor's data via a multicast tree to multiple consuming missions. However, the practical application and performance of this framework are influenced by several parameters internal to the framework and also by implementation-specific decisions, and mobile nodes make this still more complex. In this paper, we use discrete-event simulations to study the effects of these parameters on the performance of the protocol in terms of speed of convergence, packet loss, and signaling overhead, thereby addressing the challenges posed by wireless interference and node mobility in ad-hoc battlefield scenarios. This study provides a better understanding of the issues involved in the practical adaptation of the NUM framework. It also helps identify potential avenues of improvement within the framework and protocol.
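
    For background, the sketch below shows the classic dual-decomposition update that underlies distributed NUM for rate allocation on shared links (logarithmic utilities, one dual price per link). It is the generic textbook scheme, not the mission-aware, multicast formulation studied in this paper; the topology, step size, and bounds are illustrative assumptions.

```python
import numpy as np

# Three flows sharing two links; R[l, f] = 1 if flow f uses link l.
R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
capacity = np.array([1.0, 2.0])
prices = np.zeros(2)        # dual variables (one per link)
step = 0.05

for _ in range(2000):
    # Each flow maximizes log(x) - x * (sum of prices on its path),
    # which has the closed-form solution x = 1 / path_price.
    path_price = R.T @ prices
    rates = 1.0 / np.maximum(path_price, 1e-6)
    rates = np.minimum(rates, 10.0)             # keep rates bounded
    # Each link raises its price in proportion to the excess demand it sees.
    prices = np.maximum(prices + step * (R @ rates - capacity), 0.0)

print("rates:", np.round(rates, 3), "link loads:", np.round(R @ rates, 3))
```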

  9. UTILIZING TYPE Ia SUPERNOVAE IN A LARGE, FAST, IMAGING SURVEY TO CONSTRAIN DARK ENERGY

    International Nuclear Information System (INIS)

    Zentner, Andrew R.; Bhattacharya, Suman

    2009-01-01

    We study the utility of a large sample of Type Ia supernovae (SNe Ia) that might be observed in an imaging survey that rapidly scans a large fraction of the sky for constraining dark energy. We consider both the information contained in the traditional luminosity distance test as well as the spread in Ia SN fluxes at fixed redshift induced by gravitational lensing. As would be required from an imaging survey, we include a treatment of photometric redshift uncertainties in our analysis. Our primary result is that the information contained in the mean distance moduli of SNe Ia and the dispersion of SN Ia distance moduli complement each other, breaking a degeneracy between the present dark energy equation of state and its time variation without the need for a high-redshift (z ≳ 0.8) SN sample. Including lensing information also allows for some internal calibration of photometric redshifts. To address photometric redshift uncertainties, we present dark energy constraints as a function of the size of an external set of spectroscopically observed SNe that may be used for redshift calibration, N_spec. Depending upon the details of potentially available, external SN data sets, we find that an imaging survey can constrain the dark energy equation of state at the epoch where it is best constrained, w_p, with a 1σ error of σ(w_p) ∼ 0.03-0.09. In addition, the marginal improvement in the error σ(w_p) from an increase in the spectroscopic calibration sample drops once N_spec ∼ a few × 10³. This result is important because it is of the order of the size of calibration samples likely to be compiled in the coming decade and because, for samples of this size, the spectroscopic and imaging surveys individually place comparable constraints on the dark energy equation of state. In all cases, it is best to calibrate photometric redshifts with a set of spectroscopically observed SNe with relatively more objects at high redshift (z ≳ 0.5) than the parent sample of
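
    For reference, the luminosity-distance relation behind the SN Ia test can be written as follows (flat FRW universe, CPL parameterization of the equation of state); this is the standard textbook form rather than anything specific to the paper's analysis.

```latex
% Distance modulus of a SN Ia at redshift z (flat FRW universe)
\mu(z) = 5 \log_{10}\!\left(\frac{d_L(z)}{10\,\mathrm{pc}}\right),
\qquad
d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^z \frac{dz'}{E(z')},

% with the CPL equation of state w(z) = w_0 + w_a\, z/(1+z)
E(z) = \sqrt{\Omega_m (1+z)^3
       + (1-\Omega_m)\,(1+z)^{3(1+w_0+w_a)}\, e^{-3 w_a z/(1+z)}}.
```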

  10. Complexity analysis on public transport networks of 97 large- and medium-sized cities in China

    Science.gov (United States)

    Tian, Zhanwei; Zhang, Zhuo; Wang, Hongfei; Ma, Li

    2018-04-01

    The traffic situation in Chinese urban areas is continuing to deteriorate. To make a better planning and designing of the public transport system, it is necessary to make profound research on the structure of urban public transport networks (PTNs). We investigate 97 large- and medium-sized cities’ PTNs in China, construct three types of network models — bus stop network, bus transit network and bus line network, then analyze the structural characteristics of them. It is revealed that bus stop network is small-world and scale-free, bus transit network and bus line network are both small-world. Betweenness centrality of each city’s PTN shows similar distribution pattern, although these networks’ size is various. When classifying cities according to the characteristics of PTNs or economic development level, the results are similar. It means that the development of cities’ economy and transport network has a strong correlation, PTN expands in a certain model with the development of economy.

  11. An energy-efficient data gathering protocol in large wireless sensor network

    Science.gov (United States)

    Wang, Yamin; Zhang, Ruihua; Tao, Shizhong

    2006-11-01

    A wireless sensor network consisting of a large number of small sensors with low-power transceivers can be an effective tool for gathering data in a variety of environments. The collected data must be transmitted to the base station for further processing. Since the network consists of sensors with limited battery energy, the method for data gathering and routing must be energy efficient in order to prolong the lifetime of the network. In this paper, we present an energy-efficient data gathering protocol for wireless sensor networks. The new protocol uses data fusion technology, clusters nodes into groups, and builds a chain among the cluster heads according to a hybrid of residual energy and distance to the base station. Results in stochastic geometry are used to derive the optimum parameter of our algorithm that minimizes the total energy spent in the network. Simulation results show the performance superiority of the new protocol.
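
    A small sketch of the chain-building idea under stated assumptions (an illustration, not the authors' exact protocol): cluster heads are scored by a hybrid of residual energy and distance to the base station, the best-scored head becomes the chain leader, and the remaining heads are threaded onto the chain greedily by proximity. The data values and the weighting parameter alpha are made up.

```python
import math

def hybrid_score(node, base, alpha=0.5):
    """Higher is better: more residual energy, shorter distance to base."""
    d = math.dist(node["pos"], base)
    return alpha * node["energy"] - (1 - alpha) * d

def build_chain(cluster_heads, base):
    """Greedy chain: the best-scored head is the leader (last hop to the
    base station); the rest are appended by nearest-neighbor order."""
    heads = list(cluster_heads)
    leader = max(heads, key=lambda h: hybrid_score(h, base))
    chain = [leader]
    heads.remove(leader)
    while heads:
        last = chain[-1]["pos"]
        nxt = min(heads, key=lambda h: math.dist(h["pos"], last))
        chain.append(nxt)
        heads.remove(nxt)
    return list(reversed(chain))   # data flows along the chain to the leader

heads = [{"id": 1, "pos": (2, 8), "energy": 0.9},
         {"id": 2, "pos": (6, 7), "energy": 0.4},
         {"id": 3, "pos": (4, 2), "energy": 0.8}]
print([h["id"] for h in build_chain(heads, base=(5, 0))])
```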

  12. Scalable and Fully Distributed Localization in Large-Scale Sensor Networks

    Directory of Open Access Journals (Sweden)

    Miao Jin

    2017-06-01

    Full Text Available This work proposes a novel connectivity-based localization algorithm, well suitable for large-scale sensor networks with complex shapes and a non-uniform nodal distribution. In contrast to current state-of-the-art connectivity-based localization methods, the proposed algorithm is highly scalable with linear computation and communication costs with respect to the size of the network; and fully distributed where each node only needs the information of its neighbors without cumbersome partitioning and merging process. The algorithm is theoretically guaranteed and numerically stable. Moreover, the algorithm can be readily extended to the localization of networks with a one-hop transmission range distance measurement, and the propagation of the measurement error at one sensor node is limited within a small area of the network around the node. Extensive simulations and comparison with other methods under various representative network settings are carried out, showing the superior performance of the proposed algorithm.

  13. Emergence of switch-like behavior in a large family of simple biochemical networks.

    Directory of Open Access Journals (Sweden)

    Dan Siegal-Gaskins

    2011-05-01

    Full Text Available Bistability plays a central role in the gene regulatory networks (GRNs) controlling many essential biological functions, including cellular differentiation and cell cycle control. However, establishing the network topologies that can exhibit bistability remains a challenge, in part due to the exceedingly large variety of GRNs that exist for even a small number of components. We begin to address this problem by employing chemical reaction network theory (CRNT) in a comprehensive in silico survey to determine the capacity for bistability of more than 40,000 simple networks that can be formed by two transcription factor-coding genes and their associated proteins (assuming only the most elementary biochemical processes). We find that there exist reaction rate constants leading to bistability in ∼90% of these GRN models, including several circuits that do not contain any of the TF cooperativity commonly associated with bistable systems, and the majority of which could only be identified as bistable through an original subnetwork-based analysis. A topological sorting of the two-gene family of networks based on the presence or absence of biochemical reactions reveals eleven minimal bistable networks (i.e., bistable networks that do not contain within them a smaller bistable subnetwork). The large number of previously unknown bistable network topologies suggests that the capacity for switch-like behavior in GRNs arises with relative ease and is not easily lost through network evolution. To highlight the relevance of the systematic application of CRNT to bistable network identification in real biological systems, we integrated publicly available protein-protein interaction, protein-DNA interaction, and gene expression data from Saccharomyces cerevisiae, and identified several GRNs predicted to behave in a bistable fashion.

  14. Patterns of interactions of a large fish-parasite network in a tropical floodplain.

    Science.gov (United States)

    Lima, Dilermando P; Giacomini, Henrique C; Takemoto, Ricardo M; Agostinho, Angelo A; Bini, Luis M

    2012-07-01

    1. Describing and explaining the structure of species interaction networks is of paramount importance for community ecology. Yet much has to be learned about the mechanisms responsible for major patterns, such as nestedness and modularity in different kinds of systems, of which large and diverse networks are a still underrepresented and scarcely studied fraction. 2. We assembled information on fishes and their parasites living in a large floodplain of key ecological importance for freshwater ecosystems in the Paraná River basin in South America. The resulting fish-parasite network containing 72 and 324 species of fishes and parasites, respectively, was analysed to investigate the patterns of nestedness and modularity as related to fish and parasite features. 3. Nestedness was found in the entire network and among endoparasites, multiple-host life cycle parasites and native hosts, but not in networks of ectoparasites, single-host life cycle parasites and non-native fishes. All networks were significantly modular. Taxonomy was the major host's attribute influencing both nestedness and modularity: more closely related host species tended to be associated with more nested parasite compositions and had greater chance of belonging to the same network module. Nevertheless, host abundance had a positive relationship with nestedness when only native host species pairs of the same network module were considered for analysis. 4. These results highlight the importance of evolutionary history of hosts in linking patterns of nestedness and formation of modules in the network. They also show that functional attributes of parasites (i.e. parasitism mode and life cycle) and origin of host populations (i.e. natives versus non-natives) are crucial to define the relative contribution of these two network properties and their dependence on other ecological factors (e.g. host abundance), with potential implications for community dynamics and stability. © 2012 The Authors

  15. Major technological innovations introduced in the large antennas of the Deep Space Network

    Science.gov (United States)

    Imbriale, W. A.

    2002-01-01

    The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.

  16. Large-scale network dynamics of beta-band oscillations underlie auditory perceptual decision-making

    Directory of Open Access Journals (Sweden)

    Mohsen Alavash

    2017-06-01

    Full Text Available Perceptual decisions vary in the speed at which we make them. Evidence suggests that translating sensory information into perceptual decisions relies on distributed interacting neural populations, with decision speed hinging on power modulations of the neural oscillations. Yet the dependence of perceptual decisions on the large-scale network organization of coupled neural oscillations has remained elusive. We measured magnetoencephalographic signals in human listeners who judged acoustic stimuli composed of carefully titrated clouds of tone sweeps. These stimuli were used in two task contexts, in which the participants judged the overall pitch or direction of the tone sweeps. We traced the large-scale network dynamics of the source-projected neural oscillations on a trial-by-trial basis using power-envelope correlations and graph-theoretical network discovery. In both tasks, faster decisions were predicted by higher segregation and lower integration of coupled beta-band (∼16–28 Hz) oscillations. We also uncovered the brain network states that promoted faster decisions in either lower-order auditory or higher-order control brain areas. Specifically, decision speed in judging the tone sweep direction critically relied on the nodal network configurations of anterior temporal, cingulate, and middle frontal cortices. Our findings suggest that global network communication during perceptual decision-making is implemented in the human brain by large-scale couplings between beta-band neural oscillations. The speed at which we make perceptual decisions varies. This translation of sensory information into perceptual decisions hinges on dynamic changes in neural oscillatory activity. However, the large-scale neural-network embodiment supporting perceptual decision-making is unclear. We addressed this question by studying two auditory perceptual decision-making situations. Using graph-theoretical network discovery, we traced the large-scale network

  17. A new type of intelligent wireless sensing network for health monitoring of large-size structures

    Science.gov (United States)

    Lei, Ying; Liu, Ch.; Wu, D. T.; Tang, Y. L.; Wang, J. X.; Wu, L. J.; Jiang, X. D.

    2009-07-01

    In recent years, some innovative wireless sensing systems have been proposed. However, more exploration and research on wireless sensing systems are required before wireless systems can substitute for the traditional wire-based systems. In this paper, a new type of intelligent wireless sensing network is proposed for the health monitoring of large-size structures. Hardware design of the new wireless sensing units is first studied. The wireless sensing unit mainly consists of functional modules of: sensing interface, signal conditioning, signal digitization, computational core, wireless communication and battery management. Then, software architecture of the unit is introduced. The sensing network has a two-level cluster-tree architecture with Zigbee communication protocol. Important issues such as power saving and fault tolerance are considered in the designs of the new wireless sensing units and sensing network. Each cluster head in the network is characterized by its computational capabilities that can be used to implement the computational methodologies of structural health monitoring, giving the wireless sensing units and sensing network "intelligent" characteristics. Primary tests on the measurement data collected by the wireless system are performed. The distributed computational capacity of the intelligent sensing network is also demonstrated. It is shown that the new type of intelligent wireless sensing network provides an efficient tool for structural health monitoring of large-size structures.

  18. Simulation-Optimization Framework for Synthesis and Design of Natural Gas Downstream Utilization Networks

    Directory of Open Access Journals (Sweden)

    Saad A. Al-Sobhi

    2018-02-01

    Full Text Available Many potential diversification and conversion options are available for utilization of natural gas resources, and several design configurations and technology choices exist for conversion of natural gas to value-added products. Therefore, a detailed mathematical model is desirable for selection of optimal configuration and operating mode among the various options available. In this study, we present a simulation-optimization framework for the optimal selection of economic and environmentally sustainable pathways for natural gas downstream utilization networks by optimizing process design and operational decisions. The main processes (e.g., LNG, GTL, and methanol production), along with different design alternatives in terms of flow-sheeting for each main processing unit (namely syngas preparation, liquefaction, N2 rejection, hydrogen, FT synthesis, methanol synthesis, FT upgrade, and methanol upgrade units), are used for superstructure development. These processes are simulated using ASPEN Plus V7.3 to determine the yields of different processing units under various operating modes. The model has been applied to maximize total profit of the natural gas utilization system with penalties for environmental impact, represented by CO2eq emission obtained using ASPEN Plus for each flowsheet configuration and operating mode options. The performance of the proposed modeling framework is demonstrated using a case study.

  19. Sensing across large-scale cognitive radio networks: Data processing, algorithms, and testbed for wireless tomography and moving target tracking

    Science.gov (United States)

    Bonior, Jason David

    As the use of wireless devices has become more widespread so has the potential for utilizing wireless networks for remote sensing applications. Regular wireless communication devices are not typically designed for remote sensing. Remote sensing techniques must be carefully tailored to the capabilities of these networks before they can be applied. Experimental verification of these techniques and algorithms requires robust yet flexible testbeds. In this dissertation, two experimental testbeds for the advancement of research into sensing across large-scale cognitive radio networks are presented. System architectures, implementations, capabilities, experimental verification, and performance are discussed. One testbed is designed for the collection of scattering data to be used in RF and wireless tomography research. This system is used to collect full complex scattering data using a vector network analyzer (VNA) and amplitude-only data using non-synchronous software-defined radios (SDRs). Collected data is used to experimentally validate a technique for phase reconstruction using semidefinite relaxation and demonstrate the feasibility of wireless tomography. The second testbed is a SDR network for the collection of experimental data. The development of tools for network maintenance and data collection is presented and discussed. A novel recursive weighted centroid algorithm for device-free target localization using the variance of received signal strength for wireless links is proposed. The signal variance resulting from a moving target is modeled as having contours related to Cassini ovals. This model is used to formulate recursive weights which reduce the influence of wireless links that are farther from the target location estimate. The algorithm and its implementation on this testbed are presented and experimental results discussed.
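
    A minimal sketch in the spirit of the recursive weighted centroid algorithm described above: link midpoints are weighted by the RSS variance they observe, and on each pass links far from the current estimate are down-weighted so that they contribute less. The Gaussian damping used here is a simple stand-in for the Cassini-oval weighting developed in the dissertation, and all data values are illustrative.

```python
import numpy as np

def recursive_weighted_centroid(link_midpoints, rss_variance,
                                passes=5, radius=3.0):
    """Estimate a device-free target position from per-link RSS variance.

    link_midpoints: (L, 2) array of link midpoint coordinates
    rss_variance  : (L,) array of observed RSS variance per link
    """
    pts = np.asarray(link_midpoints, float)
    w = np.asarray(rss_variance, float)
    est = np.average(pts, axis=0, weights=w)       # first-pass centroid
    for _ in range(passes):
        d = np.linalg.norm(pts - est, axis=1)
        damp = np.exp(-(d / radius) ** 2)          # far links count less
        est = np.average(pts, axis=0, weights=w * damp)
    return est

mids = [(1, 1), (2, 1.5), (1.5, 2), (8, 8)]        # last link is far away
var = [0.9, 1.2, 1.0, 0.4]
print(recursive_weighted_centroid(mids, var))
```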

  20. A Matrix-Based Proactive Data Relay Algorithm for Large Distributed Sensor Networks.

    Science.gov (United States)

    Xu, Yang; Hu, Xuemei; Hu, Haixiao; Liu, Ming

    2016-08-16

    In large-scale distributed sensor networks, sensed data is required to be relayed around the network so that one or few sensors can gather adequate relative data to produce high quality information for decision-making. With regard to highly energy-constrained sensor nodes, data transmission should be extremely economical. However, traditional data delivery protocols are potentially inefficient at relaying unpredictable sensor readings for data fusion in large distributed networks, owing either to overwhelming query transmissions or to unnecessary data coverage. By building sensors' local models from their previously transmitted data in three matrices, we have developed a novel energy-saving data relay algorithm, which allows sensors to proactively make broadcast decisions by using a neat matrix computation to provide balance between transmission and energy-saving. In addition, we designed a heuristic maintenance algorithm to efficiently update these three matrices. This can easily be deployed to large-scale mobile networks in which decisions of sensors are based on their local matrix models no matter how large the network is, and the local models of these sensors are updated constantly. Compared with some traditional approaches in our simulations, the efficiency of this approach is manifest in uncertain environments. The results show that our approach is scalable and can effectively balance aggregating data with minimizing energy consumption.

  1. A Matrix-Based Proactive Data Relay Algorithm for Large Distributed Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yang Xu

    2016-08-01

    Full Text Available In large-scale distributed sensor networks, sensed data is required to be relayed around the network so that one or few sensors can gather adequate relative data to produce high quality information for decision-making. With regard to highly energy-constrained sensor nodes, data transmission should be extremely economical. However, traditional data delivery protocols are potentially inefficient at relaying unpredictable sensor readings for data fusion in large distributed networks, owing either to overwhelming query transmissions or to unnecessary data coverage. By building sensors’ local models from their previously transmitted data in three matrices, we have developed a novel energy-saving data relay algorithm, which allows sensors to proactively make broadcast decisions by using a neat matrix computation to provide balance between transmission and energy-saving. In addition, we designed a heuristic maintenance algorithm to efficiently update these three matrices. This can easily be deployed to large-scale mobile networks in which decisions of sensors are based on their local matrix models no matter how large the network is, and the local models of these sensors are updated constantly. Compared with some traditional approaches in our simulations, the efficiency of this approach is manifest in uncertain environments. The results show that our approach is scalable and can effectively balance aggregating data with minimizing energy consumption.

  2. Analysis of a large-scale weighted network of one-to-one human communication

    International Nuclear Information System (INIS)

    Onnela, Jukka-Pekka; Saramaeki, Jari; Hyvoenen, Joerkki; Szabo, Gabor; Menezes, M Argollo de; Kaski, Kimmo; Barabasi, Albert-Laszlo; Kertesz, Janos

    2007-01-01

    We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing strong quantitative evidence for the weak ties hypothesis, a central concept in social network analysis. The percolation properties of the network are found to depend on the type and order of removed links, and they can help understand how the local structure of the network manifests itself at the global level. We hope that our results will contribute to modelling weighted large-scale social networks, and believe that the systematic approach followed here can be adopted to study other weighted networks
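
    The link-overlap measure mentioned above is commonly defined as the number of neighbours shared by the two endpoints of a link, normalized by the number of neighbours they could share. A small sketch using that standard definition (which may differ in detail from the paper's exact formula) is given below.

```python
from collections import defaultdict

def link_overlap(edges):
    """Overlap of edge (i, j): n_ij / ((k_i - 1) + (k_j - 1) - n_ij),
    where n_ij is the number of neighbours common to i and j and
    k_i, k_j are the endpoint degrees."""
    adj = defaultdict(set)
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    overlap = {}
    for i, j in edges:
        common = len(adj[i] & adj[j])
        denom = (len(adj[i]) - 1) + (len(adj[j]) - 1) - common
        # denom == 0 means the link joins two degree-1 nodes (a bridge)
        overlap[(i, j)] = common / denom if denom > 0 else 0.0
    return overlap

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(link_overlap(edges))   # ("a", "b") shares one neighbour, "c"
```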

  3. Analysis of a large-scale weighted network of one-to-one human communication

    Science.gov (United States)

    Onnela, Jukka-Pekka; Saramäki, Jari; Hyvönen, Jörkki; Szabó, Gábor; Argollo de Menezes, M.; Kaski, Kimmo; Barabási, Albert-László; Kertész, János

    2007-06-01

    We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing strong quantitative evidence for the weak ties hypothesis, a central concept in social network analysis. The percolation properties of the network are found to depend on the type and order of removed links, and they can help understand how the local structure of the network manifests itself at the global level. We hope that our results will contribute to modelling weighted large-scale social networks, and believe that the systematic approach followed here can be adopted to study other weighted networks.

  4. Analysis of a large-scale weighted network of one-to-one human communication

    Energy Technology Data Exchange (ETDEWEB)

    Onnela, Jukka-Pekka [Laboratory of Computational Engineering, Helsinki University of Technology (Finland); Saramaeki, Jari [Laboratory of Computational Engineering, Helsinki University of Technology (Finland); Hyvoenen, Joerkki [Laboratory of Computational Engineering, Helsinki University of Technology (Finland); Szabo, Gabor [Department of Physics and Center for Complex Networks Research, University of Notre Dame, IN (United States); Menezes, M Argollo de [Department of Physics and Center for Complex Networks Research, University of Notre Dame, IN (United States); Kaski, Kimmo [Laboratory of Computational Engineering, Helsinki University of Technology (Finland); Barabasi, Albert-Laszlo [Department of Physics and Center for Complex Networks Research, University of Notre Dame, IN (United States); Kertesz, Janos [Laboratory of Computational Engineering, Helsinki University of Technology (Finland)]

    2007-06-15

    We construct a connected network of 3.9 million nodes from mobile phone call records, which can be regarded as a proxy for the underlying human communication network at the societal level. We assign two weights on each edge to reflect the strength of social interaction, which are the aggregate call duration and the cumulative number of calls placed between the individuals over a period of 18 weeks. We present a detailed analysis of this weighted network by examining its degree, strength, and weight distributions, as well as its topological assortativity and weighted assortativity, clustering and weighted clustering, together with correlations between these quantities. We give an account of motif intensity and coherence distributions and compare them to a randomized reference system. We also use the concept of link overlap to measure the number of common neighbours any two adjacent nodes have, which serves as a useful local measure for identifying the interconnectedness of communities. We report a positive correlation between the overlap and weight of a link, thus providing strong quantitative evidence for the weak ties hypothesis, a central concept in social network analysis. The percolation properties of the network are found to depend on the type and order of removed links, and they can help understand how the local structure of the network manifests itself at the global level. We hope that our results will contribute to modelling weighted large-scale social networks, and believe that the systematic approach followed here can be adopted to study other weighted networks.

  5. Non-parametric co-clustering of large scale sparse bipartite networks on the GPU

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Mørup, Morten; Hansen, Lars Kai

    2011-01-01

    of row and column clusters from a hypothesis space of an infinite number of clusters. To reach large scale applications of co-clustering we exploit that parameter inference for co-clustering is well suited for parallel computing. We develop a generic GPU framework for efficient inference on large scale...... sparse bipartite networks and achieve a speedup of two orders of magnitude compared to estimation based on conventional CPUs. In terms of scalability we find for networks with more than 100 million links that reliable inference can be achieved in less than an hour on a single GPU. To efficiently manage...

  6. Role of Delays in Shaping Spatiotemporal Dynamics of Neuronal Activity in Large Networks

    International Nuclear Information System (INIS)

    Roxin, Alex; Brunel, Nicolas; Hansel, David

    2005-01-01

    We study the effect of delays on the dynamics of large networks of neurons. We show that delays give rise to a wealth of bifurcations and to a rich phase diagram, which includes oscillatory bumps, traveling waves, lurching waves, standing waves arising via a period-doubling bifurcation, aperiodic regimes, and regimes of multistability. We study the existence and the stability of the various dynamical patterns analytically and numerically in a simplified rate model as a function of the interaction parameters. The results derived in that framework allow us to understand the origin of the diversity of dynamical states observed in large networks of spiking neurons

  7. Increasing the appeal and utilization of services for alcohol and drug problems: what consumers and their social networks prefer.

    Science.gov (United States)

    Tucker, Jalie A; Foushee, H Russell; Simpson, Cathy A

    2009-01-01

    A large gap exists in the United States between population need and the utilization of treatment services for substance-related problems. Surveying consumer preferences may provide valuable information for developing more attractive services with greater reach and impact on population health. A state-level telephone survey using random digit dialling sampling methods assessed preferences for available professional, mutual help, and lay resources, as well as innovative computerized and self-help resources that enhance anonymity (N=439 households in Alabama). Respondents preferred help that involved personal contact compared to computerized help or self-help, but were indifferent whether personalized help was dispensed by professional or lay providers. Attractive service features included lower cost, insurance coverage, confidentiality, rapid and convenient appointments, and addressing functional problems and risks of substance misuse. Respondents in households with a member who misused substances rated services more negatively, especially if services had been used. The findings highlight the utility of viewing substance misusers and their social networks as consumers, and the implications for improving the system of care and for designing and marketing services that are responsive to user preferences are discussed.

  8. How Did the Information Flow in the #AlphaGo Hashtag Network? A Social Network Analysis of the Large-Scale Information Network on Twitter.

    Science.gov (United States)

    Kim, Jinyoung

    2017-12-01

    As it becomes common for Internet users to use hashtags when posting and searching information on social media, it is important to understand who builds a hashtag network and how information is circulated within the network. This article focused on unlocking the potential of the #AlphaGo hashtag network by addressing the following questions. First, the current study examined whether traditional opinion leadership (i.e., the influentials hypothesis) or grassroot participation by the public (i.e., the interpersonal hypothesis) drove dissemination of information in the hashtag network. Second, several unique patterns of information distribution by key users were identified. Finally, the association between attributes of key users who exerted great influence on information distribution (i.e., the number of followers and follows) and their central status in the network was tested. To answer the proffered research questions, a social network analysis was conducted using a large-scale hashtag network data set from Twitter (n = 21,870). The results showed that the leading actors in the network were actively receiving information from their followers rather than serving as intermediaries between the original information sources and the public. Moreover, the leading actors played several roles (i.e., conversation starters, influencers, and active engagers) in the network. Furthermore, the number of their follows and followers were significantly associated with their central status in the hashtag network. Based on the results, the current research explained how the information was exchanged in the hashtag network by proposing the reciprocal model of information flow.
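
    As an illustration of the kind of social network analysis described (not the author's actual pipeline or data), in-degree and betweenness centrality over a small, hypothetical directed interaction graph can be computed with networkx as follows.

```python
import networkx as nx

# Hypothetical directed edges: (source user, user who amplified/replied).
edges = [("news_bot", "alice"), ("alice", "bob"), ("alice", "carol"),
         ("bob", "carol"), ("carol", "dave"), ("dave", "alice")]

G = nx.DiGraph(edges)

# Who receives the most information (in-degree), and who sits on the
# most shortest paths between other users (betweenness)?
in_deg = dict(G.in_degree())
betweenness = nx.betweenness_centrality(G)

for user in G.nodes:
    print(f"{user:10s} in-degree={in_deg[user]} "
          f"betweenness={betweenness[user]:.2f}")
```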

  9. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.

  10. Large-scale brain network coupling predicts acute nicotine abstinence effects on craving and cognitive function.

    Science.gov (United States)

    Lerman, Caryn; Gu, Hong; Loughead, James; Ruparel, Kosha; Yang, Yihong; Stein, Elliot A

    2014-05-01

    Interactions of large-scale brain networks may underlie cognitive dysfunctions in psychiatric and addictive disorders. The objectives were to test the hypothesis that the strength of coupling among 3 large-scale brain networks--salience, executive control, and default mode--reflects the state of nicotine withdrawal (vs smoking satiety) and predicts abstinence-induced craving and cognitive deficits, and to develop a resource allocation index (RAI) that reflects the combined strength of interactions among the 3 networks. A within-subject functional magnetic resonance imaging study in an academic medical center compared resting-state functional connectivity coherence strength after 24 hours of abstinence and after smoking satiety, and we examined the relationship of abstinence-induced changes in the RAI with alterations in subjective, behavioral, and neural functions. We included 37 healthy smoking volunteers, aged 19 to 61 years, for analyses. The intervention was 24 hours of abstinence vs smoking satiety; the primary outcome was inter-network connectivity strength, and the secondary outcome was its relationship with subjective, behavioral, and neural measures of nicotine withdrawal during abstinence vs smoking satiety states. The RAI was significantly lower in the abstinent compared with the smoking satiety states (left RAI, P = .002; right RAI, P = .04), suggesting weaker inhibition between the default mode and salience networks. Weaker inter-network connectivity (reduced RAI) predicted abstinence-induced cravings to smoke (r = -0.59; P = .007) and less suppression of default mode activity during performance of a subsequent working memory task (ventromedial prefrontal cortex, r = -0.66, P = .003; posterior cingulate cortex, r = -0.65, P = .001). Alterations in coupling of the salience and default mode networks and the inability to disengage from the default mode network may be critical in cognitive/affective alterations that underlie nicotine dependence.

  11. Laparoscopic Removal of a Large Ovarian Mass Utilizing Planned Trocar Puncture

    OpenAIRE

    Stitely, Michael L.

    2012-01-01

    Background: Large cystic ovarian masses pose technical challenges to the laparoscopic surgeon. Removing large, potentially malignant specimens must be done with care to avoid the leakage of cyst fluid into the abdominal cavity. Case: We present the case of a large ovarian cystic mass treated laparoscopically with intentional trocar puncture of the mass to drain and remove the mass. Discussion: Large cystic ovarian masses can be removed laparoscopically with intentional trocar puncture of the ...

  12. Limits to the development of feed-forward structures in large recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2011-02-01

    Full Text Available Spike-timing dependent plasticity (STDP) has traditionally been of great interest to theoreticians, as it seems to provide an answer to the question of how the brain can develop functional structure in response to repeated stimuli. However, despite this high level of interest, convincing demonstrations of this capacity in large, initially random networks have not been forthcoming. Such demonstrations as there are typically rely on constraining the problem artificially. Techniques include employing additional pruning mechanisms or STDP rules that enhance symmetry breaking, simulating networks with low connectivity that magnify competition between synapses, or combinations of the above. In this paper we first review modeling choices that carry particularly high risks of producing non-generalizable results in the context of STDP in recurrent networks. We then develop a theory for the development of feed-forward structure in random networks and conclude that an unstable fixed point in the dynamics prevents the stable propagation of structure in recurrent networks with weight-dependent STDP. We demonstrate that the key predictions of the theory hold in large-scale simulations. The theory provides insight into the reasons why such development does not take place in unconstrained systems and enables us to identify candidate biologically motivated adaptations to the balanced random network model that might enable it.
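
    A hedged sketch of a weight-dependent ("soft-bounded") STDP update of the kind the abstract refers to: potentiation scales with (1 - w) and depression with w, so weights are pushed away from the bounds. The rule and parameter values below are illustrative, not those used in the paper.

        # Weight-dependent STDP update for a single pre/post spike pair.
        import math

        LAMBDA, ALPHA, MU, TAU = 0.01, 1.05, 0.4, 20.0   # TAU in ms

        def stdp_update(w, dt):
            """Return the new weight after one spike pair.

            dt = t_post - t_pre (ms); dt > 0 means pre-before-post (potentiation).
            w is assumed normalized to [0, 1].
            """
            if dt > 0:
                w = w + LAMBDA * (1.0 - w) ** MU * math.exp(-dt / TAU)
            else:
                w = w - LAMBDA * ALPHA * w ** MU * math.exp(dt / TAU)
            return min(max(w, 0.0), 1.0)

        # Repeated causal pairing slowly strengthens the synapse...
        w = 0.5
        for _ in range(100):
            w = stdp_update(w, +5.0)
        print(f"after 100 causal pairings: w = {w:.3f}")
        # ...while anti-causal pairing weakens it.
        for _ in range(100):
            w = stdp_update(w, -5.0)
        print(f"after 100 anti-causal pairings: w = {w:.3f}")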

  13. Large scale silver nanowires network fabricated by MeV hydrogen (H+) ion beam irradiation

    International Nuclear Information System (INIS)

    S, Honey; S, Naseem; A, Ishaq; M, Maaza; M T, Bhatti; D, Wan

    2016-01-01

    A random two-dimensional large-scale nano-network of silver nanowires (Ag-NWs) is fabricated by MeV hydrogen (H+) ion beam irradiation. Ag-NWs are irradiated under the H+ ion beam at different ion fluences at room temperature. The Ag-NW network is fabricated by H+ ion beam-induced welding of Ag-NWs at intersecting positions. H+ ion beam-induced welding is confirmed by transmission electron microscopy (TEM) and scanning electron microscopy (SEM). Moreover, the structure of the Ag-NWs remains stable under the H+ ion beam, and the networks are optically transparent. Morphology also remains stable under H+ ion beam irradiation. No slicing or cutting of Ag-NWs is observed under MeV H+ ion beam irradiation. The results show that the formation of the Ag-NW network proceeds through three steps: ion beam-induced thermal spikes lead to local heating of the Ag-NWs, the formation of simple junctions on a small scale, and the formation of a large-scale network. This observation is useful for Ag-NW-based devices in space, where protons are abundant in an energy range from MeV to GeV. This high-quality Ag-NW network can also be used as a transparent electrode for optoelectronic devices. (paper)

  14. Directed partial correlation: inferring large-scale gene regulatory network through induced topology disruptions.

    Directory of Open Access Journals (Sweden)

    Yinyin Yuan

    Full Text Available Inferring regulatory relationships among many genes based on their temporal variation in transcript abundance has been a popular research topic. Due to the nature of microarray experiments, classical tools for time series analysis lose power since the number of variables far exceeds the number of the samples. In this paper, we describe some of the existing multivariate inference techniques that are applicable to hundreds of variables and show the potential challenges for small-sample, large-scale data. We propose a directed partial correlation (DPC) method as an efficient and effective solution to regulatory network inference using these data. Specifically for genomic data, the proposed method is designed to deal with large-scale datasets. It combines the efficiency of partial correlation for setting up network topology by testing conditional independence, and the concept of Granger causality to assess topology change with induced interruptions. The idea is that when a transcription factor is induced artificially within a gene network, the disruption of the network by the induction signifies a gene's role in transcriptional regulation. The benchmarking results using GeneNetWeaver, the simulator for the DREAM challenges, provide strong evidence of the outstanding performance of the proposed DPC method. When applied to real biological data, the inferred starch metabolism network in Arabidopsis reveals many biologically meaningful network modules worthy of further investigation. These results collectively suggest DPC is a versatile tool for genomics research. The R package DPC is available for download (http://code.google.com/p/dpcnet/).
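
    A minimal sketch of the partial-correlation step used to set up network topology: invert the sample covariance (precision matrix), convert it to partial correlations, and keep edges above a threshold. The Granger-style directionality that DPC adds through induced disruptions is not shown; the data and threshold are synthetic.

        # Undirected network skeleton from partial correlations.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_genes = 50, 8
        X = rng.normal(size=(n_samples, n_genes))      # stand-in expression matrix
        X[:, 1] += 0.8 * X[:, 0]                       # plant one dependency

        cov = np.cov(X, rowvar=False)
        prec = np.linalg.pinv(cov)                     # pseudo-inverse for stability

        d = np.sqrt(np.diag(prec))
        partial_corr = -prec / np.outer(d, d)          # rho_ij = -p_ij / sqrt(p_ii p_jj)
        np.fill_diagonal(partial_corr, 1.0)

        threshold = 0.3
        edges = [(i, j, round(partial_corr[i, j], 2))
                 for i in range(n_genes) for j in range(i + 1, n_genes)
                 if abs(partial_corr[i, j]) >= threshold]
        print("undirected skeleton edges (i, j, partial corr):", edges)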

  15. Quality of electric service in utility distribution networks under electromagnetic compatibility principles. [ENEL

    Energy Technology Data Exchange (ETDEWEB)

    Chizzolini, P.; Lagostena, L.; Mirra, C.; Sani, G. (ENEL, Rome Milan (Italy))

    1989-03-01

    The electromagnetic compatibility criteria being worked out in international standardization activities require the characteristics of public utility distribution networks to be established as a reference environment. This is necessary for gauging immunity levels on the user side and for defining disturbance emission limits, and it represents a new way of looking at the quality of electric service. Consequently, the phenomena that affect electric service must be checked and specified in a homogeneous manner, using experimental tests together with the collection and processing of operating data. In addition to testing techniques, this paper describes the checking procedures for the quality of electric service as implemented in the information system developed by ENEL (the Italian Electricity Board) for its distribution activities. The first reference data obtained nationally and internationally on voltage shape and supply continuity are also presented.

  16. Renewable Resources: a national catalog of model projects. Volume 4. Western Solar Utilization Network Region

    Energy Technology Data Exchange (ETDEWEB)

    None

    1980-07-01

    This compilation of diverse conservation and renewable energy projects across the United States was prepared through the enthusiastic participation of solar and alternate energy groups from every state and region. Compiled and edited by the Center for Renewable Resources, these projects reflect many levels of innovation and technical expertise. In many cases, a critical analysis is presented of how projects performed and of the institutional conditions associated with their success or failure. Some 2000 projects are included in this compilation; most have worked, some have not. Information about all is presented to aid learning from these experiences. The four volumes in this set are arranged in state sections by geographic region, coinciding with the four Regional Solar Energy Centers. The table of contents is organized by project category so that maximum cross-referencing may be obtained. This volume includes information on the Western Solar Utilization Network Region. (WHK)

  17. Utilization of lunar materials and expertise for large scale operations in space: Abstracts. [lunar bases and space industrialization

    Science.gov (United States)

    Criswell, D. R. (Editor)

    1976-01-01

    The practicality of exploiting the moon, not only as a source of materials for large habitable structures at Lagrangian points, but also as a base for colonization is discussed in abstracts of papers presented at a special session on lunar utilization. Questions and answers which followed each presentation are included after the appropriate abstract. Author and subject indexes are provided.

  18. Active self-testing noise measurement sensors for large-scale environmental sensor networks.

    Science.gov (United States)

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-12-13

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors renders large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower-quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and indirectly the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our mid-range sensor (around €50) effectively detected all malfunctions experienced in laboratory tests and outdoor deployments, with a few false positives.
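
    A hedged sketch of the self-test idea: excite the microphone with a known frequency sweep, estimate a band-averaged magnitude response from the recording, and flag the sensor when the response drifts too far from a stored baseline. The "recordings" below are simulated arrays; on real hardware they would come from the embedded speaker/microphone pair, and the tolerances are illustrative.

        # Simulated frequency-sweep self-test for a microphone sensor.
        import numpy as np

        FS = 16000                     # sample rate (Hz)
        DURATION = 1.0                 # sweep length (s)
        t = np.arange(0, DURATION, 1.0 / FS)

        # Logarithmic sweep from 100 Hz to 6 kHz.
        f0, f1 = 100.0, 6000.0
        phase = 2 * np.pi * f0 * (DURATION / np.log(f1 / f0)) * (np.exp(t / DURATION * np.log(f1 / f0)) - 1)
        sweep = np.sin(phase)

        def magnitude_response(excitation, recording, n_bands=16):
            """Band-averaged |H(f)| estimated as |FFT(rec)| / |FFT(exc)|."""
            H = np.abs(np.fft.rfft(recording)) / (np.abs(np.fft.rfft(excitation)) + 1e-12)
            bands = np.array_split(H[1:], n_bands)
            return np.array([b.mean() for b in bands])

        # Simulated healthy and degraded recordings (degraded = high bands attenuated).
        healthy = 0.9 * sweep
        spectrum = np.fft.rfft(sweep)
        attenuation = np.linspace(1.0, 0.2, spectrum.size)
        degraded = np.fft.irfft(spectrum * attenuation, n=sweep.size)

        baseline = magnitude_response(sweep, healthy)   # stored at commissioning time

        def self_test(recording, tol_db=6.0):
            resp = magnitude_response(sweep, recording)
            drift_db = 20 * np.log10((resp + 1e-12) / (baseline + 1e-12))
            return bool(np.any(np.abs(drift_db) > tol_db))

        print("healthy sensor flagged: ", self_test(healthy))
        print("degraded sensor flagged:", self_test(degraded))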

  19. Large-Scale Cooperative Task Distribution on Peer-to-Peer Networks

    Science.gov (United States)

    2012-01-01

    The disadvantages of ML-Chord are its fixed size (two layers) and limited scalability for large-scale systems. RC-Chord extends ML-Chord ... configurable before runtime. This can be improved by incorporating a distributed learning algorithm to tune the number and range of the DLoE tracking ...

  20. A model-based eco-routing strategy for electric vehicles in large urban networks

    OpenAIRE

    De Nunzio , Giovanni; Thibault , Laurent; Sciarretta , Antonio

    2016-01-01

    A novel eco-routing navigation strategy and energy consumption modeling approach for electric vehicles are presented in this work. Speed fluctuations and road network infrastructure have a large impact on vehicular energy consumption. Neglecting these effects may lead to large errors in eco-routing navigation, which could trivially select the route with the lowest average speed. We propose an energy consumption model that considers both accelerations and impact of the ...

  1. On a digital wireless impact-monitoring network for large-scale composite structures

    International Nuclear Information System (INIS)

    Yuan, Shenfang; Mei, Hanfei; Qiu, Lei; Ren, Yuanqiang

    2014-01-01

    Impact, which may occur during manufacture, service or maintenance, is one of the major concerns to be monitored throughout the lifetime of aircraft composite structures. Aiming at monitoring impacts online while minimizing the weight added to the aircraft to meet the strict limitations of aerospace engineering, this paper puts forward a new digital wireless network based on miniaturized wireless digital impact-monitoring nodes developed for large-scale composite structures. In addition to investigations on the design methods of the network architecture, time synchronization and implementation method, a conflict resolution method based on the feature parameters of digital sequences is first presented to address impact localization conflicts when several nodes are arranged close together. To verify the feasibility and stability of the wireless network, experiments are performed on a complex aircraft composite wing box and an unmanned aerial vehicle (UAV) composite wing. Experimental results show the successful design of the presented network. (paper)

  2. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning

    Directory of Open Access Journals (Sweden)

    Yang Liu

    2015-01-01

    Full Text Available Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.

  3. MapReduce Based Parallel Neural Networks in Enabling Large Scale Machine Learning.

    Science.gov (United States)

    Liu, Yang; Yang, Jie; Huang, Yuan; Xu, Lixiong; Li, Siguang; Qi, Man

    2015-01-01

    Artificial neural networks (ANNs) have been widely used in pattern recognition and classification applications. However, ANNs are notably slow in computation especially when the size of data is large. Nowadays, big data has received a momentum from both industry and academia. To fulfill the potentials of ANNs for big data applications, the computation process must be speeded up. For this purpose, this paper parallelizes neural networks based on MapReduce, which has become a major computing model to facilitate data intensive applications. Three data intensive scenarios are considered in the parallelization process in terms of the volume of classification data, the size of the training data, and the number of neurons in the neural network. The performance of the parallelized neural networks is evaluated in an experimental MapReduce computer cluster from the aspects of accuracy in classification and efficiency in computation.
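
    A hedged sketch of the data-parallel pattern these two records describe, with a toy least-squares model standing in for a full neural network: the map step computes a gradient on one data shard, the reduce step combines the shard gradients, and the driver applies a gradient-descent update. This runs in plain Python/NumPy rather than on a real MapReduce cluster.

        # MapReduce-style data-parallel training on a toy model.
        import numpy as np
        from functools import reduce

        rng = np.random.default_rng(0)
        true_w = np.array([2.0, -3.0, 0.5])
        X = rng.normal(size=(1200, 3))
        y = X @ true_w + 0.01 * rng.normal(size=1200)

        shards = np.array_split(np.arange(len(X)), 4)      # 4 "mapper" partitions

        def map_gradient(w, idx):
            """Mapper: gradient of the squared error on one shard."""
            Xs, ys = X[idx], y[idx]
            return Xs.T @ (Xs @ w - ys)

        def reduce_gradients(g1, g2):
            """Reducer: combine partial gradients."""
            return g1 + g2

        w = np.zeros(3)
        lr = 0.1 / len(X)
        for epoch in range(200):
            grads = [map_gradient(w, idx) for idx in shards]   # map phase
            total = reduce(reduce_gradients, grads)            # reduce phase
            w -= lr * total
        print("recovered weights:", np.round(w, 3))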

  4. Fast and accurate detection of spread source in large complex networks.

    Science.gov (United States)

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic or the source of a rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests on synthetic networks and on the real Gnutella network, under the constraint that the spreaders' IDs are unknown to observers, demonstrate that GMLA yields higher-quality localization results than PTVA for scale-free networks.
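
    A simplified sketch of the observer-based idea, under stated assumptions: keep only the k observers that report the earliest arrival times (the "high quality" observers), then score every candidate node by how well its shortest-path distances explain the observed delays. The real GMLA follows a likelihood gradient instead of scoring all nodes, and uses the full PTVA likelihood; the graph and reports here are simulated.

        # Simplified observer-based source localization on a synthetic network.
        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(3)
        G = nx.barabasi_albert_graph(200, 3, seed=3)
        true_source = 17

        # Simulated observer reports: arrival time ~ hop distance plus jitter.
        observers = rng.choice(list(G.nodes()), size=20, replace=False)
        dist_from_source = nx.single_source_shortest_path_length(G, true_source)
        reports = {int(o): dist_from_source[int(o)] + rng.normal(0, 0.3) for o in observers}

        k = 8
        best_observers = sorted(reports, key=reports.get)[:k]    # earliest reports only
        t = np.array([reports[o] for o in best_observers])

        def score(candidate):
            d = np.array([nx.shortest_path_length(G, candidate, o) for o in best_observers])
            # Good candidates make the delays an affine function of distance:
            # use the (negative) residual of a least-squares fit t ~ a*d + b.
            a, b = np.polyfit(d, t, 1) if np.ptp(d) > 0 else (0.0, t.mean())
            return -np.sum((t - (a * d + b)) ** 2)

        estimate = max(G.nodes(), key=score)
        print("true source:", true_source, "estimate:", estimate)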

  5. Congenital blindness is associated with large-scale reorganization of anatomical networks.

    Science.gov (United States)

    Hasson, Uri; Andric, Michael; Atilgan, Hicret; Collignon, Olivier

    2016-03-01

    Blindness is a unique model for understanding the role of experience in the development of the brain's functional and anatomical architecture. Documenting changes in the structure of anatomical networks for this population would substantiate the notion that the brain's core network-level organization may undergo neuroplasticity as a result of life-long experience. To examine this issue, we compared whole-brain networks of regional cortical-thickness covariance in early blind and matched sighted individuals. This covariance is thought to reflect signatures of integration between systems involved in similar perceptual/cognitive functions. Using graph-theoretic metrics, we identified a unique mode of anatomical reorganization in the blind that differed from that found for sighted. This was seen in that network partition structures derived from subgroups of blind were more similar to each other than they were to partitions derived from sighted. Notably, after deriving network partitions, we found that language and visual regions tended to reside within separate modules in sighted but showed a pattern of merging into shared modules in the blind. Our study demonstrates that early visual deprivation triggers a systematic large-scale reorganization of whole-brain cortical-thickness networks, suggesting changes in how occipital regions interface with other functional networks in the congenitally blind. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  6. High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.

    Science.gov (United States)

    Andras, Peter

    2018-02-01

    Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of the generation of the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real-world data.
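
    A hedged sketch of the "project first, then approximate" idea: fit a projection (plain PCA here, as a stand-in for whatever projection is used) on a small uniformly drawn sample, map all points into the low-dimensional space, and train the network there. The data is synthetic (a 3-D manifold embedded linearly in 50 dimensions) and the comparison is purely illustrative.

        # Compare an MLP trained in the ambient space vs. in a projected space.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n, latent_dim, ambient_dim = 5000, 3, 50
        Z = rng.uniform(-1, 1, size=(n, latent_dim))          # intrinsic coordinates
        A = rng.normal(size=(latent_dim, ambient_dim))
        X = Z @ A                                             # embed into 50-D
        y = np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]               # target defined on the manifold

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        # Projection estimated from a sparse, uniformly drawn sample only.
        sample_idx = rng.choice(len(X_train), size=200, replace=False)
        proj = PCA(n_components=latent_dim).fit(X_train[sample_idx])

        def fit_score(train, test):
            net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
            net.fit(train, y_train)
            return net.score(test, y_test)

        print("R^2 in ambient 50-D space :", round(fit_score(X_train, X_test), 3))
        print("R^2 in projected 3-D space:", round(fit_score(proj.transform(X_train), proj.transform(X_test)), 3))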

  7. Supplier-independent fuel management from the viewpoint of a large German utility - Objectives and experience

    International Nuclear Information System (INIS)

    Kallmeyer, D.H.; Petersen, K.

    1986-01-01

    Internationally, many utilities operating nuclear power plants tend to have alternative fuel supplies and as a consequence also have to build up supplier-independent fuel management. The main reasons are competition within the fuel assembly market as a price-regulating mechanism; quality comparison and corresponding improvements in fuel technology, documentation systems and licensing support activities; a greater need for the manufacturer to be open to innovations; improved supply reliability through redundant manufacturer qualification; and a gain of know-how for the utility by collecting and comparing the state of the art of the different suppliers in the important technical and physical disciplines.

  8. Energetic and Economic Assessment of Pipe Network Effects on Unused Energy Source System Performance in Large-Scale Horticulture Facilities

    Directory of Open Access Journals (Sweden)

    Jae Ho Lee

    2015-04-01

    Full Text Available As the use of fossil fuel has increased, not only in construction, but also in agriculture due to the drastic industrial development in recent times, the problems of heating costs and global warming are getting worse. Therefore, the introduction of more reliable and environmentally-friendly alternative energy sources has become urgent and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and the potential of various unused energy sources which have large potential but are nowadays wasted due to limitations in their utilization. This study investigated the effects of the distance between the greenhouse and the actual heat source by taking into account the heat transfer taking place inside the pipe network. This study considered CO2 emissions and economic aspects to determine the optimal heat source. Payback period analysis against initial investment cost shows that a heat pump based on a power plant’s waste heat has the shortest payback period of 7.69 years at a distance of 0 km. On the other hand, a heat pump based on geothermal heat had the shortest payback period, of 10.17 years, at a distance of 5 km, indicating that heat pumps utilizing geothermal heat were the most effective model if the heat transfer inside the pipe network between the greenhouse and the actual heat source is taken into account.

  9. A GPU-based solution for fast calculation of the betweenness centrality in large weighted networks

    Directory of Open Access Journals (Sweden)

    Rui Fan

    2017-12-01

    Full Text Available Betweenness, a widely employed centrality measure in network science, is a decent proxy for investigating network loads and rankings. However, its extremely high computational cost greatly hinders its applicability in large networks. Although several parallel algorithms have been presented to reduce its calculation cost for unweighted networks, a fast solution for weighted networks, which are commonly encountered in many realistic applications, is still lacking. In this study, we develop an efficient parallel GPU-based approach to boost the calculation of the betweenness centrality (BC) for large weighted networks. We parallelize the traditional Dijkstra algorithm by selecting more than one frontier vertex each time and then inspecting the frontier vertices simultaneously. By combining the parallel SSSP algorithm with the parallel BC framework, our GPU-based betweenness algorithm achieves much better performance than its CPU counterparts. Moreover, to further improve performance, we integrate the work-efficient strategy, and to address the load-imbalance problem, we introduce a warp-centric technique, which assigns many threads rather than one to a single frontier vertex. Experiments on both realistic and synthetic networks demonstrate the efficiency of our solution, which achieves 2.9× to 8.44× speedups over the parallel CPU implementation. Our algorithm is open-source and free to the community; it is publicly available through https://dx.doi.org/10.6084/m9.figshare.4542405. Considering the pervasive deployment and declining price of GPUs in personal computers and servers, our solution will offer unprecedented opportunities for exploring betweenness-related problems and will motivate follow-up efforts in network science.
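
    For reference, a sequential sketch of weighted betweenness centrality (Brandes' algorithm with a Dijkstra traversal per source). This is the computation that GPU approaches parallelize over sources and frontier vertices; it is not the authors' CUDA implementation, and the toy graph is made up.

        # Brandes' betweenness centrality for a weighted directed graph.
        import heapq
        from collections import defaultdict

        def betweenness(graph):
            """graph: {u: {v: weight}} with positive weights; every node is a key."""
            bc = {v: 0.0 for v in graph}
            for s in graph:
                dist = {v: float("inf") for v in graph}
                sigma = defaultdict(float)       # number of shortest paths
                preds = defaultdict(list)
                order = []                       # settled vertices, nearest first
                dist[s], sigma[s] = 0.0, 1.0
                pq = [(0.0, s)]
                while pq:
                    d, v = heapq.heappop(pq)
                    if d > dist[v]:
                        continue                 # stale queue entry
                    order.append(v)
                    for w, weight in graph[v].items():
                        nd = d + weight
                        if nd < dist[w]:
                            dist[w], sigma[w], preds[w] = nd, sigma[v], [v]
                            heapq.heappush(pq, (nd, w))
                        elif nd == dist[w]:
                            sigma[w] += sigma[v]
                            preds[w].append(v)
                delta = defaultdict(float)       # dependency accumulation
                for w in reversed(order):
                    for v in preds[w]:
                        delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
                    if w != s:
                        bc[w] += delta[w]
            return bc

        toy = {"a": {"b": 1.0, "c": 4.0}, "b": {"c": 1.0, "d": 2.0},
               "c": {"d": 1.0}, "d": {}}
        print(betweenness(toy))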

  10. Towards a Versatile Problem Diagnosis Infrastructure for Large Wireless Sensor Networks

    NARCIS (Netherlands)

    Iwanicki, Konrad; Steen, van Maarten

    2007-01-01

    In this position paper, we address the issue of durable maintenance of a wireless sensor network, which will be crucial if the vision of large, long-lived sensornets is to become reality. Durable maintenance requires tools for diagnosing and fixing occurring problems, which can range from

  11. Extraction of drainage networks from large terrain datasets using high throughput computing

    Science.gov (United States)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.

  12. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Science.gov (United States)

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  13. Received signal strength in large-scale wireless relay sensor network: a stochastic ray approach

    NARCIS (Netherlands)

    Hu, L.; Chen, Y.; Scanlon, W.G.

    2011-01-01

    The authors consider a point percolation lattice representation of a large-scale wireless relay sensor network (WRSN) deployed in a cluttered environment. Each relay sensor corresponds to a grid point in the random lattice and the signal sent by the source is modelled as an ensemble of photons that

  14. Node localization algorithm of wireless sensor networks for large electrical equipment monitoring application

    DEFF Research Database (Denmark)

    Chen, Qinyin; Hu, Y.; Chen, Zhe

    2016-01-01

    Node localization technology is an important technology for the Wireless Sensor Networks (WSNs) applications. An improved 3D node localization algorithm is proposed in this paper, which is based on a Multi-dimensional Scaling (MDS) node localization algorithm for large electrical equipment monito...

  15. Par@Graph - a parallel toolbox for the construction and analysis of large complex climate networks

    NARCIS (Netherlands)

    Tantet, A.J.J.

    2015-01-01

    In this paper, we present Par@Graph, a software toolbox to reconstruct and analyze complex climate networks having a large number of nodes (up to at least 10^6) and edges (up to at least 10^12). The key innovation is an efficient set of parallel software tools designed to leverage the inherited hybrid

  16. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  17. Streaming Parallel GPU Acceleration of Large-Scale filter-based Spiking Neural Networks

    NARCIS (Netherlands)

    L.P. Slazynski (Leszek); S.M. Bohte (Sander)

    2012-01-01

    The arrival of graphics processing (GPU) cards suitable for massively parallel computing promises affordable large-scale neural network simulation previously only available at supercomputing facilities. While the raw numbers suggest that GPUs may outperform CPUs by at least an order of

  18. Local, distributed topology control for large-scale wireless ad-hoc networks

    NARCIS (Netherlands)

    Nieberg, T.; Hurink, Johann L.

    In this document, topology control of a large-scale, wireless network by a distributed algorithm that uses only locally available information is presented. Topology control algorithms adjust the transmission power of wireless nodes to create a desired topology. The algorithm, named local power

  19. The Use of Online Social Networks by Polish Former Erasmus Students: A Large-Scale Survey

    Science.gov (United States)

    Bryla, Pawel

    2014-01-01

    There is an increasing role of online social networks in the life of young Poles. We conducted a large-scale survey among Polish former Erasmus students. We have received 2450 completed questionnaires from alumni of 115 higher education institutions all over Poland. 85.4% of our respondents reported they kept in touch with their former Erasmus…

  20. Largenet2: an object-oriented programming library for simulating large adaptive networks.

    Science.gov (United States)

    Zschaler, Gerd; Gross, Thilo

    2013-01-15

    The largenet2 C++ library provides an infrastructure for the simulation of large dynamic and adaptive networks with discrete node and link states. The library is released as free software. It is available at http://biond.github.com/largenet2. Largenet2 is licensed under the Creative Commons Attribution-NonCommercial 3.0 Unported License. gerd@biond.org

  1. Cooperative Caching in Mobile Ad Hoc Networks Based on Data Utility

    Directory of Open Access Journals (Sweden)

    Narottam Chand

    2007-01-01

    Full Text Available Cooperative caching, which allows sharing and coordination of cached data among clients, is a potential technique to improve the data access performance and availability in mobile ad hoc networks. However, variable data sizes, frequent data updates, limited client resources, insufficient wireless bandwidth and client mobility make cache management a challenge. In this paper, we propose a utility-based cache replacement policy, least utility value (LUV), to improve the data availability and reduce the local cache miss ratio. LUV considers several factors that affect cache performance, namely access probability, distance between the requester and data source/cache, coherency and data size. A cooperative cache management strategy, Zone Cooperative (ZC), is developed that employs LUV as replacement policy. In ZC, one-hop neighbors of a client form a cooperation zone since the cost for communication with them is low both in terms of energy consumption and message exchange. Simulation experiments have been conducted to evaluate the performance of the LUV-based ZC caching strategy. The simulation results show that the LUV replacement policy substantially outperforms the LRU policy.
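
    A hedged sketch of a utility-style replacement decision combining the factors the policy weighs: access probability, distance to the data source, remaining coherency (time-to-live) and item size. The functional form and the example items below are illustrative only, not the exact LUV formula from the paper.

        # Utility-based cache with eviction of the lowest-utility entry.
        import time

        class UtilityCache:
            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.items = {}   # key -> dict(size, access_prob, distance_hops, expires_at, value)

            def _utility(self, item):
                ttl = max(item["expires_at"] - time.time(), 0.0)
                # Higher utility: likely to be accessed, expensive to re-fetch,
                # still coherent for a while, and cheap to keep (small).
                return item["access_prob"] * item["distance_hops"] * ttl / item["size"]

            def put(self, key, value, size, access_prob, distance_hops, ttl_s):
                while self.used + size > self.capacity and self.items:
                    victim = min(self.items, key=lambda k: self._utility(self.items[k]))
                    self.used -= self.items.pop(victim)["size"]
                self.items[key] = {"value": value, "size": size, "access_prob": access_prob,
                                   "distance_hops": distance_hops,
                                   "expires_at": time.time() + ttl_s}
                self.used += size

        cache = UtilityCache(capacity_bytes=1000)
        cache.put("map_tile_a", b"...", size=400, access_prob=0.9, distance_hops=4, ttl_s=300)
        cache.put("map_tile_b", b"...", size=400, access_prob=0.1, distance_hops=1, ttl_s=30)
        # The third item forces eviction of the lowest-utility entry (map_tile_b).
        cache.put("map_tile_c", b"...", size=400, access_prob=0.7, distance_hops=3, ttl_s=300)
        print(sorted(cache.items))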

  2. Laparoscopic Removal of a Large Ovarian Mass Utilizing Planned Trocar Puncture

    Science.gov (United States)

    2012-01-01

    Background: Large cystic ovarian masses pose technical challenges to the laparoscopic surgeon. Removing large, potentially malignant specimens must be done with care to avoid the leakage of cyst fluid into the abdominal cavity. Case: We present the case of a large ovarian cystic mass treated laparoscopically with intentional trocar puncture of the mass to drain and remove the mass. Discussion: Large cystic ovarian masses can be removed laparoscopically with intentional trocar puncture of the mass to facilitate removal without leakage of cyst fluid. PMID:22906344

  3. Limitations of demand- and pressure-driven modeling for large deficient networks

    Science.gov (United States)

    Braun, Mathias; Piller, Olivier; Deuerlein, Jochen; Mortazavi, Iraj

    2017-10-01

    The calculation of hydraulic state variables for a network is an important task in managing the distribution of potable water. Over the years the mathematical modeling process has been improved by numerous researchers for utilization in new computer applications and the more realistic modeling of water distribution networks. But, in spite of these continuous advances, there are still a number of physical phenomena that may not be tackled correctly by current models. This paper will take a closer look at the two modeling paradigms given by demand- and pressure-driven modeling. The basic equations are introduced and parallels are drawn with the optimization formulations from electrical engineering. These formulations guarantee the existence and uniqueness of the solution. One of the central questions of the French and German research project ResiWater is the investigation of the network resilience in the case of extreme events or disasters. Under such extraordinary conditions where models are pushed beyond their limits, we talk about deficient network models. Examples of deficient networks are given by highly regulated flow, leakage or pipe bursts and cases where pressure falls below the vapor pressure of water. These examples are presented and analyzed with respect to the solvability and physical correctness of the solutions produced by demand- and pressure-driven models.

  4. Large-scale grid management

    International Nuclear Information System (INIS)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-01-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows that, so far, the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  5. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable and computationally efficient, and the model natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.

  6. Network theory-based analysis of risk interactions in large engineering projects

    International Nuclear Information System (INIS)

    Fang, Chao; Marle, Franck; Zio, Enrico; Bocquet, Jean-Claude

    2012-01-01

    This paper presents an approach based on network theory to deal with risk interactions in large engineering projects. Indeed, such projects are exposed to numerous and interdependent risks of various kinds, which makes their management more difficult. In this paper, a topological analysis based on network theory is presented, which aims at identifying key elements in the structure of interrelated risks potentially affecting a large engineering project. This analysis serves as a powerful complement to classical project risk analysis. Its originality lies in the application of some network theory indicators to the project risk management field. The construction of the risk network requires the involvement of the project manager and other team members assigned to the risk management process. Its interpretation improves their understanding of risks and their potential interactions. The outcomes of the analysis provide support for decision-making regarding project risk management. An example of application to a real large engineering project is presented. The conclusion is that some new insights can be found about risks, about their interactions and about the global potential behavior of the project. - Highlights: ► The method addresses the modeling of complexity in project risk analysis. ► Network theory indicators highlight risks that classical criticality analysis does not. ► This topological analysis improves the project manager's understanding of risks and risk interactions. ► This helps the project manager make decisions that take into account each risk's position in the network. ► An application to a real tramway implementation project in a city is provided.

  7. Identifying influential nodes in large-scale directed networks: the role of clustering.

    Science.gov (United States)

    Chen, Duan-Bing; Gao, Hui; Lü, Linyuan; Zhou, Tao

    2013-01-01

    Identifying influential nodes in very large-scale directed networks is a big challenge relevant to disparate applications, such as accelerating information propagation, controlling rumors and diseases, designing search engines, and understanding hierarchical organization of social and biological networks. Known methods range from node centralities, such as degree, closeness and betweenness, to diffusion-based processes, like PageRank and LeaderRank. Some of these methods already take into account the influences of a node's neighbors but do not directly make use of the interactions among its neighbors. Local clustering is known to have negative impacts on the information spreading. We further show empirically that it also plays a negative role in generating local connections. Inspired by these facts, we propose a local ranking algorithm named ClusterRank, which takes into account not only the number of neighbors and the neighbors' influences, but also the clustering coefficient. Subject to the susceptible-infected-recovered (SIR) spreading model with constant infectivity, experimental results on two directed networks, a social network extracted from delicious.com and a large-scale short-message communication network, demonstrate that the ClusterRank outperforms some benchmark algorithms such as PageRank and LeaderRank. Furthermore, ClusterRank can also be applied to undirected networks where the superiority of ClusterRank is significant compared with degree centrality and k-core decomposition. In addition, ClusterRank, only making use of local information, is much more efficient than global methods: It takes only 191 seconds for a network with about [Formula: see text] nodes, more than 15 times faster than PageRank.
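
    A hedged sketch of the ClusterRank idea: a node's score combines the reach of its out-neighbors with a penalty for high local clustering, in the commonly cited form s_i = f(c_i) * sum over out-neighbors j of (k_out(j) + 1), with f(c) = 10^(-c). For simplicity the clustering coefficient below is taken from the undirected projection; the original algorithm uses a directed local clustering coefficient, and the graph is synthetic.

        # ClusterRank-style local ranking on a random directed graph.
        import networkx as nx

        def cluster_rank(G):
            clustering = nx.clustering(G.to_undirected())
            scores = {}
            for i in G.nodes():
                reach = sum(G.out_degree(j) + 1 for j in G.successors(i))
                scores[i] = 10.0 ** (-clustering[i]) * reach
            return scores

        G = nx.gnp_random_graph(100, 0.05, seed=1, directed=True)
        scores = cluster_rank(G)
        top5 = sorted(scores, key=scores.get, reverse=True)[:5]
        print("top-5 candidate spreaders:", top5)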

  8. Identifying influential nodes in large-scale directed networks: the role of clustering.

    Directory of Open Access Journals (Sweden)

    Duan-Bing Chen

    Full Text Available Identifying influential nodes in very large-scale directed networks is a big challenge relevant to disparate applications, such as accelerating information propagation, controlling rumors and diseases, designing search engines, and understanding hierarchical organization of social and biological networks. Known methods range from node centralities, such as degree, closeness and betweenness, to diffusion-based processes, like PageRank and LeaderRank. Some of these methods already take into account the influences of a node's neighbors but do not directly make use of the interactions among its neighbors. Local clustering is known to have negative impacts on the information spreading. We further show empirically that it also plays a negative role in generating local connections. Inspired by these facts, we propose a local ranking algorithm named ClusterRank, which takes into account not only the number of neighbors and the neighbors' influences, but also the clustering coefficient. Subject to the susceptible-infected-recovered (SIR) spreading model with constant infectivity, experimental results on two directed networks, a social network extracted from delicious.com and a large-scale short-message communication network, demonstrate that the ClusterRank outperforms some benchmark algorithms such as PageRank and LeaderRank. Furthermore, ClusterRank can also be applied to undirected networks where the superiority of ClusterRank is significant compared with degree centrality and k-core decomposition. In addition, ClusterRank, only making use of local information, is much more efficient than global methods: It takes only 191 seconds for a network with about [Formula: see text] nodes, more than 15 times faster than PageRank.

  9. ShakeNet: a portable wireless sensor network for instrumenting large civil structures

    Science.gov (United States)

    Kohler, Monica D.; Hao, Shuai; Mishra, Nilesh; Govindan, Ramesh; Nigbor, Robert

    2015-08-03

    We report our findings from a U.S. Geological Survey (USGS) National Earthquake Hazards Reduction Program-funded project to develop and test a wireless, portable, strong-motion network of up to 40 triaxial accelerometers for structural health monitoring. The overall goal of the project was to record ambient vibrations for several days from USGS-instrumented structures. Structural health monitoring has important applications in fields like civil engineering and the study of earthquakes. The emergence of wireless sensor networks provides a promising means to such applications. However, while most wireless sensor networks are still in the experimentation stage, very few take into consideration the realistic earthquake engineering application requirements. To collect comprehensive data for structural health monitoring for civil engineers, high-resolution vibration sensors and sufficient sampling rates should be adopted, which makes it challenging for current wireless sensor network technology in the following ways: processing capabilities, storage limit, and communication bandwidth. The wireless sensor network has to meet expectations set by wired sensor devices prevalent in the structural health monitoring community. For this project, we built and tested an application-realistic, commercially based, portable, wireless sensor network called ShakeNet for instrumentation of large civil structures, especially for buildings, bridges, or dams after earthquakes. Two to three people can deploy ShakeNet sensors within hours after an earthquake to measure the structural response of the building or bridge during aftershocks. ShakeNet involved the development of a new sensing platform (ShakeBox) running a software suite for networking, data collection, and monitoring. Deployments reported here on a tall building and a large dam were real-world tests of ShakeNet operation, and helped to refine both hardware and software. 

  10. Large-scale brain networks are distinctly affected in right and left mesial temporal lobe epilepsy.

    Science.gov (United States)

    de Campos, Brunno Machado; Coan, Ana Carolina; Lin Yasuda, Clarissa; Casseb, Raphael Fernandes; Cendes, Fernando

    2016-09-01

    Mesial temporal lobe epilepsy (MTLE) with hippocampus sclerosis (HS) is associated with functional and structural alterations extending beyond the temporal regions and abnormal pattern of brain resting state networks (RSNs) connectivity. We hypothesized that the interaction of large-scale RSNs is differently affected in patients with right- and left-MTLE with HS compared to controls. We aimed to determine and characterize these alterations through the analysis of 12 RSNs, functionally parceled in 70 regions of interest (ROIs), from resting-state functional-MRIs of 99 subjects (52 controls, 26 right- and 21 left-MTLE patients with HS). Image preprocessing and statistical analysis were performed using UF(2) C-toolbox, which provided ROI-wise results for intranetwork and internetwork connectivity. Intranetwork abnormalities were observed in the dorsal default mode network (DMN) in both groups of patients and in the posterior salience network in right-MTLE. Both groups showed abnormal correlation between the dorsal-DMN and the posterior salience, as well as between the dorsal-DMN and the executive-control network. Patients with left-MTLE also showed reduced correlation between the dorsal-DMN and visuospatial network and increased correlation between bilateral thalamus and the posterior salience network. The ipsilateral hippocampus stood out as a central area of abnormalities. Alterations on left-MTLE expressed a low cluster coefficient, whereas the altered connections on right-MTLE showed low cluster coefficient in the DMN but high in the posterior salience regions. Both right- and left-MTLE patients with HS have widespread abnormal interactions of large-scale brain networks; however, all parameters evaluated indicate that left-MTLE has a more intricate bihemispheric dysfunction compared to right-MTLE. Hum Brain Mapp 37:3137-3152, 2016. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  11. Identifying Influential Nodes in Large-Scale Directed Networks: The Role of Clustering

    Science.gov (United States)

    Chen, Duan-Bing; Gao, Hui; Lü, Linyuan; Zhou, Tao

    2013-01-01

    Identifying influential nodes in very large-scale directed networks is a big challenge relevant to disparate applications, such as accelerating information propagation, controlling rumors and diseases, designing search engines, and understanding hierarchical organization of social and biological networks. Known methods range from node centralities, such as degree, closeness and betweenness, to diffusion-based processes, like PageRank and LeaderRank. Some of these methods already take into account the influences of a node’s neighbors but do not directly make use of the interactions among its neighbors. Local clustering is known to have negative impacts on the information spreading. We further show empirically that it also plays a negative role in generating local connections. Inspired by these facts, we propose a local ranking algorithm named ClusterRank, which takes into account not only the number of neighbors and the neighbors’ influences, but also the clustering coefficient. Subject to the susceptible-infected-recovered (SIR) spreading model with constant infectivity, experimental results on two directed networks, a social network extracted from delicious.com and a large-scale short-message communication network, demonstrate that the ClusterRank outperforms some benchmark algorithms such as PageRank and LeaderRank. Furthermore, ClusterRank can also be applied to undirected networks where the superiority of ClusterRank is significant compared with degree centrality and k-core decomposition. In addition, ClusterRank, only making use of local information, is much more efficient than global methods: It takes only 191 seconds for a network with about nodes, more than 15 times faster than PageRank. PMID:24204833

  12. Public utilities in networks: competition perspectives and new regulations; Services publics en reseau: perspectives de concurrence et nouvelles regulations

    Energy Technology Data Exchange (ETDEWEB)

    Bergougnoux, J

    2000-07-01

    This report first reviews the historical specificities, the present-day situation and the prospects for evolution of network public utilities with respect to the 1996 European directive and to the four sectors of electricity, gas, railway transport and postal services. It then examines the new institutions and regulation procedures to be implemented in order to reconcile the public utility mission with fair competition. (J.S.)

  13. The Relationship of Policymaking and Networking Characteristics among Leaders of Large Urban Health Departments.

    Science.gov (United States)

    Leider, Jonathon P; Castrucci, Brian C; Harris, Jenine K; Hearne, Shelley

    2015-08-06

    The relationship between policy networks and policy development among local health departments (LHDs) is a growing area of interest to public health practitioners and researchers alike. In this study, we examine policy activity and ties between public health leadership across large urban health departments. This study uses data from a national profile of local health departments as well as responses from a survey sent to three staff members (local health official, chief of policy, chief science officer) in each of 16 urban health departments in the United States. Network questions related to frequency of contact with health department personnel in other cities. Using exponential random graph models, network density and centrality were examined, as were patterns of communication among those working on several policy areas using exponential random graph models. All 16 LHDs were active in communicating about chronic disease as well as about use of alcohol, tobacco, and other drugs (ATOD). Connectedness was highest among local health officials (density = .55), and slightly lower for chief science officers (d = .33) and chiefs of policy (d = .29). After accounting for organizational characteristics, policy homophily (i.e., when two network members match on a single characteristic) and tenure were the most significant predictors of formation of network ties. Networking across health departments has the potential for accelerating the adoption of public health policies. This study suggests similar policy interests and formation of connections among senior leadership can potentially drive greater connectedness among other staff.

  14. The Relationship of Policymaking and Networking Characteristics among Leaders of Large Urban Health Departments

    Directory of Open Access Journals (Sweden)

    Jonathon P. Leider

    2015-08-01

    Full Text Available Background: The relationship between policy networks and policy development among local health departments (LHDs) is a growing area of interest to public health practitioners and researchers alike. In this study, we examine policy activity and ties between public health leadership across large urban health departments. Methods: This study uses data from a national profile of local health departments as well as responses from a survey sent to three staff members (local health official, chief of policy, chief science officer) in each of 16 urban health departments in the United States. Network questions related to frequency of contact with health department personnel in other cities. Using exponential random graph models, network density and centrality were examined, as were patterns of communication among those working on several policy areas. Results: All 16 LHDs were active in communicating about chronic disease as well as about use of alcohol, tobacco, and other drugs (ATOD). Connectedness was highest among local health officials (density = .55), and slightly lower for chief science officers (d = .33) and chiefs of policy (d = .29). After accounting for organizational characteristics, policy homophily (i.e., when two network members match on a single characteristic) and tenure were the most significant predictors of formation of network ties. Conclusion: Networking across health departments has the potential for accelerating the adoption of public health policies. This study suggests similar policy interests and formation of connections among senior leadership can potentially drive greater connectedness among other staff.

  15. Socio-Cognitive Phenotypes Differentially Modulate Large-Scale Structural Covariance Networks.

    Science.gov (United States)

    Valk, Sofie L; Bernhardt, Boris C; Böckler, Anne; Trautwein, Fynn-Mathis; Kanske, Philipp; Singer, Tania

    2017-02-01

    Functional neuroimaging studies have suggested the existence of 2 largely distinct social cognition networks, one for theory of mind (taking others' cognitive perspective) and another for empathy (sharing others' affective states). To address whether these networks can also be dissociated at the level of brain structure, we combined behavioral phenotyping across multiple socio-cognitive tasks with 3-Tesla MRI cortical thickness and structural covariance analysis in 270 healthy adults, recruited across 2 sites. Regional thickness mapping only provided partial support for divergent substrates, highlighting that individual differences in empathy relate to left insular-opercular thickness while no correlation between thickness and mentalizing scores was found. Conversely, structural covariance analysis showed clearly divergent network modulations by socio-cognitive and -affective phenotypes. Specifically, individual differences in theory of mind related to structural integration between temporo-parietal and dorsomedial prefrontal regions while empathy modulated the strength of dorsal anterior insula networks. Findings were robust across both recruitment sites, suggesting generalizability. At the level of structural network embedding, our study provides a double dissociation between empathy and mentalizing. Moreover, our findings suggest that structural substrates of higher-order social cognition are reflected in interregional networks rather than in the local anatomical makeup of specific regions per se. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. A new traffic control design method for large networks with signalized intersections

    Science.gov (United States)

    Leininger, G. G.; Colony, D. C.; Seldner, K.

    1979-01-01

    The paper presents a traffic control design technique for application to large traffic networks with signalized intersections. It is shown that the design method adopts a macroscopic viewpoint to establish a new traffic modelling procedure in which vehicle platoons are subdivided into main stream queues and turning queues. Optimization of the signal splits minimizes queue lengths in the steady state condition and improves traffic flow conditions, from the viewpoint of the traveling public. Finally, an application of the design method to a traffic network with thirty-three signalized intersections is used to demonstrate the effectiveness of the proposed technique.

  17. A Hybrid Testbed for Performance Evaluation of Large-Scale Datacenter Networks

    DEFF Research Database (Denmark)

    Pilimon, Artur; Ruepp, Sarah Renée

    2018-01-01

    Datacenters (DC) as well as their network interconnects are growing in scale and complexity. They are constantly being challenged in terms of energy and resource utilization efficiency, scalability, availability, reliability and performance requirements. Therefore, these resource-intensive environments must be properly tested and analyzed in order to make timely upgrades and transformations. However, a limited number of academic institutions and Research and Development companies have access to production scale DC Network (DCN) testing facilities, and resource-limited studies can produce misleading or inaccurate results. To address this problem, we introduce an alternative solution, which forms a solid base for a more realistic and comprehensive performance evaluation of different aspects of DCNs. It is based on the System-in-the-loop (SITL) concept, where real commercial DCN equipment...

  18. Harnessing diversity towards the reconstructing of large scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Takeshi Hase

    Full Text Available Elucidating gene regulatory networks (GRNs) from large scale experimental data remains a central challenge in systems biology. Recently, numerous techniques, particularly consensus driven approaches combining different algorithms, have become a potentially promising strategy to infer accurate GRNs. Here, we develop a novel consensus inference algorithm, TopkNet, that can integrate multiple algorithms to infer GRNs. Comprehensive performance benchmarking on a cloud computing framework demonstrated that (i) a simple strategy to combine many algorithms does not always lead to performance improvement compared to the cost of consensus and (ii) TopkNet integrating only high-performance algorithms provides significant performance improvement compared to the best individual algorithms and community prediction. These results suggest that a priori determination of high-performance algorithms is a key to reconstructing an unknown regulatory network. Similarity among gene-expression datasets can be useful to determine potential optimal algorithms for reconstruction of unknown regulatory networks, i.e., if the expression data associated with a known regulatory network are similar to those associated with an unknown regulatory network, optimal algorithms determined for the known regulatory network can be repurposed to infer the unknown regulatory network. Based on this observation, we developed a quantitative measure of similarity among gene-expression datasets and demonstrated that, if similarity between the two expression datasets is high, TopkNet integrating algorithms that are optimal for the known dataset performs well on the unknown dataset. The consensus framework, TopkNet, together with the similarity measure proposed in this study provides a powerful strategy towards harnessing the wisdom of the crowds in reconstruction of unknown regulatory networks.
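
    The consensus idea described above, combining rankings from several inference algorithms and keeping only the strong performers, can be sketched in a few lines. The example below is a hedged illustration with made-up edge scores and an arbitrary choice of "top-k" algorithms; it is not the TopkNet implementation.

    ```python
    # Sketch of rank-averaging consensus over gene-network inference algorithms.
    # Edge scores and the selected top-k algorithms are illustrative only.
    import numpy as np

    edges = ["g1->g2", "g1->g3", "g2->g3", "g3->g1"]
    # Rows: candidate edges; columns: confidence scores from three algorithms.
    scores = np.array([
        [0.9, 0.7, 0.8],
        [0.2, 0.6, 0.1],
        [0.5, 0.4, 0.7],
        [0.1, 0.2, 0.3],
    ])

    # Suppose benchmarking on a similar, known network showed algorithms 0 and 2
    # perform best; only their rankings enter the consensus.
    top_k = [0, 2]
    ranks = scores[:, top_k].argsort(axis=0).argsort(axis=0)  # per-algorithm ranks
    consensus = ranks.mean(axis=1)                            # average rank per edge

    for edge, score in sorted(zip(edges, consensus), key=lambda x: -x[1]):
        print(edge, score)
    ```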

  19. Prospects and strategy for large scale utility applications of photovoltaic power systems

    International Nuclear Information System (INIS)

    Vigotti, R.; Lysen, E.; Cole, A.

    1996-01-01

    The status and prospects of photovoltaic (PV) power systems are reviewed. The market diffusion strategy for the application of PV systems by utilities is described, and the mission, objectives and thoughts of the collaboration programme launched among 18 industrialized countries under the framework of the International Energy Agency are highlighted, with particular reference to technology transfer to developing countries. Future sales of PV systems are expected to grow in the short and medium term mainly in the sector of isolated systems. (R.P.)

  20. Timetable-based simulation method for choice set generation in large-scale public transport networks

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær; Anderson, Marie Karen; Nielsen, Otto Anker

    2016-01-01

    The composition and size of the choice sets are key to the correct estimation of and prediction by route choice models. While existing literature has paid a great deal of attention towards the generation of path choice sets for private transport problems, the same does not apply to public transport problems. This study proposes a timetable-based simulation method for generating path choice sets in a multimodal public transport network. Moreover, this study illustrates the feasibility of its implementation by applying the method to reproduce 5131 real-life trips in the Greater Copenhagen Area and to assess the choice set quality in a complex multimodal transport network. Results illustrate the applicability of the algorithm and the relevance of the utility specification chosen for the reproduction of real-life path choices. Moreover, results show that the level of stochasticity used in choice set...

  1. Load reduction test method of similarity theory and BP neural networks of large cranes

    Science.gov (United States)

    Yang, Ruigang; Duan, Zhibin; Lu, Yi; Wang, Lei; Xu, Gening

    2016-01-01

    Static load tests are an important means of supervising and verifying a crane's lift capacity. Due to space restrictions, however, there are difficulties and potential danger when testing large bridge cranes. To solve the loading problems of large-tonnage cranes during testing, an equivalency test is proposed based on similarity theory and BP neural networks. The maximum stress and displacement of a large bridge crane are measured under small loads and combined with a neural network trained on stress and displacement data for a crane of similar structure, collected from a physics simulation that is progressively loaded up to the static load test load within the material's working range. The maximum stress and displacement of a crane under a static load test load can then be predicted through the relationship between stress, displacement, and load. By measuring the stress and displacement under small-tonnage weights, the stress and displacement under large loads can be predicted, such as at the maximum test load, which is 1.25 times the rated capacity. Experimental study shows that the load reduction test method can reflect the lift capacity of large bridge cranes. The load shedding predictive analysis for the Sanxia 1200 t bridge crane test data indicates that when the load is 1.25 times the rated lifting capacity, the error between predicted and actual displacement is zero. The method solves the problem that lifting capacities are difficult to obtain and that testing accidents can easily occur when loads of 1.25 times the rated weight are tested on large-tonnage cranes.
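
    The prediction step, fitting a small feed-forward (BP-style) network to measurements taken at reduced loads and extrapolating to 1.25 times the rated load, can be sketched as a simple regression. The figures below are invented placeholders, not the crane test data, and scikit-learn's MLPRegressor stands in for the paper's BP network.

    ```python
    # Sketch: fit a small feed-forward network to displacement measured at low
    # loads, then predict displacement at 1.25x rated load (illustrative data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    load = np.array([[0.2], [0.4], [0.6], [0.8], [1.0]])   # fraction of rated load
    displacement = np.array([2.1, 4.0, 6.2, 8.1, 10.3])     # mm, simulated values

    model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                         max_iter=5000, random_state=0)
    model.fit(load, displacement)

    print("predicted displacement at 1.25x rated load:",
          round(float(model.predict([[1.25]])[0]), 2), "mm")
    ```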

  2. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    Science.gov (United States)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground because of the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
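
    The core processing step, mosaicking images from several imagers and low-pass filtering the mosaic so that only large-scale structure survives, can be illustrated with a Gaussian filter. This is a hedged sketch on synthetic data; the array below stands in for a real assembled airglow mosaic, and the 50 km smoothing scale is an arbitrary illustrative choice.

    ```python
    # Sketch: low-pass filter an assembled "mosaic" so only long-wavelength
    # structure remains (synthetic data; not the airglow observations).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1800, 900)    # km, east-west extent of the imager network
    y = np.linspace(0, 1400, 700)    # km, north-south extent
    X, Y = np.meshgrid(x, y)

    # Pretend mosaic: a long-wavelength wave plus short-wavelength noise.
    mosaic = np.sin(2 * np.pi * X / 400) + 0.5 * rng.standard_normal(X.shape)

    # Gaussian low-pass filter; a sigma of ~50 km suppresses small-scale ripples.
    pixel_km = x[1] - x[0]
    smoothed = gaussian_filter(mosaic, sigma=50.0 / pixel_km)
    print("mosaic shape:", smoothed.shape)
    ```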

  3. Modular-multiplex or single large power plants-advantages and disadvantages for utility systems

    International Nuclear Information System (INIS)

    Endicott, R.D.

    1986-01-01

    A question of growing interest in the fusion community is what size and type of reactor configuration will lead to the most economical and attractive fusion power plant. There are two sides to this question. One involves how to build the most economical and attractive fusion reactor. This question, which requires evaluation of reactor components within the reactor system, is being examined at the Fusion Engineering Design Center (FEDC) and elsewhere. The other side involves examining the issues associated with the most economical size and configuration of reactor to use. This question requires the evaluation of the changes in cost of service due to different size and configuration reactors on a utility system. The authors' objective was to explore the advantages and disadvantages of using modular-multiplex power plants and to illustrate a means of quantifying the tradeoffs. The effort resulted in the identification of the key parameters involved in selecting the optimum size plant for a utility system and a better understanding of the tradeoffs that are possible. This paper discusses this effort in detail.

  4. Aeroelastic analysis of an offshore wind turbine: Design and Fatigue Performance of Large Utility-Scale Wind Turbine Blades

    OpenAIRE

    Fossum, Peter Kalsaas

    2012-01-01

    Aeroelastic design and fatigue analysis of large utility-scale wind turbine blades are performed. The applied fatigue model is based on established methods and is incorporated in an iterative numerical design tool for realistic wind turbine blades. All aerodynamic and structural design properties are available in literature. The software tool FAST is used for advanced aero-servo-elastic load calculations and stress-histories are calculated with elementary beam theory. According to wind energy ...

  5. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    Full Text Available In the smart grid, large consumers can procure electricity energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide its energy procurement strategy under risks such as price fluctuations from the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, generation companies under bilateral contracts, the options market and self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. In order to measure the risks from the price fluctuations and power quality, the expected utility and entropy are employed. Consequently, the expected utility and entropy decision-making model is presented, which helps large consumers to minimize their expected cost of electricity procurement while properly limiting the volatility of this cost. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
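
    The screening rule implied above, scoring each procurement strategy by its expected utility while tracking the entropy of its outcome distribution, can be sketched numerically. The profit scenarios, probabilities, and exponential utility function below are illustrative assumptions, not the paper's formulation.

    ```python
    # Sketch: expected utility and entropy of candidate procurement strategies
    # (toy scenarios; the utility function choice is an assumption).
    import numpy as np

    def expected_utility(profits, probs, risk_aversion=1e-3):
        # Exponential utility, one common risk-averse choice.
        return float(np.sum(probs * (1 - np.exp(-risk_aversion * profits))))

    def entropy(probs):
        p = probs[probs > 0]
        return float(-np.sum(p * np.log(p)))

    strategies = {
        "spot-heavy":     (np.array([5000.0, 1000.0, -2000.0]), np.array([0.5, 0.3, 0.2])),
        "contract-heavy": (np.array([3000.0, 2500.0, 2000.0]),  np.array([0.4, 0.4, 0.2])),
    }

    for name, (profits, probs) in strategies.items():
        print(name, "EU =", round(expected_utility(profits, probs), 4),
              "entropy =", round(entropy(probs), 4))
    # A decision rule might prefer the highest expected utility subject to an entropy cap.
    ```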

  6. Large-scale simulations of plastic neural networks on neuromorphic hardware

    Directory of Open Access Journals (Sweden)

    James Courtney Knight

    2016-04-01

    Full Text Available SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.

  7. Forecasting distributions of large federal-lands fires utilizing satellite and gridded weather information

    Science.gov (United States)

    H.K. Preisler; R.E. Burgan; J.C. Eidenshink; J.M. Klaver; R.W. Klaver

    2009-01-01

    The current study presents a statistical model for assessing the skill of fire danger indices and for forecasting the distribution of the expected numbers of large fires over a given region and for the upcoming week. The procedure permits development of daily maps that forecast, for the forthcoming week and within federal lands, percentiles of the distributions of (i)...

  8. Identifying the Critical Links in Road Transportation Networks: Centrality-based approach utilizing structural properties

    Energy Technology Data Exchange (ETDEWEB)

    Chinthavali, Supriya [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-04-01

    Surface transportation road networks share structural properties similar to other complex networks (e.g., social networks, information networks, biological networks, and so on). This research investigates the structural properties of road networks for any possible correlation with traffic characteristics such as link flows that are determined independently. Additionally, we define a criticality index for the links of the road network that identifies their relative importance in the network. We tested our hypotheses with two sample road networks. Results show that correlation exists between link flows and the centrality measures of road links (a dual graph approach is followed), and that the criticality index is effective for one test network in identifying the vulnerable nodes.
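
    The dual-graph centrality idea mentioned above can be illustrated with the line graph of a road network, in which each road segment becomes a node and betweenness centrality flags the segments most traversed by shortest paths. This is a hedged sketch on a toy grid network using networkx, not the report's test networks or its criticality index definition.

    ```python
    # Sketch: betweenness centrality on the dual (line) graph of a toy road network.
    import networkx as nx

    road = nx.grid_2d_graph(4, 4)      # intersections and segments of a toy grid
    dual = nx.line_graph(road)         # dual graph: one node per road segment

    betweenness = nx.betweenness_centrality(dual)
    for link in sorted(betweenness, key=betweenness.get, reverse=True)[:3]:
        print("high-betweenness segment:", link, round(betweenness[link], 3))
    ```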

  9. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems.

    Science.gov (United States)

    Shen, Lili; Guo, Jiming; Wang, Lei

    2018-06-06

    The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
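
    The clustering step described above, grouping online users so that each cluster stays within a workable radius, can be illustrated with a simple greedy radius-threshold assignment. This is a stand-in sketch, not the SOSC algorithm; the user positions are synthetic and the 10 km radius mirrors the threshold reported in the study.

    ```python
    # Sketch: greedy radius-limited clustering of user positions (illustrative only).
    import numpy as np

    rng = np.random.default_rng(1)
    users = rng.uniform(0, 100, size=(200, 2))   # 200 users in a 100 km x 100 km area
    radius_km = 10.0

    centers, assignment = [], np.full(200, -1)
    for i, u in enumerate(users):
        for c_idx, c in enumerate(centers):
            if np.linalg.norm(u - c) <= radius_km:
                assignment[i] = c_idx
                break
        if assignment[i] == -1:          # no nearby cluster: start a new one
            centers.append(u)
            assignment[i] = len(centers) - 1

    print("number of clusters:", len(centers))
    ```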

  10. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  11. Large-Scale Brain Network Coupling Predicts Total Sleep Deprivation Effects on Cognitive Capacity.

    Directory of Open Access Journals (Sweden)

    Yu Lei

    Full Text Available Interactions between large-scale brain networks have received most attention in the study of cognitive dysfunction of human brain. In this paper, we aimed to test the hypothesis that the coupling strength of large-scale brain networks will reflect the pressure for sleep and will predict cognitive performance, referred to as sleep pressure index (SPI). Fourteen healthy subjects underwent this within-subject functional magnetic resonance imaging (fMRI) study during rested wakefulness (RW) and after 36 h of total sleep deprivation (TSD). Self-reported scores of sleepiness were higher for TSD than for RW. A subsequent working memory (WM) task showed that WM performance was lower after 36 h of TSD. Moreover, SPI was developed based on the coupling strength of salience network (SN) and default mode network (DMN). Significant increase of SPI was observed after 36 h of TSD, suggesting stronger pressure for sleep. In addition, SPI was significantly correlated with both the visual analogue scale score of sleepiness and the WM performance. These results showed that alterations in SN-DMN coupling might be critical in cognitive alterations that underlie the lapse after TSD. Further studies may validate the SPI as a potential clinical biomarker to assess the impact of sleep deprivation.

  12. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  13. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.

  14. APPLICATION OF UKRAINIAN GRID INFRASTRUCTURE FOR INVESTIGATION OF NONLINEAR DYNAMICS IN LARGE NEURONAL NETWORKS

    Directory of Open Access Journals (Sweden)

    O. О. Sudakov

    2015-12-01

    Full Text Available In the present work the Ukrainian National Grid (UNG) infrastructure was applied to the investigation of synchronization in large networks of interacting neurons. This application is important for solving modern neuroscience problems related to mechanisms of nervous system activities (memory, cognition, etc.) and nervous pathologies (epilepsy, Parkinsonism, etc.). Modern non-linear dynamics theories and applications provide a powerful basis for computer simulations of biological neuronal networks and investigation of phenomena whose mechanisms could hardly be clarified by other approaches. A cubic millimeter of brain tissue contains about 10^5 neurons, so realistic (Hodgkin-Huxley) and phenomenological (Kuramoto-Sakaguchi, FitzHugh-Nagumo, etc.) model simulations require consideration of large numbers of neurons.
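
    As an illustration of the phenomenological models named above, a Kuramoto-type phase-oscillator simulation of synchronization can be written in a few lines. This is a minimal single-machine sketch with illustrative parameters; the grid-scale simulations in the record obviously involve far more detail.

    ```python
    # Sketch: mean-field Kuramoto model of N coupled phase oscillators.
    import numpy as np

    rng = np.random.default_rng(0)
    N, K, dt, steps = 1000, 2.0, 0.01, 2000
    omega = rng.normal(0.0, 1.0, N)           # natural frequencies
    theta = rng.uniform(0.0, 2 * np.pi, N)    # initial phases

    for _ in range(steps):
        r = np.mean(np.exp(1j * theta))       # complex order parameter
        theta += dt * (omega + K * np.abs(r) * np.sin(np.angle(r) - theta))

    print("synchronization |r| =", round(abs(np.mean(np.exp(1j * theta))), 3))
    ```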

  15. Exact computation and large angular momentum asymptotics of 3nj symbols: Semiclassical disentangling of spin networks

    International Nuclear Information System (INIS)

    Anderson, Roger W.; Aquilanti, Vincenzo; Silva Ferreira, Cristiane da

    2008-01-01

    Spin networks, namely, the 3nj symbols of quantum angular momentum theory and their generalizations to groups other than SU(2) and to quantum groups, permeate many areas of pure and applied science. The issues of their computation and characterization for large values of their entries are a challenge for diverse fields, such as spectroscopy and quantum chemistry, molecular and condensed matter physics, quantum computing, and the geometry of space time. Here we record progress both in their efficient calculation and in the study of the large j asymptotics. For the 9j symbol, a prototypical entangled network, we present and extensively check numerically formulas that illustrate the passage to the semiclassical limit, manifesting both the occurrence of disentangling and the discrete-continuum transition.
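
    For small angular momenta, the 9j symbol discussed above can be evaluated exactly with an existing computer algebra routine, which is useful for checking asymptotic formulas. A minimal sketch using SymPy's wigner_9j (small illustrative arguments; the record's own methods target much larger entries):

    ```python
    # Sketch: exact evaluation of a 9j symbol for small arguments with SymPy.
    from sympy.physics.wigner import wigner_9j

    val = wigner_9j(1, 1, 1, 1, 1, 1, 1, 1, 1)
    print("9j{1 1 1; 1 1 1; 1 1 1} =", val)
    ```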

  16. Application of neural networks to software quality modeling of a very large telecommunications system.

    Science.gov (United States)

    Khoshgoftaar, T M; Allen, E B; Hudepohl, J P; Aud, S J

    1997-01-01

    Society relies on telecommunications to such an extent that telecommunications software must have high reliability. Enhanced measurement for early risk assessment of latent defects (EMERALD) is a joint project of Nortel and Bell Canada for improving the reliability of telecommunications software products. This paper reports a case study of neural-network modeling techniques developed for the EMERALD system. The resulting neural network is currently in the prototype testing phase at Nortel. Neural-network models can be used to identify fault-prone modules for extra attention early in development, and thus reduce the risk of operational problems with those modules. We modeled a subset of modules representing over seven million lines of code from a very large telecommunications software system. The set consisted of those modules reused with changes from the previous release. The dependent variable was membership in the class of fault-prone modules. The independent variables were principal components of nine measures of software design attributes. We compared the neural-network model with a nonparametric discriminant model and found the neural-network model had better predictive accuracy.
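
    The modeling pipeline described above, principal components of design measures feeding a neural-network classifier of fault-prone modules, can be sketched with standard tooling. The data below are synthetic and scikit-learn stands in for the original modeling environment; this is an illustration of the idea, not the EMERALD model.

    ```python
    # Sketch: PCA on nine design metrics, then a small neural-net classifier
    # for fault-prone modules (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 9))    # nine software design-attribute measures
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 1).astype(int)

    model = make_pipeline(
        PCA(n_components=4),
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    )
    model.fit(X, y)
    print("training accuracy:", round(model.score(X, y), 3))
    ```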

  17. Distributed and Cooperative Link Scheduling for Large-Scale Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Swami Ananthram

    2007-01-01

    Full Text Available A distributed and cooperative link-scheduling (DCLS) algorithm is introduced for large-scale multihop wireless networks. With this algorithm, each and every active link in the network cooperatively calibrates its environment and converges to a desired link schedule for data transmissions within a time frame of multiple slots. This schedule is such that the entire network is partitioned into a set of interleaved subnetworks, where each subnetwork consists of concurrent cochannel links that are properly separated from each other. The desired spacing in each subnetwork can be controlled by a tuning parameter and the number of time slots specified for each frame. Following the DCLS algorithm, a distributed and cooperative power control (DCPC) algorithm can be applied to each subnetwork to ensure a desired data rate for each link with minimum network transmission power. As shown consistently by simulations, the DCLS algorithm along with a DCPC algorithm yields significant power savings. The power savings also imply an increased feasible region of averaged link data rates for the entire network.

  18. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    Science.gov (United States)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  19. Distributed and Cooperative Link Scheduling for Large-Scale Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Ananthram Swami

    2007-12-01

    Full Text Available A distributed and cooperative link-scheduling (DCLS) algorithm is introduced for large-scale multihop wireless networks. With this algorithm, each and every active link in the network cooperatively calibrates its environment and converges to a desired link schedule for data transmissions within a time frame of multiple slots. This schedule is such that the entire network is partitioned into a set of interleaved subnetworks, where each subnetwork consists of concurrent cochannel links that are properly separated from each other. The desired spacing in each subnetwork can be controlled by a tuning parameter and the number of time slots specified for each frame. Following the DCLS algorithm, a distributed and cooperative power control (DCPC) algorithm can be applied to each subnetwork to ensure a desired data rate for each link with minimum network transmission power. As shown consistently by simulations, the DCLS algorithm along with a DCPC algorithm yields significant power savings. The power savings also imply an increased feasible region of averaged link data rates for the entire network.

  20. Social media networking in pediatric hydrocephalus: a point-prevalence analysis of utilization.

    Science.gov (United States)

    Elkarim, Ghassan Awad; Alotaibi, Naif M; Samuel, Nardin; Wang, Shelly; Ibrahim, George M; Fallah, Aria; Weil, Alexander G; Kulkarni, Abhaya V

    2017-08-01

    OBJECTIVE A recent survey has shown that caregivers of children with shunt-treated hydrocephalus frequently use social media networks for support and information gathering. The objective of this study is to describe and assess social media utilization among users interested in hydrocephalus. METHODS Publicly accessible accounts and videos dedicated to the topic of hydrocephalus were comprehensively searched across 3 social media platforms (Facebook, Twitter, and YouTube) throughout March 2016. Summary statistics were calculated on standard metrics of social media popularity. A categorization framework to describe the purpose of pages, groups, accounts, channels, and videos was developed following the screening of 100 titles. Categorized data were analyzed using nonparametric tests for statistical significance. RESULTS The authors' search identified 30 Facebook pages, 213 Facebook groups, 17 Twitter accounts, and 253 YouTube videos. These platforms were run by patients, caregivers, nonprofit foundations, and patient support groups. Most accounts were from the United States (n = 196), followed by the United Kingdom (n = 31), Canada (n = 17), India (n = 15), and Germany (n = 12). The earliest accounts were created in 2007, and a peak of 65 new accounts were created in 2011. The total number of users in Facebook pages exceeded those in Facebook groups (p social media use in the topic of hydrocephalus. Users interested in hydrocephalus seek privacy for support communications and are attracted to treatment procedure and surgical products videos. These findings provide insight into potential avenues of hydrocephalus outreach, support, or advocacy in social media.

  1. Utilization of peat procurement network for purchase of energy wood. Subproject

    International Nuclear Information System (INIS)

    Kiukaanniemi, E.; Tervo, M.

    1998-01-01

    The objective of the project is to investigate and develop energy wood procurement to the mire terminals for production of mixed fuels, carried out by peat contractors and forest machine entrepreneurs. The investigation of the costs of the chips produced for mixed fuels, the variation in these costs, and the possibilities to reduce them forms the main part of the project. The duration of the project is two years, and it started in the summer of 1997. Procurement of energy wood by forest machine and peat entrepreneurs to the bog terminals, for production of mixed fuels alongside peat, will be studied in the project both experimentally and computationally. The utilization of the peat procurement network for energy wood procurement will be the main focus. Costs and the harvesting logistics will be estimated using the software developed in the research. The project is divided into five sub-tasks: (1) survey of the contractor and machine needs of the experimental work; (2) selection of entrepreneurs and harvesting sites; (3) practical harvesting experiments; (4) development of the cost calculation software; (5) analysis and reporting of the results.

  2. Tradeoffs between quality-of-control and quality-of-service in large-scale nonlinear networked control systems

    NARCIS (Netherlands)

    Borgers, D. P.; Geiselhart, R.; Heemels, W. P. M. H.

    2017-01-01

    In this paper we study input-to-state stability (ISS) of large-scale networked control systems (NCSs) in which sensors, controllers and actuators are connected via multiple (local) communication networks which operate asynchronously and independently of each other. We model the large-scale NCS as an

  3. Memory Transmission in Small Groups and Large Networks: An Agent-Based Model.

    Science.gov (United States)

    Luhmann, Christian C; Rajaram, Suparna

    2015-12-01

    The spread of social influence in large social networks has long been an interest of social scientists. In the domain of memory, collaborative memory experiments have illuminated cognitive mechanisms that allow information to be transmitted between interacting individuals, but these experiments have focused on small-scale social contexts. In the current study, we took a computational approach, circumventing the practical constraints of laboratory paradigms and providing novel results at scales unreachable by laboratory methodologies. Our model embodied theoretical knowledge derived from small-group experiments and replicated foundational results regarding collaborative inhibition and memory convergence in small groups. Ultimately, we investigated large-scale, realistic social networks and found that agents are influenced by the agents with which they interact, but we also found that agents are influenced by nonneighbors (i.e., the neighbors of their neighbors). The similarity between these results and the reports of behavioral transmission in large networks offers a major theoretical insight by linking behavioral transmission to the spread of information. © The Author(s) 2015.

  4. A hydrogeomorphic river network model predicts where and why hyporheic exchange is important in large basins

    Science.gov (United States)

    Gomez-Velez, Jesus D.; Harvey, Judson W.

    2014-09-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.

  5. Asymptotic Analysis of Large Cooperative Relay Networks Using Random Matrix Theory

    Directory of Open Access Journals (Sweden)

    H. Poor

    2008-04-01

    Full Text Available Cooperative transmission is an emerging communication technology that takes advantage of the broadcast nature of wireless channels. In cooperative transmission, the use of relays can create a virtual antenna array so that multiple-input/multiple-output (MIMO) techniques can be employed. Most existing work in this area has focused on the situation in which there are a small number of sources and relays and a destination. In this paper, cooperative relay networks with large numbers of nodes are analyzed, and in particular the asymptotic performance improvement of cooperative transmission over direct transmission and relay transmission is analyzed using random matrix theory. The key idea is to investigate the eigenvalue distributions related to channel capacity and to analyze the moments of this distribution in large wireless networks. A performance upper bound is derived, the performance in the low signal-to-noise-ratio regime is analyzed, and two approximations are obtained for high and low relay-to-destination link qualities, respectively. Finally, simulations are provided to validate the accuracy of the analytical results. The analysis in this paper provides important tools for the understanding and the design of large cooperative wireless networks.
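
    The "eigenvalue distribution" viewpoint mentioned above can be made concrete with a small numerical experiment: draw a random channel matrix, compute the eigenvalues of H H*, and evaluate a capacity-like quantity from them. This is a hedged illustration with arbitrary dimensions and SNR, not the paper's asymptotic analysis.

    ```python
    # Sketch: empirical eigenvalue spectrum of H H^H for a random channel matrix
    # and a capacity-style functional of it (illustrative parameters).
    import numpy as np

    rng = np.random.default_rng(0)
    n_rx, n_relays = 100, 200
    H = (rng.standard_normal((n_rx, n_relays)) +
         1j * rng.standard_normal((n_rx, n_relays))) / np.sqrt(2 * n_relays)

    eigvals = np.linalg.eigvalsh(H @ H.conj().T)   # Hermitian, real eigenvalues
    snr = 10.0
    capacity_per_antenna = np.mean(np.log2(1 + snr * eigvals))
    print("average capacity per receive antenna:",
          round(float(capacity_per_antenna), 3), "bit/s/Hz")
    ```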

  6. A hydrogeomorphic river network model predicts where and why hyporheic exchange is important in large basins

    Science.gov (United States)

    Gomez-Velez, Jesus D.; Harvey, Judson

    2014-01-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data and by models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bed forms rather than lateral exchange through meanders dominates hyporheic fluxes and turnover rates along river corridors. Per kilometer, low-order streams have a biogeochemical potential at least 2 orders of magnitude larger than higher-order streams. However, when biogeochemical potential is examined per average length of each stream order, low- and high-order streams were often found to be comparable. As a result, the hyporheic zone's intrinsic potential for biogeochemical transformations is comparable across different stream orders, but the greater river miles and larger total streambed area of lower order streams result in the highest cumulative impact from low-order streams. Lateral exchange through meander banks may be important in some cases but generally only in large rivers.

  7. A robust and high-performance queue management controller for large round trip time networks

    Science.gov (United States)

    Khoshnevisan, Ladan; Salmasi, Farzad R.

    2016-05-01

    Congestion management for transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages which lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip time and parameter variations in the queue management. Conventional approaches such as proportional integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, the internal model control-Smith scheme suffers from large oscillations due to the large round trip time. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.

  8. The Value of Sustainable Knowledge Transfer Methods for SMEs, Utilizing Socio-Technical Networks and Complex Systems

    Directory of Open Access Journals (Sweden)

    Susu Nousala

    2010-12-01

    Full Text Available This paper will examine the development of sustainable SME methods for tracking tacit (informal) knowledge transfer as a series of networks within a larger complex system. Understanding sustainable systems begins with valuing tacit knowledge networks and their ability to produce connections on multiple levels. The behaviour of the social or socio aspects of a system in relation to the explicit formal/physical structures needs to be understood and actively considered when utilizing methodologies for interacting within complex systems structures. This paper utilizes theory from several previous studies to underpin the key case study discussed. This approach involved examining the behavioural phenomena of an SME knowledge network. The knowledge network elements were highlighted to identify their value within an SME structure. To understand the value of these emergent elements between tacit and explicit knowledge networks is to actively, simultaneously and continuously support sustainable development for SME organizations. The simultaneous links within and between groups of organizations are crucial for understanding sustainable networking structures of complex systems.

  9. Large-scale changes in network interactions as a physiological signature of spatial neglect.

    Science.gov (United States)

    Baldassarre, Antonello; Ramsey, Lenny; Hacker, Carl L; Callejas, Alicia; Astafiev, Serguei V; Metcalf, Nicholas V; Zinn, Kristi; Rengachary, Jennifer; Snyder, Abraham Z; Carter, Alex R; Shulman, Gordon L; Corbetta, Maurizio

    2014-12-01

    The relationship between spontaneous brain activity and behaviour following focal injury is not well understood. Here, we report a large-scale study of resting state functional connectivity MRI and spatial neglect following stroke in a large (n=84) heterogeneous sample of first-ever stroke patients (within 1-2 weeks). Spatial neglect, which is typically more severe after right than left hemisphere injury, includes deficits of spatial attention and motor actions contralateral to the lesion, and low general attention due to impaired vigilance/arousal. Patients underwent structural and resting state functional MRI scans, and spatial neglect was measured using the Posner spatial cueing task, and Mesulam and Behavioural Inattention Test cancellation tests. A principal component analysis of the behavioural tests revealed a main factor accounting for 34% of variance that captured three correlated behavioural deficits: visual neglect of the contralesional visual field, visuomotor neglect of the contralesional field, and low overall performance. In an independent sample (21 healthy subjects), we defined 10 resting state networks consisting of 169 brain regions: visual-fovea and visual-periphery, sensory-motor, auditory, dorsal attention, ventral attention, language, fronto-parietal control, cingulo-opercular control, and default mode. We correlated the neglect factor score with the strength of resting state functional connectivity within and across the 10 resting state networks. All damaged brain voxels were removed from the functional connectivity:behaviour correlational analysis. We found that the correlated behavioural deficits summarized by the factor score were associated with correlated multi-network patterns of abnormal functional connectivity involving large swaths of cortex. Specifically, dorsal attention and sensory-motor networks showed: (i) reduced interhemispheric functional connectivity; (ii) reduced anti-correlation with fronto-parietal and default mode

  10. Constant load supports attenuating shocks and vibrations for networks of pipes submitted to large thermal dilatation

    International Nuclear Information System (INIS)

    Prisecaru, Ilie; Panait, Adrian; Serban, Viorel; Ciocan, George; Androne, Marian; Florea, Ioana; State, Elena

    2004-01-01

    Full text: To avoid some drawbacks of the classical supports currently employed in networks of pipes, a new type of constant load support was conceived, designed, built and experimentally tested that largely attenuates the shocks and vibrations in networks of pipes subjected to large thermal dilatation. These supports are particularly needed for solving the severe vibration problems in networks of pipes in thermoelectric stations, nuclear power plants, or heavy water production plants. These supports allow building networks of new types that are more reliable and of lower cost. The new type of support was developed on the basis of a number of patents protected by OSIM. It has a simple structure, ensures secure functioning without blocking or other kinds of failures, and is resistant to a very large variety of stresses. The new type of constant load support avoids the drawbacks of classical supports, i.e., its stress/deformation diagram is practically independent of stress level. The characteristic of the support is geometrically non-linear and presents a plateau with a small slope over a rather large deformation range, which results from a serially mounted structure of sandwiches whose deformation is controlled by a system of deforming central and peripheral pieces. The new constant load supports, called SERB-PIPE, present a controlled elasticity and a high degree of damping, as the package of elastic blades (the sandwich structure) is made of two sub-packages with relative movements which ensure the attenuation of the shocks and vibrations produced by the fluid flow within the pipes and/or by seismic motions. By contrast with classical supports, the new supports have a simple structure and high reliability. Breakdown under stress leading to severe changes in the stress distribution in pipe networks, which could generate overloads in pipes and over-loading of other supports, cannot occur. One can also mention that these supports can be built in a

  11. Large-scale fabrication and utilization of novel hexagonal/turbostratic composite boron nitride nanosheets

    KAUST Repository

    Zhong, Bo

    2017-02-15

    In this report, we have developed a scalable approach to massive synthesis of hexagonal/turbostratic composite boron nitride nanosheets (h/t-BNNSs). The strikingly effective, reliable, and high-throughput (grams) synthesis is performed via a facile chemical foaming process at 1400°C utilizing ammonia borane (AB) as precursor. The characterization results demonstrate that high-quality h/t-BNNSs with lateral sizes of tens of micrometers and thicknesses of tens of nanometers are obtained. The growth mechanism of h/t-BNNSs is also discussed based on the thermogravimetric analysis of AB, which clearly shows a two-step weight loss. The h/t-BNNSs are further used for making thermoconductive h/t-BNNSs/epoxy resin composites. The thermal conductivity of the composites is markedly improved due to the introduction of h/t-BNNSs. Considering the unique properties of boron nitride, these novel h/t-BNNSs are envisaged to be very valuable for future high-performance polymer-based material fabrication.

  12. Utilizing Electric Vehicles to Assist Integration of Large Penetrations of Distributed Photovoltaic Generation Capacity

    Energy Technology Data Exchange (ETDEWEB)

    Tuffner, Francis K.; Chassin, Forrest S.; Kintner-Meyer, Michael CW; Gowri, Krishnan

    2012-11-30

    Executive Summary Introduction and Motivation This analysis provides the first insights into the leveraging potential of rooftop distributed photovoltaic (PV) technologies and electric vehicle (EV) charging. Either of the two technologies by itself, at some high penetration, may cause voltage control challenges or overloading problems, respectively. But when combined, there could, at least intuitively, be synergistic effects, whereby one technology mitigates the negative impacts of the other. High penetration of EV charging may overload existing distribution system components, most prominently the secondary transformer. If PV technology is installed at residential premises or anywhere downstream of the secondary transformer, it will provide another electricity source, thus relieving the loading on the transformers. Another synergetic or mitigating effect could be envisioned when high PV penetration reverses the power flow in the distribution system (from the homes upstream into the distribution system). Protection schemes may then no longer work and voltage violations (exceeding the upper limit of the ANSI voltage range) may occur. In this particular situation, EV charging could absorb the electricity from the PV, such that the reversal of power flow can be reduced or alleviated. Given these potential mutual synergistic behaviors of PV and EV technologies, this project attempted to quantify the benefits of combining the two technologies. Furthermore, of interest was how advanced EV control strategies may influence the outcome of the synergy between EV charging and distributed PV installations. Particularly, Californian utility companies with high penetration of the distributed PV technology, who have experienced voltage control problems, are interested in how intelligent EV charging could support or affect the voltage control

  13. Effects of Data Replication on Data Exfiltration in Mobile Ad Hoc Networks Utilizing Reactive Protocols

    Science.gov (United States)

    2015-03-01

    ...ad hoc networks in mobile ad hoc networks (MANETs), vehicular ad hoc networks (VANETs), and flying ad hoc networks (FANETs). Much of the research...metric, such as capacity, congestion, power, or combinations thereof. Caro refers to two different types of ants, FANTs and BANTs which are analogous to

  14. An Examination of Research Collaboration in Psychometrics Utilizing Social Network Analysis Methods

    Science.gov (United States)

    DiCrecchio, Nicole C.

    2016-01-01

    Co-authorship networks have been studied in many fields as a way to understand collaboration patterns. However, a comprehensive exploration of the psychometrics field has not been conducted. Also, few studies on co-author networks have included longitudinal analyses as well as data on the characteristics of authors in the network. Including both…

  15. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    Science.gov (United States)

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or are not available for large-scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673
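
    The basic ingredient described above, computing betweenness and letting high-betweenness nodes anchor the layout, can be sketched with networkx. This is a hedged illustration of the idea, not the BFL implementation: it simply fixes the top-betweenness hubs at seed positions and lets a force-directed layout arrange the rest.

    ```python
    # Sketch: betweenness-anchored layout (illustration, not the BFL algorithm).
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 2, seed=0)
    b = nx.betweenness_centrality(G)

    hubs = sorted(b, key=b.get, reverse=True)[:5]        # highest-betweenness nodes
    init = nx.circular_layout(G.subgraph(hubs))          # seed positions for hubs
    pos = nx.spring_layout(G, pos=init, fixed=hubs, seed=0)

    print({h: pos[h].round(2) for h in hubs})
    ```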

  16. Cardinality Estimation Algorithm in Large-Scale Anonymous Wireless Sensor Networks

    KAUST Repository

    Douik, Ahmed

    2017-08-30

    Consider a large-scale anonymous wireless sensor network with unknown cardinality. In such graphs, each node has no information about the network topology and only possesses a unique identifier. This paper introduces a novel distributed algorithm for cardinality estimation and topology discovery, i.e., estimating the number of nodes and the structure of the graph, by querying a small number of nodes and performing statistical inference methods. While the cardinality estimation allows the design of more efficient coding schemes for the network, the topology discovery provides a reliable way for routing packets. The proposed algorithm is shown to produce a cardinality estimate proportional to the best linear unbiased estimator for dense graphs and specific running times. Simulation results attest to the theoretical results and reveal that, for a reasonable running time, querying a small group of nodes is sufficient to perform an estimation of 95% of the whole network. Applications of this work include estimating the number of Internet of Things (IoT) sensor devices, online social users, active protein cells, etc.

  17. A large scale code resolution service network in the Internet of Things.

    Science.gov (United States)

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-11-07

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  18. A Large Scale Code Resolution Service Network in the Internet of Things

    Directory of Open Access Journals (Sweden)

    Xiangzhan Yu

    2012-11-01

    Full Text Available In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT’s advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.

  19. A Large Scale Code Resolution Service Network in the Internet of Things

    Science.gov (United States)

    Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan

    2012-01-01

    In the Internet of Things a code resolution service provides a discovery mechanism for a requester to obtain the information resources associated with a particular product code immediately. In large scale application scenarios a code resolution service faces some serious issues involving heterogeneity, big data and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services and a SkipNet based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages, but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. Analysis shows that integrating SkipNet-OCRS into our resolution service network can meet the proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS. PMID:23202207

  20. Convolutional neural networks for transient candidate vetting in large-scale surveys

    Science.gov (United States)

    Gieseke, Fabian; Bloemen, Steven; van den Bogaard, Cas; Heskes, Tom; Kindler, Jonas; Scalzo, Richard A.; Ribeiro, Valério A. R. M.; van Roestel, Jan; Groot, Paul J.; Yuan, Fang; Möller, Anais; Tucker, Brad E.

    2017-12-01

    Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional pre-processing steps - eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3 per cent of all 'real' and 99.7 per cent of all 'bogus' instances on a test set containing 1942 'bogus' and 227 'real' instances in total. Furthermore, the networks considered in this work can also successfully classify these objects at hand without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.
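
    To make the convolutional approach concrete, the following is a minimal, hedged PyTorch sketch of a small real/bogus classifier operating on reference-subtracted image stamps; the 30x30 single-channel cutouts and the layer sizes are assumptions for illustration and do not reproduce the architectures evaluated in the record.

```python
# Minimal real/bogus classifier sketch for reference-subtracted image stamps,
# assuming 1-channel 30x30 cutouts; layer sizes are illustrative only and do not
# reproduce the networks evaluated in the record above.
import torch
import torch.nn as nn

class SmallVettingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 30x30 -> 15x15
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 15x15 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 64), nn.ReLU(),
            nn.Linear(64, 2),                             # 'real' vs. 'bogus' logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SmallVettingCNN()
    stamps = torch.randn(8, 1, 30, 30)                    # dummy mini-batch of stamps
    print(model(stamps).shape)                            # torch.Size([8, 2])
```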

  1. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
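
    The key idea above is the filtering step that keeps the candidate set from exploding. A generic sketch of that step (not the authors' emsampler code) is shown below: every new combination generated in an iteration is retained with the same probability, so the sample stays unbiased while its size stays bounded.

```python
# Generic sketch of the filtering step: every candidate combination produced in an
# iteration is kept with equal probability, so the retained set stays an unbiased
# sample of bounded size. Not the authors' emsampler implementation.
import random

def filter_candidates(candidates, max_kept, rng=random):
    """Keep an unbiased random subset of this iteration's candidate modes."""
    candidates = list(candidates)
    if len(candidates) <= max_kept:
        return candidates
    return rng.sample(candidates, max_kept)      # every candidate equally likely

if __name__ == "__main__":
    new_combinations = [f"mode_{i}" for i in range(5000)]    # hypothetical candidates
    kept = filter_candidates(new_combinations, max_kept=200)
    print(len(kept), "candidates carried to the next iteration")
```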

  2. Generalized Cartographic and Simultaneous Representation of Utility Networks for Decision-Support Systems and Crisis Management in Urban Environments

    Science.gov (United States)

    Becker, T.; König, G.

    2015-10-01

    Cartographic visualizations of crises are used to create a Common Operational Picture (COP) and enforce Situational Awareness by presenting relevant information to the involved actors. As nearly all crises affect geospatial entities, geo-data representations have to support location-specific analysis throughout the decision-making process. Meaningful cartographic presentation is needed for coordinating the activities of crisis managers in a highly dynamic situation, since operators' attention span and their spatial memories are limiting factors during the perception and interpretation process. Situational Awareness of operators, in conjunction with a COP, is a key aspect of the decision-making process and essential for making well thought-out and appropriate decisions. Although utility networks are among the most complex and most frequently required systems in the urban environment, a meaningful cartographic presentation of multiple utility networks with respect to disaster management does not exist. Therefore, an optimized visualization of utility infrastructure for emergency response procedures is proposed. The article will describe a conceptual approach on how to simplify, aggregate, and visualize multiple utility networks and their components to meet the requirements of the decision-making process and to support Situational Awareness.

  3. A large scale analysis of information-theoretic network complexity measures using chemical structures.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    Full Text Available This paper aims to investigate information-theoretic network complexity measures which have already been intensely used in mathematical- and medicinal chemistry including drug design. Numerous such measures have been developed so far but many of them lack a meaningful interpretation, e.g., we want to examine which kind of structural information they detect. Therefore, our main contribution is to shed light on the relatedness between some selected information measures for graphs by performing a large scale analysis using chemical networks. Starting from several sets containing real and synthetic chemical structures represented by graphs, we study the relatedness between a classical (partition-based) complexity measure called the topological information content of a graph and some others inferred by a different paradigm leading to partition-independent measures. Moreover, we evaluate the uniqueness of network complexity measures numerically. Generally, a high uniqueness is an important and desirable property when designing novel topological descriptors having the potential to be applied to large chemical databases.
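
    For readers unfamiliar with partition-based measures, the sketch below computes the Shannon entropy of a vertex partition with networkx; it partitions vertices by degree as a lightweight proxy, whereas the classical topological information content mentioned above partitions by automorphism-group orbits, so this is an illustrative simplification rather than the measure studied in the record.

```python
# Shannon entropy of a vertex partition, here by degree as a lightweight proxy for
# the partition-based topological information content (which uses automorphism
# orbits); intended only to illustrate the family of measures discussed above.
import math
from collections import Counter
import networkx as nx

def partition_entropy(G):
    n = G.number_of_nodes()
    classes = Counter(dict(G.degree()).values())     # vertex class sizes, by degree
    return -sum((c / n) * math.log2(c / n) for c in classes.values())

if __name__ == "__main__":
    ring = nx.cycle_graph(6)                         # all vertices equivalent
    tree = nx.balanced_tree(2, 3)                    # root, internal nodes, leaves
    print(partition_entropy(ring))                   # 0.0 bits
    print(partition_entropy(tree))
```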

  4. Direction of information flow in large-scale resting-state networks is frequency-dependent.

    Science.gov (United States)

    Hillebrand, Arjan; Tewarie, Prejaas; van Dellen, Edwin; Yu, Meichen; Carbo, Ellen W S; Douw, Linda; Gouw, Alida A; van Straaten, Elisabeth C W; Stam, Cornelis J

    2016-04-05

    Normal brain function requires interactions between spatially separated, and functionally specialized, macroscopic regions, yet the directionality of these interactions in large-scale functional networks is unknown. Magnetoencephalography was used to determine the directionality of these interactions, where directionality was inferred from time series of beamformer-reconstructed estimates of neuronal activation, using a recently proposed measure of phase transfer entropy. We observed well-organized posterior-to-anterior patterns of information flow in the higher-frequency bands (alpha1, alpha2, and beta band), dominated by regions in the visual cortex and posterior default mode network. Opposite patterns of anterior-to-posterior flow were found in the theta band, involving mainly regions in the frontal lobe that were sending information to a more distributed network. Many strong information senders in the theta band were also frequent receivers in the alpha2 band, and vice versa. Our results provide evidence that large-scale resting-state patterns of information flow in the human brain form frequency-dependent reentry loops that are dominated by flow from parieto-occipital cortex to integrative frontal areas in the higher-frequency bands, which is mirrored by a theta band anterior-to-posterior flow.

  5. Submarine canyons represent an essential habitat network for krill hotspots in a Large Marine Ecosystem.

    Science.gov (United States)

    Santora, Jarrod A; Zeno, Ramona; Dorman, Jeffrey G; Sydeman, William J

    2018-05-15

    Submarine canyon systems are ubiquitous features of marine ecosystems, known to support high levels of biodiversity. Canyons may be important to benthic-pelagic ecosystem coupling, but their role in concentrating plankton and structuring pelagic communities is not well known. We hypothesize that at the scale of a large marine ecosystem, canyons provide a critical habitat network, which maintain energy flow and trophic interactions. We evaluate canyon characteristics relative to the distribution and abundance of krill, critically important prey in the California Current Ecosystem. Using a geological database, we conducted a census of canyon locations, evaluated their dimensions, and quantified functional relationships with krill hotspots (i.e., sites of persistently elevated abundance) derived from hydro-acoustic surveys. We found that 76% of krill hotspots occurred within and adjacent to canyons. Most krill hotspots were associated with large shelf-incising canyons. Krill hotspots and canyon dimensions displayed similar coherence as a function of latitude and indicate a potential regional habitat network. The latitudinal migration of many fish, seabirds and mammals may be enhanced by using this canyon-krill network to maintain foraging opportunities. Biogeographic assessments and predictions of krill and krill-predator distributions under climate change may be improved by accounting for canyons in habitat models.

  6. Network connectivity paradigm for the large data produced by weather radar systems

    Science.gov (United States)

    Guenzi, Diego; Bechini, Renzo; Boraso, Rodolfo; Cremonini, Roberto; Fratianni, Simona

    2014-05-01

    The traffic over the Internet is constantly increasing; this is due in particular to social network activities but also to the enormous exchange of data caused especially by the so-called "Internet of Things". With this term we refer to every device that has the capability of exchanging information with other devices on the web. In geoscience (and, in particular, in meteorology and climatology) there is a constantly increasing number of sensors that are used to obtain data from different sources (like weather radars, digital rain gauges, etc.). This information-gathering activity, frequently, must be followed by a complex data analysis phase, especially when we have large data sets that can be very difficult to analyze (very long historical series of large data sets, for example), like the so called big data. These activities are particularly intensive in resource consumption and they lead to new computational models (like cloud computing) and new methods for storing data (like object store, linked open data, NOSQL or NewSQL). A weather radar system can be seen as one of the sensors mentioned above: it transmits a large amount of raw data over the network (up to 40 megabytes every five minutes), with 24h/24h continuity and in any weather condition. Weather radars are often located on peaks and in wild areas where connectivity is poor. For this reason radar measurements are sometimes processed partially on site and reduced in size to adapt them to the limited bandwidth currently available from data transmission systems. With the aim of preserving the maximum flow of information, an innovative network connectivity paradigm for the large data produced by weather radar systems is presented here. The study is focused on the Monte Settepani operational weather radar system, located over a wild peak summit in north-western Italy.

  7. Very large virtual compound spaces: construction, storage and utility in drug discovery.

    Science.gov (United States)

    Peng, Zhengwei

    2013-09-01

    Recent activities in the construction, storage and exploration of very large virtual compound spaces are reviewed by this report. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion with sizes ranging from 10^11 to 10^20 compounds. Multiple search methods have been developed to perform searches (e.g. similarity, exact and substructure) into those compound spaces without the need for full enumeration. The up-front investment spent on synthetic feasibility during the construction of some of those virtual compound spaces enables a wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activities in the area of exploring virtual compound spaces via the evolutionary approach based on Genetic Algorithm also suggests a positive shift of focus from method development to workflow, integration and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.

  8. The Effects of Topology on Throughput Capacity of Large Scale Wireless Networks

    Directory of Open Access Journals (Sweden)

    Qiuming Liu

    2017-03-01

    Full Text Available In this paper, we jointly consider the inhomogeneity and spatial dimension in large scale wireless networks. We study the effects of topology on the throughput capacity. This problem is inherently difficult since it is complex to handle the interference caused by simultaneous transmission. To solve this problem, we, according to the inhomogeneity of topology, divide the transmission into intra-cluster transmission and inter-cluster transmission. For the intra-cluster transmission, a spheroidal percolation model is constructed. The spheroidal percolation model guarantees a constant rate when a power control strategy is adopted. We also propose a cube percolation model for the inter-cluster transmission. Different from the spheroidal percolation model, a constant transmission rate can be achieved without power control. For both transmissions, we propose a routing scheme with five phases. By comparing the achievable rate of each phase, we obtain the rate bottleneck, which is the throughput capacity of the network.

  9. Global asymptotic stabilization of large-scale hydraulic networks using positive proportional controls

    DEFF Research Database (Denmark)

    Jensen, Tom Nørgaard; Wisniewski, Rafal

    2014-01-01

    An industrial case study involving a large-scale hydraulic network underlying a district heating system subject to structural changes is considered. The problem of controlling the pressure drop across the so-called end-user valves in the network to a designated vector of reference values under directional actuator constraints is addressed. The proposed solution consists of a set of decentralized positively constrained proportional control actions. The results show that the closed-loop system always has a globally asymptotically stable equilibrium point independently of the number of end-users. Furthermore, by a proper design of the controller gains, the closed-loop equilibrium point can be made to belong to an arbitrarily small neighborhood of the desired equilibrium point. Since there exists a globally asymptotically stable equilibrium point independently of the number of end-users in the system...

  10. Biomass Energy for Transport and Electricity: Large scale utilization under low CO2 concentration scenarios

    Energy Technology Data Exchange (ETDEWEB)

    Luckow, Patrick; Wise, Marshall A.; Dooley, James J.; Kim, Son H.

    2010-01-25

    This paper examines the potential role of large scale, dedicated commercial biomass energy systems under global climate policies designed to stabilize atmospheric concentrations of CO2 at 400ppm and 450ppm. We use an integrated assessment model of energy and agriculture systems to show that, given a climate policy in which terrestrial carbon is appropriately valued equally with carbon emitted from the energy system, biomass energy has the potential to be a major component of achieving these low concentration targets. The costs of processing and transporting biomass energy at much larger scales than current experience are also incorporated into the modeling. From the scenario results, 120-160 EJ/year of biomass energy is produced by midcentury and 200-250 EJ/year by the end of this century. In the first half of the century, much of this biomass is from agricultural and forest residues, but after 2050 dedicated cellulosic biomass crops become the dominant source. A key finding of this paper is the role that carbon dioxide capture and storage (CCS) technologies coupled with commercial biomass energy can play in meeting stringent emissions targets. Despite the higher technology costs of CCS, the resulting negative emissions used in combination with biomass are a very important tool in controlling the cost of meeting a target, offsetting the venting of CO2 from sectors of the energy system that may be more expensive to mitigate, such as oil use in transportation. The paper also discusses the role of cellulosic ethanol and Fischer-Tropsch biomass derived transportation fuels and shows that both technologies are important contributors to liquid fuels production, with unique costs and emissions characteristics. Through application of the GCAM integrated assessment model, it becomes clear that, given CCS availability, bioenergy will be used both in electricity and transportation.

  11. Datum maintenance of the main Egyptian geodetic control networks by utilizing Precise Point Positioning “PPP” technique

    Directory of Open Access Journals (Sweden)

    Mostafa Rabah

    2016-06-01

    To see how the lack of maintenance degrades the values of the HARN and NACN, the available HARN and NACN stations in the Nile Delta were observed. The processing of the tested part was done with the CSRS-PPP service, based on Precise Point Positioning “PPP”, and with Trimble Business Center “TBC”. The study shows the feasibility of Precise Point Positioning in updating the absolute positioning of the HARN network and its role in updating the reference frame (ITRF). The study also confirmed the necessity of datum maintenance for the Egyptian networks, a role that has so far been absent.

  12. Differences between child and adult large-scale functional brain networks for reading tasks.

    Science.gov (United States)

    Liu, Xin; Gao, Yue; Di, Qiqi; Hu, Jiali; Lu, Chunming; Nan, Yun; Booth, James R; Liu, Li

    2018-02-01

    Reading is an important high-level cognitive function of the human brain, requiring interaction among multiple brain regions. Revealing differences between children's large-scale functional brain networks for reading tasks and those of adults helps us to understand how the functional network changes over reading development. Here we used functional magnetic resonance imaging data of 17 adults (19-28 years old) and 16 children (11-13 years old), and graph theoretical analyses to investigate age-related changes in large-scale functional networks during rhyming and meaning judgment tasks on pairs of visually presented Chinese characters. We found that: (1) adults had stronger inter-regional connectivity and nodal degree in occipital regions, while children had stronger inter-regional connectivity in temporal regions, suggesting that adults rely more on visual orthographic processing whereas children rely more on auditory phonological processing during reading. (2) Only adults showed between-task differences in inter-regional connectivity and nodal degree, whereas children showed no task differences, suggesting the topological organization of adults' reading network is more specialized. (3) Children showed greater inter-regional connectivity and nodal degree than adults in multiple subcortical regions; the hubs in children were more distributed in subcortical regions while the hubs in adults were more distributed in cortical regions. These findings suggest that reading development is manifested by a shift from reliance on subcortical to cortical regions. Taken together, our study suggests that Chinese reading development is supported by developmental changes in brain connectivity properties, and some of these changes may be domain-general while others may be specific to the reading domain. © 2017 Wiley Periodicals, Inc.
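
    As a minimal illustration of the graph-theoretical quantities mentioned above (inter-regional connectivity, nodal degree, hubs), the sketch below thresholds a synthetic functional connectivity matrix and flags hub regions; the threshold, parcellation size and hub criterion are assumptions, not the parameters used in the study.

```python
# Illustrative graph-theoretic step: threshold a functional connectivity matrix,
# compute nodal degree, and flag hub regions. The synthetic matrix, the threshold
# and the hub criterion are assumptions for demonstration only.
import numpy as np

def nodal_degree_and_hubs(conn, threshold=0.3, hub_sd=1.0):
    """Return nodal degree and a boolean hub flag (degree > mean + hub_sd * std)."""
    adj = (np.abs(conn) > threshold).astype(int)
    np.fill_diagonal(adj, 0)                         # drop self-connections
    degree = adj.sum(axis=1)
    hubs = degree > degree.mean() + hub_sd * degree.std()
    return degree, hubs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_regions = 90                                   # e.g. an AAL-style parcellation
    conn = rng.uniform(-1, 1, size=(n_regions, n_regions))
    conn = (conn + conn.T) / 2                       # symmetrise
    degree, hubs = nodal_degree_and_hubs(conn)
    print("hub regions:", np.flatnonzero(hubs))
```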

  13. A Self-Organizing Spatial Clustering Approach to Support Large-Scale Network RTK Systems

    Directory of Open Access Journals (Sweden)

    Lili Shen

    2018-06-01

    Full Text Available The network real-time kinematic (RTK) technique can provide centimeter-level real time positioning solutions and play a key role in geo-spatial infrastructure. With ever-increasing popularity, network RTK systems will face issues in the support of large numbers of concurrent users. In the past, high-precision positioning services were oriented towards professionals and only supported a few concurrent users. Currently, precise positioning provides a spatial foundation for artificial intelligence (AI), and countless smart devices (autonomous cars, unmanned aerial-vehicles (UAVs), robotic equipment, etc.) require precise positioning services. Therefore, the development of approaches to support large-scale network RTK systems is urgent. In this study, we proposed a self-organizing spatial clustering (SOSC) approach which automatically clusters online users to reduce the computational load on the network RTK system server side. The experimental results indicate that both the SOSC algorithm and the grid algorithm can reduce the computational load efficiently, while the SOSC algorithm gives a more elastic and adaptive clustering solution with different datasets. The SOSC algorithm determines the cluster number and the mean distance to cluster center (MDTCC) according to the data set, while the grid approaches are all predefined. The side-effects of clustering algorithms on the user side are analyzed with real global navigation satellite system (GNSS) data sets. The experimental results indicate that 10 km can be safely used as the cluster radius threshold for the SOSC algorithm without significantly reducing the positioning precision and reliability on the user side.
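
    A much-simplified stand-in for the clustering idea (not the published SOSC algorithm) is sketched below: each incoming user joins the nearest existing cluster whose centre lies within the 10 km radius threshold reported above, otherwise it seeds a new cluster; the coordinates are made up.

```python
# Simplified leader-based stand-in for the clustering idea: a user joins the nearest
# existing cluster whose centre is within the radius threshold, otherwise it seeds a
# new cluster. Coordinates below are made up; this is not the published SOSC method.
import math

def haversine_km(p, q):
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = math.sin((lat2 - lat1) / 2) ** 2 + \
        math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def leader_clustering(users, radius_km=10.0):
    clusters = []                        # each cluster: {"centre": ..., "members": [...]}
    for u in users:
        best = min(clusters, key=lambda c: haversine_km(u, c["centre"]), default=None)
        if best is not None and haversine_km(u, best["centre"]) <= radius_km:
            best["members"].append(u)
        else:
            clusters.append({"centre": u, "members": [u]})
    return clusters

if __name__ == "__main__":
    users = [(39.90, 116.40), (39.95, 116.45), (40.50, 117.00)]   # (lat, lon) pairs
    print(len(leader_clustering(users)), "clusters")
```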

  14. Word-Length Correlations and Memory in Large Texts: A Visibility Network Analysis

    Directory of Open Access Journals (Sweden)

    Lev Guzmán-Vargas

    2015-11-01

    Full Text Available We study the correlation properties of word lengths in large texts from 30 ebooks in the English language from the Gutenberg Project (www.gutenberg.org) using the natural visibility graph method (NVG). NVG converts a time series into a graph and then analyzes its graph properties. First, the original sequence of words is transformed into a sequence of values containing the length of each word, and then, it is integrated. Next, we apply the NVG to the integrated word-length series and construct the network. We show that the degree distribution of that network follows a power law, P(k) ∼ k^(−γ), with two regimes, which are characterized by the exponents γ_s ≈ 1.7 (at short degree scales) and γ_l ≈ 1.3 (at large degree scales). This suggests that word lengths are much more strongly correlated at large distances between words than at short distances between words. That finding is also supported by the detrended fluctuation analysis (DFA) and the recurrence time distribution. These results provide new information about the universal characteristics of the structure of written texts beyond that given by word frequencies.
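
    The natural visibility graph construction itself is simple enough to sketch. The code below builds an NVG over a toy integrated word-length series and prints the node degrees; a real analysis, as described above, would use full books and then fit the power-law regimes of the degree distribution.

```python
# Minimal natural visibility graph (NVG) construction applied to a toy integrated
# word-length series; a real analysis would use whole books, as described above.
import itertools
import networkx as nx

def natural_visibility_graph(series):
    """Connect i and j if the straight line between them clears every bar in between."""
    G = nx.Graph()
    G.add_nodes_from(range(len(series)))
    for i, j in itertools.combinations(range(len(series)), 2):
        yi, yj = series[i], series[j]
        if all(series[k] < yi + (yj - yi) * (k - i) / (j - i) for k in range(i + 1, j)):
            G.add_edge(i, j)
    return G

if __name__ == "__main__":
    text = "we study the correlation properties of word lengths in large texts"
    lengths = [len(w) for w in text.split()]                   # word-length series
    integrated = list(itertools.accumulate(lengths))           # integrated series
    G = natural_visibility_graph(integrated)
    print(sorted(dict(G.degree()).values(), reverse=True))     # degree sequence
```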

  15. Heave motion prediction of a large barge in random seas by using artificial neural network

    Science.gov (United States)

    Lee, Hsiu Eik; Liew, Mohd Shahir; Zawawi, Noor Amila Wan Abdullah; Toloue, Iraj

    2017-11-01

    This paper describes the development of a multi-layer feed-forward artificial neural network (ANN) to predict the rigid-body heave motions of a large catenary-moored barge subjected to multi-directional irregular waves. The barge is idealized as a rigid plate of finite draft with planar dimensions 160m (length) and 100m (width) which is held on station using a six point chain catenary mooring in 50m water depth. Hydroelastic effects are neglected from the physical model, as the chief intent of this study is large-plate rigid-body hydrodynamics modelling using an ANN. Even with this assumption, the computational cost of time-domain coupled hydrodynamic simulations of a moored floating body is considerable, particularly if a large number of simulations are required, such as in the case of response-based design (RBD) methods. As an alternative to time-consuming numerical hydrodynamics, a regression-type ANN model has been developed for efficient prediction of the barge's heave responses to random waves from various directions. It was determined that a network comprising 3 input features, 2 hidden layers with 5 neurons each and 1 output was sufficient to produce acceptable predictions within 0.02 mean squared error. By benchmarking results from the ANN against those generated by a fully coupled dynamic model in OrcaFlex, it is demonstrated that the ANN is capable of predicting the barge's heave responses with acceptable accuracy.
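
    A minimal scikit-learn sketch matching the stated topology (3 inputs, two hidden layers of 5 neurons, 1 output) is given below; the input features and the toy heave response are synthetic placeholders, not the wave and barge data used in the study.

```python
# Minimal sketch of the stated topology (3 inputs, two hidden layers of 5 neurons,
# 1 output). The features and the toy heave response below are synthetic
# placeholders, not the wave/response data used in the study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Hypothetical features: wave height [m], peak period [s], wave direction [deg]
X = rng.uniform([1.0, 6.0, 0.0], [6.0, 16.0, 360.0], size=(2000, 3))
y = 0.3 * X[:, 0] * np.sin(np.radians(X[:, 2])) + 0.05 * X[:, 1]   # toy heave response

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(5, 5), max_iter=5000, random_state=0),
)
model.fit(X[:1500], y[:1500])
print("test MSE:", mean_squared_error(y[1500:], model.predict(X[1500:])))
```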

  16. A methodology for the synthesis of heat exchanger networks having large numbers of uncertain parameters

    International Nuclear Information System (INIS)

    Novak Pintarič, Zorka; Kravanja, Zdravko

    2015-01-01

    This paper presents a robust computational methodology for the synthesis and design of flexible HENs (Heat Exchanger Networks) having large numbers of uncertain parameters. This methodology combines several heuristic methods which progressively lead to a flexible HEN design at a specific level of confidence. During the first step, a HEN topology is generated under nominal conditions, followed by determining those points critical for flexibility. A significantly reduced multi-scenario model for flexible HEN design is formulated at the nominal point with the flexibility constraints at the critical points. The optimal design obtained is tested by stochastic Monte Carlo optimization and the flexibility index through solving one-scenario problems within a loop. The presented methodology is novel in its enormous reduction of the number of scenarios, and hence of the computational effort, in HEN design problems. Despite several simplifications, the capability of designing flexible HENs with large numbers of uncertain parameters, which are typical throughout industry, is not compromised. An illustrative case study is presented for flexible HEN synthesis comprising 42 uncertain parameters. - Highlights: • Methodology for HEN (Heat Exchanger Network) design under uncertainty is presented. • The main benefit is solving HENs having large numbers of uncertain parameters. • Drastically reduced multi-scenario HEN design problem is formulated through several steps. • Flexibility of HEN is guaranteed at a specific level of confidence.

  17. Suspended sediment transport through a large fluvial-tidal channel network

    Science.gov (United States)

    Wright, Scott A.; Morgan-King, Tara L.

    2015-01-01

    The confluence of the Sacramento and San Joaquin Rivers, CA, forms a large network of interconnected channels, referred to as the Sacramento-San Joaquin Delta (the Delta). The Delta comprises the transition zone between the fluvial influences of the upstream rivers and the tidal influences of San Francisco Bay downstream. Formerly an extensive tidal marsh, the hydrodynamics and geomorphology of the Delta have been substantially modified by humans to support agriculture, navigation, and water supply. These modifications, including construction of new channels, diking and draining of tidal wetlands, dredging of navigation channels, and the operation of large pumping facilities for distribution of freshwater from the Delta to other parts of the state, have had a dramatic impact on the physical and ecological processes within the Delta. To better understand the current physical processes, and their linkages to ecological processes, the USGS maintains an extensive network of flow, sediment, and water quality gages in the Delta. Flow gaging is accomplished through use of the index-velocity method, and sediment monitoring uses turbidity as a surrogate for suspended-sediment concentration. Herein, we present analyses of the transport and dispersal of suspended sediment through the complex network of channels in the Delta. The primary source of sediment to the Delta is the Sacramento River, which delivers pulses of sediment primarily during winter and spring runoff events. Upon reaching the Delta, the sediment pulses move through the fluvial-tidal transition while also encountering numerous channel junctions as the Sacramento River branches into several distributary channels. The monitoring network allows us to track these pulses through the network and document the dominant transport pathways for suspended sediment. Further, the flow gaging allows for an assessment of the relative effects of advection (the fluvial signal) and dispersion (from the tides) on the sediment pulses as they

  18. An improvement of tree-Rule firewall for a large network: supporting large rule size and low delay

    NARCIS (Netherlands)

    Chomsiri, Thawatchai; He, Xiangjian; Nanda, Priyadarsi; Tan, Zhiyuan

    Firewalls are important network devices which provide the first line of defense against network threats. This level of defense depends on firewall rules. Traditional firewalls, i.e., Cisco ACL, IPTABLES, Check Point and Juniper NetScreen firewalls, use listed rules to regulate packet flows. However, the

  19. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Directory of Open Access Journals (Sweden)

    Xiangyun Xiao

    Full Text Available The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.
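
    A schematic of the split-and-parallelise idea (not the authors' framework) is sketched below: genes are partitioned into modules and each module's sub-model is fitted in its own process; the least-squares fit of a linear dX/dt = W X model is a toy stand-in for the ODE-based optimization, and the synchronous pool.map stands in for the asynchronous communication described above.

```python
# Schematic split-and-parallelise sketch: genes are partitioned into modules and
# each module's sub-model is fitted in its own process. The least-squares fit of a
# linear dX/dt = W X model is a toy stand-in for the ODE-based optimisation.
import numpy as np
from multiprocessing import Pool

def fit_module(args):
    """Fit dX/dt ~ W X for the genes of one module by ordinary least squares."""
    module_genes, expr, dexpr = args
    A = expr.T                                  # (time points, all genes)
    B = dexpr[module_genes, :].T                # derivatives of this module's genes
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    return module_genes, W.T                    # regulatory weights onto the module

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_genes, n_times = 200, 50
    expr = rng.normal(size=(n_genes, n_times))          # synthetic expression data
    dexpr = np.gradient(expr, axis=1)                   # crude time derivatives
    modules = np.array_split(np.arange(n_genes), 8)     # stand-in for real modules
    with Pool(4) as pool:
        results = pool.map(fit_module, [(m, expr, dexpr) for m in modules])
    print([W.shape for _, W in results])                # one weight block per module
```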

  20. A new asynchronous parallel algorithm for inferring large-scale gene regulatory networks.

    Science.gov (United States)

    Xiao, Xiangyun; Zhang, Wei; Zou, Xiufen

    2015-01-01

    The reconstruction of gene regulatory networks (GRNs) from high-throughput experimental data has been considered one of the most important issues in systems biology research. With the development of high-throughput technology and the complexity of biological problems, we need to reconstruct GRNs that contain thousands of genes. However, when many existing algorithms are used to handle these large-scale problems, they will encounter two important issues: low accuracy and high computational cost. To overcome these difficulties, the main goal of this study is to design an effective parallel algorithm to infer large-scale GRNs based on high-performance parallel computing environments. In this study, we proposed a novel asynchronous parallel framework to improve the accuracy and lower the time complexity of large-scale GRN inference by combining splitting technology and ordinary differential equation (ODE)-based optimization. The presented algorithm uses the sparsity and modularity of GRNs to split whole large-scale GRNs into many small-scale modular subnetworks. Through the ODE-based optimization of all subnetworks in parallel and their asynchronous communications, we can easily obtain the parameters of the whole network. To test the performance of the proposed approach, we used well-known benchmark datasets from Dialogue for Reverse Engineering Assessments and Methods challenge (DREAM), experimentally determined GRN of Escherichia coli and one published dataset that contains more than 10 thousand genes to compare the proposed approach with several popular algorithms on the same high-performance computing environments in terms of both accuracy and time complexity. The numerical results demonstrate that our parallel algorithm exhibits obvious superiority in inferring large-scale GRNs.

  1. An efficient method based on the uniformity principle for synthesis of large-scale heat exchanger networks

    International Nuclear Information System (INIS)

    Zhang, Chunwei; Cui, Guomin; Chen, Shang

    2016-01-01

    Highlights: • Two dimensionless uniformity factors are presented for the heat exchanger network. • The grouping of process streams reduces the computational complexity of large-scale HENS problems. • The optimal sub-network can be obtained by the Powell particle swarm optimization algorithm. • The method is illustrated by a case study involving 39 process streams, with a better solution. - Abstract: The optimal design of large-scale heat exchanger networks is a difficult task due to the inherent non-linear characteristics and the combinatorial nature of heat exchangers. To solve large-scale heat exchanger network synthesis (HENS) problems, two dimensionless uniformity factors to describe the heat exchanger network (HEN) uniformity in terms of the temperature difference and the accuracy of process stream grouping are deduced. Additionally, a novel algorithm that combines deterministic and stochastic optimizations to obtain an optimal sub-network with a suitable heat load for a given group of streams is proposed, and is named the Powell particle swarm optimization (PPSO). As a result, the synthesis of large-scale heat exchanger networks is divided into two corresponding sub-parts, namely, the grouping of process streams and the optimization of sub-networks. This approach reduces the computational complexity and increases the efficiency of the proposed method. The robustness and effectiveness of the proposed method are demonstrated by solving a large-scale HENS problem involving 39 process streams, and the results obtained are better than those previously published in the literature.

  2. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz

    2016-12-29

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in the cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.
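
    To illustrate the genetic-algorithm machinery in the abstract above (the fitness function, channel model and GA parameters here are placeholders, not those of the paper), the sketch below evolves binary antenna-selection masks against a toy sum-capacity objective on a random MIMO channel.

```python
# Illustrative GA for antenna subset selection: individuals are binary masks over
# the transmit antennas; the fitness is a toy sum-capacity proxy on one random
# channel realisation. All sizes and rates below are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_TX, N_SELECT, N_RX = 64, 8, 4
H = rng.normal(size=(N_RX, N_TX)) + 1j * rng.normal(size=(N_RX, N_TX))

def fitness(mask):
    Hs = H[:, mask.astype(bool)]                                # selected antennas
    _, logdet = np.linalg.slogdet(np.eye(N_RX) + Hs @ Hs.conj().T)
    return logdet / np.log(2)                                   # bits/s/Hz proxy

def random_mask():
    mask = np.zeros(N_TX, dtype=int)
    mask[rng.choice(N_TX, N_SELECT, replace=False)] = 1
    return mask

def repair(mask):
    """Force exactly N_SELECT active antennas after crossover and mutation."""
    on = np.flatnonzero(mask)
    if len(on) > N_SELECT:
        mask[rng.choice(on, len(on) - N_SELECT, replace=False)] = 0
    while mask.sum() < N_SELECT:
        mask[rng.choice(np.flatnonzero(mask == 0))] = 1
    return mask

population = [random_mask() for _ in range(40)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                                   # truncation selection
    children = []
    for _ in range(20):
        a, b = rng.choice(20, 2, replace=False)
        cut = int(rng.integers(1, N_TX))
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = rng.random(N_TX) < 0.02                          # bit-flip mutation
        children.append(repair(np.where(flip, 1 - child, child)))
    population = parents + children

population.sort(key=fitness, reverse=True)
print("best GA fitness:", round(fitness(population[0]), 3))
```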

  3. Electricity network limitations on large-scale deployment of wind energy

    Energy Technology Data Exchange (ETDEWEB)

    Fairbairn, R.J.

    1999-07-01

    This report sought to identify limitations on the large-scale deployment of wind energy in the UK. A description of the existing electricity supply system in England, Scotland and Wales is given, and operational aspects of the integrated electricity networks, licence conditions, types of wind turbine generators, and the scope for deployment of wind energy in the UK are addressed. Technical limitations and the technical criteria stipulated by the Distribution and Grid Codes, the effects of system losses, and commercial issues are reviewed. Potential solutions to technical limitations are proposed, and recommendations are outlined.

  4. Automated Stellar Classification for Large Surveys with EKF and RBF Neural Networks

    Institute of Scientific and Technical Information of China (English)

    Ling Bai; Ping Guo; Zhan-Yi Hu

    2005-01-01

    An automated classification technique for large-size stellar surveys is proposed. It uses the extended Kalman filter as a feature selector and pre-classifier of the data, and the radial basis function neural networks for the classification. Experiments with real data have shown that the correct classification rate can reach as high as 93%, which is quite satisfactory. When different system models are selected for the extended Kalman filter, the classification results are relatively stable. It is shown that for this particular case the result using the extended Kalman filter is better than that using principal component analysis.

  5. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz; Ide, Anatole; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2016-01-01

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering the continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used in the cases with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.

  6. Report for fiscal 2000 on electronic patient record network discussion committee. Survey on promotion of medical information use utilizing electronic patient record network; 2000 nendo denshi karute network kento iinkai hokokusho. Denshi karute network wo katsuyoshita iryo johoka no sokushin ni kansuru chosa

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    Based on movements in the most advanced IT technologies and in social system reform in the medical and health preservation fields, discussions were held on the issues and measures to be addressed to realize a medical information network, on methods for the secondary utilization of medical information, and on the issues and measures involved in such utilization. A patient record is fundamentally a document containing a patient's confidential medical information, and doctors may be sued by the patient if they disclose or exchange it. Many company owners, politicians and salaried employees would not want their past illnesses, current diagnoses or treatments to be made public. The electronic patient record network therefore carries a conflicting proposition: its value is raised by means of data re-utilization, secondary utilization and information exchange. Preparation of the database requires multilateral analyses and classifications, as well as sufficient discussion and realistic execution consistent with the personal information protection law, regarding whether the information is something the patient wants exchanged or disclosed, or something to be exchanged or disclosed even against the patient's wishes, not to speak of attention to the 5W1H. (NEDO)

  7. The Potential and Utilization of Unused Energy Sources for Large-Scale Horticulture Facility Applications under Korean Climatic Conditions

    Directory of Open Access Journals (Sweden)

    In Tak Hyun

    2014-07-01

    Full Text Available As the use of fossil fuel has increased, not only in construction, but also in agriculture due to the drastic industrial development in recent times, the problems of heating costs and global warming are getting worse. Therefore, the introduction of more reliable and environmentally-friendly alternative energy sources has become urgent, and the same trend is found in large-scale horticulture facilities. In this study, among many alternative energy sources, we investigated the reserves and the potential of various unused energy sources which have great potential but are nowadays wasted due to limitations in their utilization. In addition, we utilized available unused energy as a heat source for a heat pump in a large-scale horticulture facility and analyzed its feasibility through EnergyPlus simulation modeling. Accordingly, the discharge flow rate from the Fan Coil Unit (FCU) in the horticulture facility, the discharge air temperature, and the return temperature were analyzed. The performance and heat consumption of each heat source were compared with those of conventional boilers. The result showed that the power load of the heat pump was decreased, and thus the heat efficiency was increased, as the temperature of the heat source was increased. Among the analyzed heat sources, power plant waste heat, which had the highest heat source temperature, consumed the least electric energy and showed the highest efficiency.

  8. Antithrombotic Utilization Trends after Noncardioembolic Ischemic Stroke or TIA in the Setting of Large Antithrombotic Trials (2002–2009)

    Science.gov (United States)

    Khan, Amir S.; Qureshi, Adnan I.

    2015-01-01

    Background and Purpose Several large trials published over the last decade have significantly altered recommended guidelines for therapy following a noncardioembolic ischemic stroke or transient ischemic attack (TIA). The impact of these studies on patient usage of alternative antithrombotic agents has hitherto not been evaluated. We examined the usage of these agents in the United States over the last decade, with regard to the publication of the Management of Atherothrombosis with Clopidogrel in High-Risk Patients (MATCH), European/Australasian Stroke Prevention in Reversible Ischaemia Trial (ESPRIT), and Prevention Regimen for Effectively Avoiding Second Strokes (PRoFESS) clinical trials, in order to test the hypothesis that resulting recommendations are reflected in usage trends. Methods Antithrombotic utilization was prospectively collected as part of the National Ambulatory Medical Care Survey (NAMCS) on a total of 53,608,351 patients in the United States between 2002 and 2009. Patients with a history of ischemic stroke or TIA were included. Patients were excluded if there was a prior history of subarachnoid or intracerebral hemorrhage, or if other indications for antithrombotic treatment were present, including deep venous thrombosis, pulmonary embolism, atrial fibrillation or flutter, mechanical cardiac valve replacement, congestive heart failure, coronary artery disease, peripheral arterial disease, and rheumatoid arthritis. Annual utilization of the following antithrombotic strategies was compared in 53,608,351 patients: 1) aspirin monotherapy, 2) clopidogrel monotherapy, 3) combined clopidogrel and aspirin, 4) combined extended-release dipyridamole (ERDP) and aspirin, and 5) warfarin. Annual utilization was compared before and after publication of MATCH, ESPRIT, and PRoFESS in 2004, 2006, and 2008, respectively. Trend analysis was performed with the Mantel–Haenszel test for trends. Sensitivity analysis of demographic and clinical characteristics

  9. Abnormal binding and disruption in large scale networks involved in human partial seizures

    Directory of Open Access Journals (Sweden)

    Bartolomei Fabrice

    2013-12-01

    Full Text Available There is a marked increase in the number of electrophysiological and neuroimaging studies dealing with large scale brain connectivity in the epileptic brain. Our view of the epileptogenic process in the brain has largely evolved over the last twenty years from the historical concept of the “epileptic focus” to a more complex description of “epileptogenic networks” involved in the genesis and “propagation” of epileptic activities. In particular, a large number of studies have been dedicated to the analysis of intracerebral EEG signals to characterize the dynamics of interactions between brain areas during temporal lobe seizures. These studies have reported that large scale functional connectivity is dramatically altered during seizures, particularly during temporal lobe seizure genesis and development. Dramatic changes in neural synchrony provoked by epileptic rhythms are also responsible for the production of ictal symptoms or changes in patients’ behaviour such as automatisms, emotional changes or consciousness alteration. Besides these studies dedicated to seizures, large-scale network connectivity during the interictal state has also been investigated, not only to define biomarkers of epileptogenicity but also to better understand the cognitive impairments observed between seizures.

  10. Large Scale Proteomic Data and Network-Based Systems Biology Approaches to Explore the Plant World.

    Science.gov (United States)

    Di Silvestre, Dario; Bergamaschi, Andrea; Bellini, Edoardo; Mauri, PierLuigi

    2018-06-03

    The investigation of plant organisms by means of data-derived systems biology approaches based on network modeling is mainly characterized by genomic data, while the potential of proteomics is largely unexplored. This delay is mainly caused by the paucity of plant genomic/proteomic sequences and annotations which are fundamental to perform mass-spectrometry (MS) data interpretation. However, Next Generation Sequencing (NGS) techniques are contributing to filling this gap and an increasing number of studies are focusing on plant proteome profiling and protein-protein interactions (PPIs) identification. Interesting results were obtained by evaluating the topology of PPI networks in the context of organ-associated biological processes as well as plant-pathogen relationships. These examples foreshadow well the benefits that these approaches may provide to plant research. Thus, in addition to providing an overview of the main omics technologies recently used on plant organisms, we will focus on studies that rely on concepts of module, hub and shortest path, and how they can contribute to the plant discovery processes. In this scenario, we will also consider gene co-expression networks, and some examples of integration with metabolomic data and genome-wide association studies (GWAS) to select candidate genes will be mentioned.

  11. Working memory training mostly engages general-purpose large-scale networks for learning.

    Science.gov (United States)

    Salmi, Juha; Nyberg, Lars; Laine, Matti

    2018-03-21

    The present meta-analytic study examined brain activation changes following working memory (WM) training, a form of cognitive training that has attracted considerable interest. Comparisons with perceptual-motor (PM) learning revealed that WM training engages domain-general large-scale networks for learning encompassing the dorsal attention and salience networks, sensory areas, and striatum. Also the dynamics of the training-induced brain activation changes within these networks showed a high overlap between WM and PM training. The distinguishing feature for WM training was the consistent modulation of the dorso- and ventrolateral prefrontal cortex (DLPFC/VLPFC) activity. The strongest candidate for mediating transfer to similar untrained WM tasks was the frontostriatal system, showing higher striatal and VLPFC activations, and lower DLPFC activations after training. Modulation of transfer-related areas occurred mostly with longer training periods. Overall, our findings place WM training effects into a general perception-action cycle, where some modulations may depend on the specific cognitive demands of a training task. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    Science.gov (United States)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of river bathymetry incorporation is more significant in the 2D model as compared to the 1D model.

  13. Cooperative HARQ Assisted NOMA Scheme in Large-scale D2D Networks

    KAUST Repository

    Shi, Zheng

    2017-07-13

    This paper develops an interference aware design for cooperative hybrid automatic repeat request (HARQ) assisted non-orthogonal multiple access (NOMA) scheme for large-scale device-to-device (D2D) networks. Specifically, interference aware rate selection and power allocation are considered to maximize long term average throughput (LTAT) and area spectral efficiency (ASE). The design framework is based on stochastic geometry that jointly accounts for the spatial interference correlation at the NOMA receivers as well as the temporal interference correlation across HARQ transmissions. It is found that ignoring the effect of the aggregate interference, or overlooking the spatial and temporal correlation in interference, highly overestimates the NOMA performance and produces misleading design insights. An interference oblivious selection for the power and/or transmission rates leads to violating the network outage constraints. To this end, the results demonstrate the effectiveness of NOMA transmission and manifest the importance of the cooperative HARQ to combat the negative effect of the network aggregate interference. For instance, compared to the non-cooperative HARQ assisted NOMA, the proposed scheme can yield an outage probability reduction of 32%. Furthermore, an interference aware optimal design that maximizes the LTAT given outage constraints leads to a 47% throughput improvement over the HARQ-assisted orthogonal multiple access (OMA) scheme.

  14. Examining Food Risk in the Large using a Complex, Networked System-of-systems Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, John [Los Alamos National Laboratory]; Newkirk, Ryan [U OF MINNESOTA]; Mc Donald, Mark P [VANDERBILT U]

    2010-12-03

    The food production infrastructure is a highly complex system of systems. Characterizing the risks of intentional contamination in multi-ingredient manufactured foods is extremely challenging because the risks depend on the vulnerabilities of food processing facilities and on the intricacies of the supply-distribution networks that link them. A pure engineering approach to modeling the system is impractical because of the overall system complexity and paucity of data. A methodology is needed to assess food contamination risk 'in the large', based on current, high-level information about manufacturing facilities, commodities and markets, that will indicate which food categories are most at risk of intentional contamination and warrant deeper analysis. The approach begins by decomposing the system for producing a multi-ingredient food into instances of two subsystem archetypes: (1) the relevant manufacturing and processing facilities, and (2) the networked commodity flows that link them to each other and consumers. Ingredient manufacturing subsystems are modeled as generic systems dynamics models with distributions of key parameters that span the configurations of real facilities. Networks representing the distribution systems are synthesized from general information about food commodities. This is done in a series of steps. First, probability networks representing the aggregated flows of food from manufacturers to wholesalers, retailers, other manufacturers, and direct consumers are inferred from high-level approximate information. This is followed by disaggregation of the general flows into flows connecting 'large' and 'small' categories of manufacturers, wholesalers, retailers, and consumers. Optimization methods are then used to determine the most likely network flows consistent with given data. Vulnerability can be assessed for a potential contamination point using a modified CARVER + Shock model. Once the facility and

  15. Formulation and Design of a CO2 Utilization Network Detailed Through a Conceptual Example

    DEFF Research Database (Denmark)

    Frauzem, Rebecca; Fjellerup, Kasper; Gani, Rafiqul

    information is available to describe the network mathematically, the most promising paths based on known technologies are designed and analyzed first. This makes the stages iterative rather than purely sequential. As part of this, the network is analyzed in the conceptual example of methanol synthesis via CO2...

  16. A Survey of K-12 Teachers' Utilization of Social Networks as a Professional Resource

    Science.gov (United States)

    Hunter, Leah J.; Hall, Cristin M.

    2018-01-01

    Teachers are increasingly using social networks, including social media and other Internet applications, to look for educational resources. This study shares results from a survey examining patterns of social network application use among K-12 teachers in the United States. A sample of 154 teachers (18 males, 136 females) in the United States…

  17. Utilizing Network QoS for Dependability of Adaptive Smart Grid Control

    DEFF Research Database (Denmark)

    Madsen, Jacob Theilgaard; Kristensen, Thomas le Fevre; Olsen, Rasmus Løvenstein

    2014-01-01

    A smart grid is a complex system consisting of a wide range of electric grid components, entities controlling power distribution, generation and consumption, and a communication network supporting data exchange. This paper focuses on the influence of imperfect network conditions on smart grid con...

  18. Restoring large-scale brain networks in PTSD and related disorders: a proposal for neuroscientifically-informed treatment interventions

    Directory of Open Access Journals (Sweden)

    Ruth A. Lanius

    2015-03-01

    Background: Three intrinsic connectivity networks in the brain, namely the central executive, salience, and default mode networks, have been identified as crucial to the understanding of higher cognitive functioning, and the functioning of these networks has been suggested to be impaired in psychopathology, including posttraumatic stress disorder (PTSD). Objective: (1) To describe three main large-scale networks of the human brain; (2) to discuss the functioning of these neural networks in PTSD and related symptoms; and (3) to offer hypotheses for neuroscientifically-informed interventions based on treating the abnormalities observed in these neural networks in PTSD and related disorders. Method: Literature relevant to this commentary was reviewed. Results: Increasing evidence for altered functioning of the central executive, salience, and default mode networks in PTSD has been demonstrated. We suggest that each network is associated with specific clinical symptoms observed in PTSD, including cognitive dysfunction (central executive network), increased and decreased arousal/interoception (salience network), and an altered sense of self (default mode network). Specific testable neuroscientifically-informed treatments aimed to restore each of these neural networks and related clinical dysfunction are proposed. Conclusions: Neuroscientifically-informed treatment interventions will be essential to future research agendas aimed at targeting specific PTSD and related symptoms.

  19. A triple network connectivity study of large-scale brain systems in cognitively normal APOE4 carriers

    Directory of Open Access Journals (Sweden)

    Xia Wu

    2016-09-01

    The triple network model, consisting of the central executive network, salience network and default mode network, has been recently employed to understand dysfunction in core networks across various disorders. Here we used the triple network model to investigate the large-scale brain networks in cognitively normal APOE4 carriers who are at risk of Alzheimer’s disease (AD). To explore the functional connectivity for each of the three networks and the effective connectivity among them, we evaluated 17 cognitively normal individuals with a family history of AD and at least one copy of the apolipoprotein e4 (APOE4) allele and compared the findings to those of 12 individuals who did not carry the APOE4 gene or have a family history of AD, using independent component analysis and a Bayesian network approach. Our findings indicated altered within-network connectivity that suggests future cognitive decline risk, and preserved between-network connectivity that may support their current preserved cognition in the cognitively normal APOE4 allele carriers. The study provides novel insights into our understanding of the risk factors for AD and their influence on the triple network model of major psychopathology.

  20. Analysis, calculation and utilization of the k-balance attribute in interdependent networks

    Science.gov (United States)

    Liu, Zheng; Li, Qing; Wang, Dan; Xu, Mingwei

    2018-05-01

    Interdependent networks, where two networks depend on each other, are becoming more and more significant in modern systems. From previous work, it can be concluded that interdependent networks are more vulnerable than a single network. The robustness in interdependent networks deserves special attention. In this paper, we propose a metric of robustness from a new perspective-the balance. First, we define the balance-coefficient of the interdependent system. Based on precise analysis and derivation, we prove some significant theories and provide an efficient algorithm to compute the balance-coefficient. Finally, we propose an optimal solution to reduce the balance-coefficient to enhance the robustness of the given system. Comprehensive experiments confirm the efficiency of our algorithms.

  1. A theoretical bilevel control scheme for power networks with large-scale penetration of distributed renewable resources

    DEFF Research Database (Denmark)

    Boroojeni, Kianoosh; Amini, M. Hadi; Nejadpak, Arash

    2016-01-01

    In this paper, we present a bilevel control framework to achieve a highly-reliable smart distribution network with large-scale penetration of distributed renewable resources (DRRs). We assume that the power distribution network consists of several residential/commercial communities. In the first ...

  2. Large-scale brain networks underlying language acquisition in early infancy

    Directory of Open Access Journals (Sweden)

    Fumitaka eHomae

    2011-05-01

    A critical issue in human development is that of whether the language-related areas in the left frontal and temporal regions work as a functional network in preverbal infants. Here, we used 94-channel near-infrared spectroscopy (NIRS) to reveal the functional networks in the brains of sleeping 3-month-old infants with and without presenting speech sounds. During the first 3 min, we measured spontaneous brain activation (period 1). After period 1, we provided stimuli by playing Japanese sentences for 3 min (period 2). Finally, we measured brain activation for 3 min without providing the stimulus (period 3), as in period 1. We found that not only the bilateral temporal and temporoparietal regions but also the prefrontal and occipital regions showed oxygenated hemoglobin (oxy-Hb) signal increases and deoxygenated hemoglobin (deoxy-Hb) signal decreases when speech sounds were presented to infants. By calculating time-lagged cross-correlations and coherences of oxy-Hb signals between channels, we tested the functional connectivity for the 3 periods. The oxy-Hb signals in neighboring channels, as well as their homologous channels in the contralateral hemisphere, showed high correlation coefficients in period 1. Similar correlations were observed in period 2; however, the number of channels showing high correlations was higher in the ipsilateral hemisphere, especially in the anterior-posterior direction. The functional connectivity in period 3 showed a close relationship between the frontal and temporal regions, which was less prominent in period 1, indicating that these regions form the functional networks and work as a hysteresis system that has memory of the previous inputs. We propose a hypothesis that the spatiotemporally large-scale brain networks, including the frontal and temporal regions, underlie speech processing in infants and they might play important roles in language acquisition during infancy.
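
    The functional-connectivity measure described above is, for each channel pair, a cross-correlation of oxy-Hb time series evaluated over a range of time lags. Below is a minimal sketch of that computation on synthetic signals; the sampling rate, lag window, and simulated delay are illustrative assumptions, not parameters of the study.

```python
import numpy as np

def lagged_xcorr(x, y, max_lag):
    """Pearson correlation between two signals for lags -max_lag..+max_lag (in samples)."""
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag < 0:
            corrs.append(np.corrcoef(x[:lag], y[-lag:])[0, 1])
        elif lag > 0:
            corrs.append(np.corrcoef(x[lag:], y[:-lag])[0, 1])
        else:
            corrs.append(np.corrcoef(x, y)[0, 1])
    return lags, np.array(corrs)

# Synthetic oxy-Hb traces for two channels: 10 Hz sampling over a 3 min period (assumed values).
rng = np.random.default_rng(0)
fs = 10.0
t = np.arange(0.0, 180.0, 1.0 / fs)
slow = np.sin(2.0 * np.pi * 0.05 * t)                           # shared slow hemodynamic component
chan_a = slow + 0.5 * rng.standard_normal(t.size)
chan_b = np.roll(slow, 20) + 0.5 * rng.standard_normal(t.size)  # ~2 s delayed copy of the same component

lags, corrs = lagged_xcorr(chan_a, chan_b, max_lag=int(5 * fs))
print(f"peak correlation {corrs.max():.2f} at lag {lags[np.argmax(corrs)] / fs:.1f} s")
```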

  3. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Directory of Open Access Journals (Sweden)

    C. Minaudo

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  4. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Choi Jeonghee

    2008-01-01

    So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to cover less than hundreds of sensor nodes. It is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to the wasteful use of available address space. As there is a growing need for a large-scale WSN, it will be extremely challenging to support more than thousands of nodes, using existing standard bodies. Moreover, it is highly unlikely to change the existing standards, primarily due to backward compatibility issue. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and achieves no additional memory storage during a routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we prove that our approach can achieve two times lesser routing time than the existing standard in a ZigBee network.

  5. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Science.gov (United States)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  6. Pediatric disaster preparedness of a hospital network in a large metropolitan region.

    Science.gov (United States)

    Ferrer, Rizaldy R; Balasuriya, Darshi; Iverson, Ellen; Upperman, Jeffrey S

    2010-01-01

    We describe pediatric-related emergency experiences and responses, disaster preparation and planning, emergency plan execution and evaluation, and hospital pediatric capabilities and vulnerabilities among a disaster response network in a large urban county on the West Coast of the United States. Using semistructured key informant interviews, the authors conducted qualitative research between March and April 2008. Eleven hospitals and a representative from the community clinic association agreed to participate (86 percent response rate) and a total of 22 key informant interviews were completed. Data were analyzed using ATLAS.ti.v.5.0, a qualitative analytical software program. Although hospitals have infrastructure to respond in the event of a large-scale disaster, well-established disaster preparedness plans have not fully accounted for the needs of children. The general hospitals do not anticipate a surge of pediatric victims in the event of a disaster, and they expect that children will be transported to a children's hospital as their conditions become stable. Even hospitals with well-established disaster preparedness plans have not fully accounted for the needs of children during a disaster. Improved communication between disaster network hospitals is necessary as incorrect information still persists.

  7. A large interconnecting network within hybrid MEH-PPV/TiO2 nanorod photovoltaic devices

    International Nuclear Information System (INIS)

    Zeng, T-W; Lin, Y-Y; Lo, H-H; Chen, C-W; Chen, C-H; Liou, S-C; Huang, H-Y; Su, W-F

    2006-01-01

    This is a study of hybrid photovoltaic devices based on TiO2 nanorods and poly[2-methoxy-5-(2'-ethyl-hexyloxy)-1,4-phenylene vinylene] (MEH-PPV). We use TiO2 nanorods as the electron acceptors and conduction pathways. Here we describe how to develop a large interconnecting network within the photovoltaic device fabricated by inserting a layer of TiO2 nanorods between the MEH-PPV:TiO2 nanorod hybrid active layer and the aluminium electrode. The formation of a large interconnecting network provides better connectivity to the electrode, leading to a 2.5-fold improvement in external quantum efficiency as compared to the reference device without the TiO2 nanorod layer. A power conversion efficiency of 2.2% under illumination at 565 nm and a maximum external quantum efficiency of 24% at 430 nm are achieved. A power conversion efficiency of 0.49% is obtained under Air Mass 1.5 illumination.

  8. An Efficient Addressing Scheme and Its Routing Algorithm for a Large-Scale Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Yongwan Park

    2008-12-01

    So far, various addressing and routing algorithms have been extensively studied for wireless sensor networks (WSNs), but many of them were limited to cover less than hundreds of sensor nodes. It is largely due to stringent requirements for fully distributed coordination among sensor nodes, leading to the wasteful use of available address space. As there is a growing need for a large-scale WSN, it will be extremely challenging to support more than thousands of nodes, using existing standard bodies. Moreover, it is highly unlikely to change the existing standards, primarily due to backward compatibility issue. In response, we propose an elegant addressing scheme and its routing algorithm. While maintaining the existing address scheme, it tackles the wastage problem and achieves no additional memory storage during a routing. We also present an adaptive routing algorithm for location-aware applications, using our addressing scheme. Through a series of simulations, we prove that our approach can achieve two times lesser routing time than the existing standard in a ZigBee network.

  9. Low frequency steady-state brain responses modulate large scale functional networks in a frequency-specific means.

    Science.gov (United States)

    Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu

    2016-01-01

    Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (< 0.1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and sensory-motor system, indicating that large scale networks can be modulated in a frequency tagging way. Furthermore, the inter- and intranetwork synchronizations as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much stronger than distinguishing attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it paves a new way to investigate frequency-specific large scale brain activities. © 2015 Wiley Periodicals, Inc.
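
    The frequency-tagging readout described above amounts to measuring spectral power at the tagging (fundamental) frequency and its first harmonic. Here is a minimal sketch of such a readout on a simulated time series; the tagging frequency, 2 s sampling interval, and signal model are illustrative assumptions, not values from the study.

```python
import numpy as np

def power_at(freqs, spectrum, target):
    """Spectral power at the frequency bin closest to a target frequency."""
    return spectrum[np.argmin(np.abs(freqs - target))]

# Simulated task time series: sampled every 2 s, response tagged at f0 = 0.05 Hz (assumed values).
rng = np.random.default_rng(1)
tr, n, f0 = 2.0, 300, 0.05
t = np.arange(n) * tr
signal = (np.sin(2 * np.pi * f0 * t)              # fundamental
          + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)  # first harmonic
          + 0.5 * rng.standard_normal(n))         # noise

spectrum = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=tr)

print("power at fundamental frequency:", round(power_at(freqs, spectrum, f0), 1))
print("power at first harmonic       :", round(power_at(freqs, spectrum, 2 * f0), 1))
```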

  10. A large-scale perspective on stress-induced alterations in resting-state networks

    Science.gov (United States)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed and how it relates to subjective experience are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change, involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with change in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction, revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight on stress-induced neural modulations and their relation to subjective experience.
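
    The parcellation-based univariate analysis described above reduces to testing, for every parcel pair, whether the connectivity value differs between the pre- and post-stress scans across subjects. A minimal sketch on random data follows; the numbers of subjects, parcels, and time points are illustrative assumptions, not those of the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj, n_parcels, n_tp = 20, 30, 200          # assumed sizes, for illustration only

def fc_matrix(ts):
    """Parcel-by-parcel Pearson correlation matrix from a (time, parcel) array."""
    return np.corrcoef(ts, rowvar=False)

pre = np.array([fc_matrix(rng.standard_normal((n_tp, n_parcels))) for _ in range(n_subj)])
post = np.array([fc_matrix(rng.standard_normal((n_tp, n_parcels))) for _ in range(n_subj)])

iu = np.triu_indices(n_parcels, k=1)           # unique parcel pairs
t_vals, p_vals = stats.ttest_rel(post[:, iu[0], iu[1]], pre[:, iu[0], iu[1]], axis=0)
changed = p_vals < 0.05                        # uncorrected threshold, illustration only
print(f"{changed.sum()} of {changed.size} parcel pairs nominally changed")
```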

  11. Prediction of Thermal Environment in a Large Space Using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Hyun-Jung Yoon

    2018-02-01

    Since the thermal environment of large space buildings such as stadiums can vary depending on the location of the stands, it is important to divide them into different zones and evaluate their thermal environment separately. The thermal environment can be evaluated using physical values measured with the sensors, but the occupant density of the stadium stands is high, which limits the locations available to install the sensors. As a method to resolve the limitations of installing the sensors, we propose a method to predict the thermal environment of each zone in a large space. We divided six key thermal factors affecting the thermal environment in a large space into predicted factors (indoor air temperature, mean radiant temperature, and clothing) and fixed factors (air velocity, metabolic rate, and relative humidity). Using artificial neural network (ANN) models and the outdoor air temperature and the surface temperature of the interior walls around the stands as input data, we developed a method to predict the three thermal factors. Learning and verification datasets were established using STAR CCM+ (2016.10, Siemens PLM software, Plano, TX, USA). An analysis of each model’s prediction results showed that the prediction accuracy increased with the number of learning data points. The thermal environment evaluation process developed in this study can be used to control heating, ventilation, and air conditioning (HVAC) facilities in each zone in a large space building with sufficient learning by ANN models at the building testing or the evaluation stage.
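
    A minimal sketch of the kind of ANN surrogate described above: a small multilayer perceptron mapping the outdoor air temperature and interior wall surface temperatures to the three predicted thermal factors. The network size, the number of wall temperatures, and the synthetic training data are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic stand-in for CFD-generated training data: inputs are the outdoor air temperature and
# four interior wall surface temperatures; outputs are the three predicted thermal factors.
X = rng.uniform([-5, 10, 10, 10, 10], [35, 40, 40, 40, 40], size=(1000, 5))
y = np.column_stack([
    0.3 * X[:, 0] + 0.7 * X[:, 1:].mean(axis=1),   # indoor air temperature (degC)
    0.1 * X[:, 0] + 0.9 * X[:, 1:].mean(axis=1),   # mean radiant temperature (degC)
    np.clip(1.2 - 0.02 * X[:, 0], 0.3, 1.2),       # clothing insulation (clo)
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))
```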

  12. Actin and microtubule networks contribute differently to cell response for small and large strains

    Science.gov (United States)

    Kubitschke, H.; Schnauss, J.; Nnetu, K. D.; Warmt, E.; Stange, R.; Kaes, J.

    2017-09-01

    Cytoskeletal filaments provide cells with mechanical stability and organization. The main key players are actin filaments and microtubules governing a cell’s response to mechanical stimuli. We investigated the specific influences of these crucial components by deforming MCF-7 epithelial cells at small (≤5% deformation) and large strains (>5% deformation). To understand specific contributions of actin filaments and microtubules, we systematically studied cellular responses after treatment with cytoskeleton influencing drugs. Quantification with the microfluidic optical stretcher allowed capturing the relative deformation and relaxation of cells under different conditions. We separated distinctive deformational and relaxational contributions to cell mechanics for actin and microtubule networks for two orders of magnitude of drug dosages. Disrupting actin filaments via latrunculin A, for instance, revealed a strain-independent softening. Stabilizing these filaments by treatment with jasplakinolide yielded cell softening for small strains but showed no significant change at large strains. In contrast, cells treated with nocodazole to disrupt microtubules displayed a softening at large strains but remained unchanged at small strains. Stabilizing microtubules within the cells via paclitaxel revealed no significant changes for deformations at small strains, but concentration-dependent impact at large strains. This suggests that for suspended cells, the actin cortex is probed at small strains, while at larger strains, the whole cell is probed with a significant contribution from the microtubules.

  13. LARGE-SCALE MERCURY CONTROL TECHNOLOGY TESTING FOR LIGNITE-FIRED UTILITIES-OXIDATION SYSTEMS FOR WET FGD

    Energy Technology Data Exchange (ETDEWEB)

    Michael J. Holmes; Steven A. Benson; Jeffrey S. Thompson

    2004-03-01

    The Energy & Environmental Research Center (EERC) is conducting a consortium-based effort directed toward resolving the mercury (Hg) control issues facing the lignite industry. Specifically, the EERC team--the EERC, EPRI, URS, ADA-ES, Babcock & Wilcox, the North Dakota Industrial Commission, SaskPower, and the Mercury Task Force, which includes Basin Electric Power Cooperative, Otter Tail Power Company, Great River Energy, Texas Utilities (TXU), Montana-Dakota Utilities Co., Minnkota Power Cooperative, BNI Coal Ltd., Dakota Westmoreland Corporation, and the North American Coal Company--has undertaken a project to significantly and cost-effectively oxidize elemental mercury in lignite combustion gases, followed by capture in a wet scrubber. This approach will be applicable to virtually every lignite utility in the United States and Canada and potentially impact subbituminous utilities. The oxidation process is proven at the pilot-scale and in short-term full-scale tests. Additional optimization is continuing on oxidation technologies, and this project focuses on longer-term full-scale testing. The lignite industry has been proactive in advancing the understanding of and identifying control options for Hg in lignite combustion flue gases. Approximately 1 year ago, the EERC and EPRI began a series of Hg-related discussions with the Mercury Task Force as well as utilities firing Texas and Saskatchewan lignites. This project is one of three being undertaken by the consortium to perform large-scale Hg control technology testing to address the specific needs and challenges to be met in controlling Hg from lignite-fired power plants. This project involves Hg oxidation upstream of a system equipped with an electrostatic precipitator (ESP) followed by wet flue gas desulfurization (FGD). The team involved in conducting the technical aspects of the project includes the EERC, Babcock & Wilcox, URS, and ADA-ES. The host sites include Minnkota Power Cooperative Milton R. Young

  14. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically realized as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this work, we introduce a discrete event-based simulation tool that models the data flow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers, resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error of simulation when comparing the results to a large amount of real-world ope...

  15. Modeling and Validating Time, Buffering, and Utilization of a Large-Scale, Real-Time Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    Data acquisition systems for large-scale high-energy physics experiments have to handle hundreds of gigabytes per second of data, and are typically implemented as specialized data centers that connect a very large number of front-end electronics devices to an event detection and storage system. The design of such systems is often based on many assumptions, small-scale experiments and a substantial amount of over-provisioning. In this paper, we introduce a discrete event-based simulation tool that models the dataflow of the current ATLAS data acquisition system, with the main goal to be accurate with regard to the main operational characteristics. We measure buffer occupancy counting the number of elements in buffers; resource utilization measuring output bandwidth and counting the number of active processing units, and their time evolution by comparing data over many consecutive and small periods of time. We perform studies on the error in simulation when comparing the results to a large amount of real-world ...
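
    The simulation approach described above is, at its core, a discrete event loop in which arrival and completion events update buffer occupancy and the state of the processing units. Below is a generic minimal sketch of such a loop for a single buffer and processing unit; the arrival rate, service time, and simulated duration are arbitrary illustration values, not ATLAS parameters.

```python
import heapq
import random

random.seed(0)
ARRIVAL_RATE = 900.0    # fragments per second (illustrative, not an ATLAS parameter)
SERVICE_TIME = 0.001    # seconds per fragment (illustrative)
SIM_TIME = 10.0         # simulated seconds

events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
buffer_occupancy, busy, samples = 0, False, []

while events:
    t, kind = heapq.heappop(events)
    if t > SIM_TIME:
        break
    if kind == "arrival":
        buffer_occupancy += 1
        heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
    else:                                  # the processing unit finished a fragment
        busy = False
    if not busy and buffer_occupancy > 0:  # hand the next buffered fragment to the idle unit
        buffer_occupancy -= 1
        busy = True
        heapq.heappush(events, (t + SERVICE_TIME, "done"))
    samples.append((t, buffer_occupancy))

print("mean buffer occupancy:", sum(o for _, o in samples) / len(samples))
```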

  16. Large research infrastructures and networking. Two key factors for maintaining nuclear expertise in Europe

    International Nuclear Information System (INIS)

    Cognet, G.; Iracane, D.

    2004-01-01

    Large research infrastructures are of key importance to improve the efficiency and the safety of nuclear energy production. To support present and coming power reactors and fuel cycle facilities and to develop future systems, it is necessary to optimise these infrastructures and their use by taking into account the networking of existing facilities, the access by the European researchers to conduct their own research projects and the creation of new installations when facing ageing issues. Large infrastructures include material testing reactor, hot laboratories for material and fuel under irradiation studies, fuel cycle researches and facilities dedicated to severe accident studies. For example, the CEA severe accident study platform has been recently used by a Bulgarian team to conduct its own research project with a grant provided by the European Commission. Furthermore, because present European material testing reactors are ageing, renewing the irradiation capability is an important and structuring stake for the fission research in Europe in order to continue safe and optimised operations of existing reactors, to support Generation 4 RTD and to keep alive competences. Considering that, CEA has decided to launch the project Jules Horowitz aiming at building a new research reactor. The access to the CEA facilities, including the Jules Horowitz reactor, combined with equivalent possibilities of access to other European facilities through a specific platform would help to develop a long-term vision, to create a coherent and dynamic strategy, to contribute to the stimulation of a large cooperation on nuclear fission, to enable a common approach of safety issues, to gather competencies, to promote the attractiveness of nuclear research to young scientists and to maintain European nuclear expertise at the highest level. This paper intends to provide a view of the existing and needed infrastructures, discuss the ways of access and finally open the discussion on the

  17. Social networks as the context for understanding employment services utilization among homeless youth.

    Science.gov (United States)

    Barman-Adhikari, Anamika; Rice, Eric

    2014-08-01

    Little is known about the factors associated with use of employment services among homeless youth. Social network characteristics have been known to be influential in motivating people's decision to seek services. Traditional theoretical frameworks applied to studies of service use emphasize individual factors over social contexts and interactions. Using key social network, social capital, and social influence theories, this paper developed an integrated theoretical framework that capture the social network processes that act as barriers or facilitators of use of employment services by homeless youth, and understand empirically, the salience of each of these constructs in influencing the use of employment services among homeless youth. We used the "Event based-approach" strategy to recruit a sample of 136 homeless youth at one drop-in agency serving homeless youth in Los Angeles, California in 2008. The participants were queried regarding their individual and network characteristics. Data were entered into NetDraw 2.090 and the spring embedder routine was used to generate the network visualizations. Logistic regression was used to assess the influence of the network characteristics on use of employment services. The study findings suggest that social capital is more significant in understanding why homeless youth use employment services, relative to network structure and network influence. In particular, bonding and bridging social capital were found to have differential effects on use of employment services among this population. The results from this study provide specific directions for interventions aimed to increase use of employment services among homeless youth. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Exploring the utility of quantitative network design in evaluating Arctic sea ice thickness sampling strategies

    OpenAIRE

    Kaminski, T.; Kauker, F.; Eicken, H.; Karcher, M.

    2015-01-01

    We present a quantitative network design (QND) study of the Arctic sea ice-ocean system using a software tool that can evaluate hypothetical observational networks in a variational data assimilation system. For a demonstration, we evaluate two idealised flight transects derived from NASA's Operation IceBridge airborne ice surveys in terms of their potential to improve ten-day to five-month sea-ice forecasts. As target regions for the forecasts we select the Chukchi Sea, a...

  19. Large-scale grid management; Storskala Nettforvaltning

    Energy Technology Data Exchange (ETDEWEB)

    Langdal, Bjoern Inge; Eggen, Arnt Ove

    2003-07-01

    The network companies in the Norwegian electricity industry now have to establish large-scale network management, a concept essentially characterized by (1) broader focus (Broad Band, Multi Utility,...) and (2) bigger units with large networks and more customers. Research done by SINTEF Energy Research shows so far that the approaches within large-scale network management may be structured according to three main challenges: centralization, decentralization and outsourcing. The article is part of a planned series.

  20. Polymer Optical Fiber Sensor and the Prediction of Sensor Response Utilizing Artificial Neural Networks

    Science.gov (United States)

    Haroglu, Derya

    characteristics: reproducibility, accuracy, selectivity, aging, and resolution. Artificial neural network (ANN), a mathematical model formed by mimicking the human nervous system, was used to predict the sensor response. Qwiknet (version 2.23) software was used to develop ANNs and according to the results of Qwiknet the prediction performances for training and testing data sets were 75%, and 83.33% respectively. In this dissertation, Chapter 1 describes the worldwide plastic optical fiber (POF) and fiber optic sensor markets, and the existing textile structures used in fiber optic sensing design particularly for the applications of biomedical and structural health monitoring (SHM). Chapter 2 provides a literature review in detail on polymer optical fibers, fiber optic sensors, and occupancy sensing in the passenger seats of automobiles. Chapter 3 includes the research objectives. Chapter 4 presents the response of POF to tensile loading, bending, and cyclic tensile loading with discussion parts. Chapter 5 includes an e-mail based survey to prioritize customer needs in a Quality Function Deployment (QFD) format utilizing Analytic Hierarchy Process (AHP) and survey results. Chapter 6 describes the POF sensor design and the behavior of it under pressure. Chapter 7 provides a data analysis based on the experimental results of Chapter 6. Chapter 8 presents the summary of this study and recommendations for future work.

  1. Large-size, high-uniformity, random silver nanowire networks as transparent electrodes for crystalline silicon wafer solar cells.

    Science.gov (United States)

    Xie, Shouyi; Ouyang, Zi; Jia, Baohua; Gu, Min

    2013-05-06

    Metal nanowire networks are emerging as next generation transparent electrodes for photovoltaic devices. We demonstrate the application of random silver nanowire networks as the top electrode on crystalline silicon wafer solar cells. The dependence of transmittance and sheet resistance on the surface coverage is measured. Superior optical and electrical properties are observed due to the large-size, highly-uniform nature of these networks. When applying the nanowire networks on the solar cells with an optimized two-step annealing process, we achieved as large as 19% enhancement on the energy conversion efficiency. The detailed analysis reveals that the enhancement is mainly caused by the improved electrical properties of the solar cells due to the silver nanowire networks. Our result reveals that this technology is a promising alternative transparent electrode technology for crystalline silicon wafer solar cells.

  2. Numerical Experiments on Advective Transport in Large Three-Dimensional Discrete Fracture Networks

    Science.gov (United States)

    Makedonska, N.; Painter, S. L.; Karra, S.; Gable, C. W.

    2013-12-01

    Modeling of flow and solute transport in discrete fracture networks is an important approach for understanding the migration of contaminants in impermeable hard rocks such as granite, where fractures provide dominant flow and transport pathways. The discrete fracture network (DFN) model attempts to mimic discrete pathways for fluid flow through a fractured low-permeable rock mass, and may be combined with particle tracking simulations to address solute transport. However, experience has shown that it is challenging to obtain accurate transport results in three-dimensional DFNs because of the high computational burden and difficulty in constructing a high-quality unstructured computational mesh on simulated fractures. An integrated DFN meshing [1], flow, and particle tracking [2] simulation capability that enables accurate flow and particle tracking simulation on large DFNs has recently been developed. The new capability has been used in numerical experiments on advective transport in large DFNs with tens of thousands of fractures and millions of computational cells. The modeling procedure starts from the fracture network generation using a stochastic model derived from site data. A high-quality computational mesh is then generated [1]. Flow is then solved using the highly parallel PFLOTRAN [3] code. PFLOTRAN uses the finite volume approach, which is locally mass conserving and thus eliminates mass balance problems during particle tracking. The flow solver provides the scalar fluxes on each control volume face. From the obtained fluxes the Darcy velocity is reconstructed for each node in the network [4]. Velocities can then be continuously interpolated to any point in the domain of interest, thus enabling random walk particle tracking. In order to describe the flow field on fractures intersections, the control volume cells on intersections are split into four planar polygons, where each polygon corresponds to a piece of a fracture near the intersection line. Thus
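
    Once nodal velocities have been reconstructed and can be interpolated to arbitrary points, the particle tracking itself is an advection update applied to each particle. A minimal 2-D sketch of such an update follows; the analytic velocity field is a toy stand-in for the DFN interpolation, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def velocity(p):
    """Toy 2-D stand-in for velocities interpolated from a DFN flow solution (m/s)."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([1.0 + 0.1 * np.sin(y), 0.1 * np.cos(x)])

def track(particles, dt, n_steps, dispersivity=0.0):
    """Explicit-Euler advection with an optional random-walk dispersion term."""
    path = [particles.copy()]
    for _ in range(n_steps):
        particles = particles + velocity(particles) * dt
        if dispersivity > 0.0:
            particles = particles + np.sqrt(2.0 * dispersivity * dt) * rng.standard_normal(particles.shape)
        path.append(particles.copy())
    return np.array(path)

start = np.zeros((100, 2))                                   # 100 particles released at the origin
paths = track(start, dt=0.1, n_steps=500, dispersivity=0.0)  # purely advective, as in the study
print("mean travel distance:", round(np.linalg.norm(paths[-1] - paths[0], axis=1).mean(), 2))
```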

  3. Cosplicing network analysis of mammalian brain RNA-Seq data utilizing WGCNA and Mantel correlations

    Directory of Open Access Journals (Sweden)

    Ovidiu Dan Iancu

    2015-05-01

    Across species and tissues and especially in the mammalian brain, production of gene isoforms is widespread. While gene expression coordination has been previously described as a scale-free coexpression network, the properties of transcriptome-wide isoform production coordination have been less studied. Here we evaluate the system-level properties of cosplicing in mouse, macaque and human brain gene expression data using a novel network inference procedure. Genes are represented as vectors/lists of exon counts, and distance measures sensitive to exon inclusion rates quantify differences across samples. For all gene pairs, distance matrices are correlated across samples, resulting in cosplicing or co-transcriptional network matrices. We show that networks including cosplicing information are scale-free and distinct from coexpression. In the networks capturing cosplicing we find a set of novel hubs with unique characteristics distinguishing them from coexpression hubs: heavy representation in neurobiological functional pathways, strong overlap with markers of neurons and neuroglia, long coding lengths, and high number of both exons and annotated transcripts. Further, the cosplicing hubs are enriched in genes associated with autism spectrum disorders. Cosplicing hub homologs across eukaryotes show dramatically increasing intronic lengths but stable coding region lengths. Shared transcription factor binding sites increase coexpression but not cosplicing; the reverse is true for splicing-factor binding sites. Genes with protein-protein interactions have strong coexpression and cosplicing. Additional factors affecting the networks include shared microRNA binding sites, spatial colocalization within the striatum, and sharing a chromosomal folding domain. Cosplicing network patterns remain relatively stable across species.
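
    The cosplicing edge weight described above is essentially a Mantel-style correlation between two genes' sample-by-sample distance matrices computed from exon counts. A minimal sketch on random exon-count data follows; the sample and exon numbers are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n_samples = 40                                     # illustrative sample count

def sample_distance_matrix(exon_counts):
    """Pairwise sample distances based on exon inclusion proportions (samples x exons input)."""
    props = exon_counts / exon_counts.sum(axis=1, keepdims=True)
    return squareform(pdist(props, metric="euclidean"))

def mantel_r(d1, d2):
    """Correlation between the upper triangles of two sample-distance matrices."""
    iu = np.triu_indices_from(d1, k=1)
    return pearsonr(d1[iu], d2[iu])[0]

gene_a = rng.poisson(20, size=(n_samples, 8))      # gene with 8 exons
gene_b = rng.poisson(20, size=(n_samples, 5))      # gene with 5 exons

r = mantel_r(sample_distance_matrix(gene_a), sample_distance_matrix(gene_b))
print("cosplicing correlation between gene_a and gene_b:", round(r, 3))
```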

  4. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    Directory of Open Access Journals (Sweden)

    Nandakumaran Nadarajah

    2018-04-01

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by the multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances less than 30 km. In case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.

  5. Social management of laboratory rhesus macaques housed in large groups using a network approach: A review.

    Science.gov (United States)

    McCowan, Brenda; Beisner, Brianne; Hannibal, Darcy

    2017-12-07

    Biomedical facilities across the nation and worldwide aim to develop cost-effective methods for the reproductive management of macaque breeding groups, typically by housing macaques in large, multi-male multi-female social groups that provide monkey subjects for research as well as appropriate socialization for their psychological well-being. One of the most difficult problems in managing socially housed macaques is their propensity for deleterious aggression. From a management perspective, deleterious aggression (as opposed to less intense aggression that serves to regulate social relationships) is undoubtedly the most problematic behavior observed in group-housed macaques, which can readily escalate to the degree that it causes social instability, increases serious physical trauma leading to group dissolution, and reduces psychological well-being. Thus for both welfare and other management reasons, aggression among rhesus macaques at primate centers and facilities needs to be addressed with a more proactive approach. Management strategies need to be instituted that maximize social housing while also reducing problematic social aggression due to instability using efficacious methods for detection and prevention in the most cost effective manner. Herein we review a new proactive approach using social network analysis to assess and predict deleterious aggression in macaque groups. We discovered three major pathways leading to instability, such as unusually high rates and severity of trauma and social relocations. These pathways are linked either directly or indirectly to network structure in rhesus macaque societies. We define these pathways according to the key intrinsic and extrinsic variables (e.g., demographic, genetic or social factors) that influence network and behavioral measures of stability (see Fig. 1). They are: (1) presence of natal males, (2) matrilineal genetic fragmentation, and (3) the power structure and conflict policing behavior supported by this

  6. Performance Evaluation of Hadoop-based Large-scale Network Traffic Analysis Cluster

    Directory of Open Access Journals (Sweden)

    Tao Ran

    2016-01-01

    As Hadoop has gained popularity in the big data era, it is widely used in various fields. The self-designed and self-developed large-scale network traffic analysis cluster works well based on Hadoop, with off-line applications running on it to analyze the massive network traffic data. To evaluate the performance of the analysis cluster scientifically and reasonably, we propose a performance evaluation system. First, we take the execution times of three benchmark applications as the performance benchmark and pick 40 metrics of customized statistical resource data. Then we identify the relationship between the resource data and the execution times with a statistical modeling approach composed of principal component analysis and multiple linear regression. After training models on historical data, we can predict the execution times from current resource data. Finally, we evaluate the performance of the analysis cluster by the validated prediction of execution times. Experimental results show that the execution times predicted by the trained models are within an acceptable error range, and the performance evaluation results are accurate and reliable.
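
    The statistical modeling step combines principal component analysis of the resource metrics with multiple linear regression against execution times. A minimal sketch of that pipeline on synthetic data follows; only the count of 40 metrics comes from the abstract, and all other numbers are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n_runs, n_metrics = 200, 40        # 40 resource metrics as in the abstract; the run count is illustrative

X = rng.normal(size=(n_runs, n_metrics))                                   # resource data per benchmark run
y = X @ rng.normal(size=n_metrics) + rng.normal(scale=0.5, size=n_runs)    # execution time (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=10), LinearRegression())
model.fit(X_tr, y_tr)
print("R^2 for predicted execution times on held-out runs:", round(model.score(X_te, y_te), 3))
```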

  7. Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration

    Science.gov (United States)

    Tsvetkova, Milena; García-Gavilanes, Ruth; Yasseri, Taha

    2016-11-01

    Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded and this makes them hard for scientists to study. In an attempt to understand the structural and temporal features of negative interactions in the community, we use complex network methods to analyze patterns in the timing and configuration of reverts of article edits to Wikipedia. We investigate how often and how fast pairs of reverts occur compared to a null model in order to control for patterns that are natural to the content production or are due to the internal rules of Wikipedia. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the involved editors. Even though the individual reverts might not necessarily be negative social interactions, our analysis points to the existence of certain patterns of negative social dynamics within the community of editors. Some of these patterns have not been previously explored and carry implications for the knowledge collection practice conducted on Wikipedia. Our method can be applied to other large-scale temporal collaboration networks to identify the existence of negative social interactions and other social processes.

  8. Traffic Flow Prediction Model for Large-Scale Road Network Based on Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhaosheng Yang

    2014-01-01

    To increase the efficiency and precision of large-scale road network traffic flow prediction, a genetic algorithm-support vector machine (GA-SVM) model based on cloud computing is proposed in this paper, which is based on the analysis of the characteristics and defects of genetic algorithm and support vector machine. In cloud computing environment, firstly, SVM parameters are optimized by the parallel genetic algorithm, and then this optimized parallel SVM model is used to predict traffic flow. On the basis of the traffic flow data of Haizhu District in Guangzhou City, the proposed model was verified and compared with the serial GA-SVM model and parallel GA-SVM model based on MPI (message passing interface). The results demonstrate that the parallel GA-SVM model based on cloud computing has higher prediction accuracy, shorter running time, and higher speedup.
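
    A minimal sketch of the GA-SVM idea: a small genetic loop that searches over SVR hyperparameters (C, gamma) using cross-validated error as the fitness. The data set, population size, and mutation settings are illustrative assumptions, and the cloud/MPI parallelization discussed in the paper is not shown.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Synthetic "traffic flow" regression problem: feature vector -> flow volume.
X = rng.normal(size=(300, 6))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=300)

def fitness(log_c, log_gamma):
    """Cross-validated negative MSE of an SVR with the candidate hyperparameters."""
    svr = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(svr, X, y, cv=3, scoring="neg_mean_squared_error").mean()

pop = rng.uniform([-1.0, -3.0], [3.0, 1.0], size=(12, 2))   # individuals encode (log10 C, log10 gamma)
for _ in range(10):                                         # a few GA generations
    scores = np.array([fitness(*ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                  # selection: keep the best half
    children = parents[rng.integers(0, 6, size=6)] + rng.normal(scale=0.2, size=(6, 2))  # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(*ind) for ind in pop])]
print("best (log10 C, log10 gamma):", np.round(best, 2))
```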

  9. Introduction to focus issue: Synchronization in large networks and continuous media—data, models, and supermodels

    Science.gov (United States)

    Duane, Gregory S.; Grabow, Carsten; Selten, Frank; Ghil, Michael

    2017-12-01

    The synchronization of loosely coupled chaotic systems has increasingly found applications to large networks of differential equations and to models of continuous media. These applications are at the core of the present Focus Issue. Synchronization between a system and its model, based on limited observations, gives a new perspective on data assimilation. Synchronization among different models of the same system defines a supermodel that can achieve partial consensus among models that otherwise disagree in several respects. Finally, novel methods of time series analysis permit a better description of synchronization in a system that is only observed partially and for a relatively short time. This Focus Issue discusses synchronization in extended systems or in components thereof, with particular attention to data assimilation, supermodeling, and their applications to various areas, from climate modeling to macroeconomics.

  10. Introduction to focus issue: Synchronization in large networks and continuous media-data, models, and supermodels.

    Science.gov (United States)

    Duane, Gregory S; Grabow, Carsten; Selten, Frank; Ghil, Michael

    2017-12-01

    The synchronization of loosely coupled chaotic systems has increasingly found applications to large networks of differential equations and to models of continuous media. These applications are at the core of the present Focus Issue. Synchronization between a system and its model, based on limited observations, gives a new perspective on data assimilation. Synchronization among different models of the same system defines a supermodel that can achieve partial consensus among models that otherwise disagree in several respects. Finally, novel methods of time series analysis permit a better description of synchronization in a system that is only observed partially and for a relatively short time. This Focus Issue discusses synchronization in extended systems or in components thereof, with particular attention to data assimilation, supermodeling, and their applications to various areas, from climate modeling to macroeconomics.

  11. Fault Detection for Large-Scale Railway Maintenance Equipment Base on Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Junfu Yu

    2014-04-01

    Focusing on fault detection for large-scale railway maintenance equipment, with requirements of low cost, energy efficiency, and collection of data from the function units, this paper proposes an energy-efficient, easily installed fault detection application using ZigBee wireless sensor networks; ZigBee is the most widely used protocol based on IEEE 802.15.4. The paper presents a systematic application from the hardware design, using STM32F103 chips as processors, to the software system. Fault detection is the basic part of the fault diagnosis system: wireless sensor nodes equipped with different kinds of sensors for the various function units communicate via ZigBee, collecting and sending basic working status data to the home gateway, from which the data are sent to the fault diagnosis system.

  12. Generating functional analysis of complex formation and dissociation in large protein interaction networks

    International Nuclear Information System (INIS)

    Coolen, A C C; Rabello, S

    2009-01-01

    We analyze large systems of interacting proteins, using techniques from the non-equilibrium statistical mechanics of disordered many-particle systems. Apart from protein production and removal, the most relevant microscopic processes in the proteome are complex formation and dissociation, and the microscopic degrees of freedom are the evolving concentrations of unbound proteins (in multiple post-translational states) and of protein complexes. Here we only include dimer-complexes, for mathematical simplicity, and we draw the network that describes which proteins are reaction partners from an ensemble of random graphs with an arbitrary degree distribution. We show how generating functional analysis methods can be used successfully to derive closed equations for dynamical order parameters, representing an exact macroscopic description of the complex formation and dissociation dynamics in the infinite system limit. We end this paper with a discussion of the possible routes towards solving the nontrivial order parameter equations, either exactly (in specific limits) or approximately.
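
    The microscopic processes named above (production, removal, dimer-complex formation and dissociation) can be illustrated, for a single protein pair, by deterministic mass-action rate equations; the rate symbols below are generic placeholders, not the paper's notation, and the generating functional analysis itself treats a whole random network of such reactions rather than a single pair.

```latex
\begin{align*}
  \frac{d[A]}{dt}  &= p_A - d_A\,[A] - k_{+}[A][B] + k_{-}[AB],\\
  \frac{d[B]}{dt}  &= p_B - d_B\,[B] - k_{+}[A][B] + k_{-}[AB],\\
  \frac{d[AB]}{dt} &= k_{+}[A][B] - k_{-}[AB].
\end{align*}
```

    Here $p$ and $d$ are production and removal rates, and $k_{+}$, $k_{-}$ are the complex formation and dissociation rates.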

  13. Radiation protection of radioactively contaminated large areas by phytoremediation and subsequent utilization of the contaminated plant residues (PHYTOREST)

    International Nuclear Information System (INIS)

    Mirgorodsky, Daniel; Ollivier, Delphine; Merten, Dirk; Bergmann, Hans; Buechel, Georg; Willscher, Sabine; Wittig, Juliane; Jablonski, Lukasz; Werner, Peter

    2010-01-01

    Much progress has been achieved over the past 20 years in remediating sites contaminated by heavy metals. However, very large contaminated areas have presented major problems to this day because of remediation costs. Phytoremediation is a new, emerging, sustainable technique of remediating areas with low heavy-metal contamination. One advantage of phytoremediation is the comparatively low cost of the process, which may make it usable also on large areas with low levels of contamination. Besides extracting and immobilizing metals, phytoremediation also contributes, among other things, to improving soil quality in terms of physics, chemistry, and ecology. Consequently, phytoremediation offers a great potential for the future. Research into phytoremediation of an area contaminated by heavy metals and radionuclides is carried out on a site in a former uranium mining district in Eastern Thuringia jointly by the Friedrich Schiller University, Jena, and the Technical University of Dresden in a project funded by the German Federal Ministry for Education and Research. The project serves to promote the introduction of soft, biocompatible methods of long-term remediation and to develop conceptual solutions to the subsequent utilization of contaminated plant residues. Optimizing area management is in the focus of phytoremediation studies. (orig.)

  14. Atypical language laterality is associated with large-scale disruption of network integration in children with intractable focal epilepsy.

    Science.gov (United States)

    Ibrahim, George M; Morgan, Benjamin R; Doesburg, Sam M; Taylor, Margot J; Pang, Elizabeth W; Donner, Elizabeth; Go, Cristina Y; Rutka, James T; Snead, O Carter

    2015-04-01

    Epilepsy is associated with disruption of integration in distributed networks, together with altered localization for functions such as expressive language. The relation between atypical network connectivity and altered localization is unknown. In the current study we tested whether atypical expressive language laterality was associated with the alteration of large-scale network integration in children with medically-intractable localization-related epilepsy (LRE). Twenty-three right-handed children (age range 8-17) with medically-intractable LRE performed a verb generation task during fMRI. Language network activation was identified and the laterality index (LI) was calculated within the pars triangularis and pars opercularis. Resting-state data from the same cohort were subjected to independent component analysis. Dual regression was used to identify associations between resting-state integration and LI values. Higher positive values of the LI, indicating typical language localization, were associated with stronger functional integration of various networks including the default mode network (DMN). The normally symmetric resting-state networks showed a pattern of lateralized connectivity mirroring that of language function. The association between atypical language localization and network integration implies a widespread disruption of neural network development. These findings may inform the interpretation of localization studies by providing novel insights into reorganization of neural networks in epilepsy. Copyright © 2015 Elsevier Ltd. All rights reserved.
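
For readers unfamiliar with the laterality index, the toy computation below shows the usual count-based definition, LI = (L - R)/(L + R), over suprathreshold voxels in homologous left and right regions of interest; the z-threshold and the synthetic activation values are assumptions for illustration, not the study's processing pipeline.

```python
import numpy as np

def laterality_index(left_vals, right_vals, z_threshold=2.3):
    """Count-based laterality index LI = (L - R) / (L + R).

    L and R are the numbers of suprathreshold voxels in homologous left and
    right regions of interest; the z-threshold is an illustrative assumption.
    """
    L = np.count_nonzero(np.asarray(left_vals) > z_threshold)
    R = np.count_nonzero(np.asarray(right_vals) > z_threshold)
    return (L - R) / (L + R) if (L + R) else 0.0

# Synthetic z-statistics for a left-lateralized case (illustrative values only)
rng = np.random.default_rng(0)
li = laterality_index(rng.normal(2.0, 1.0, 5000), rng.normal(1.0, 1.0, 5000))
print(f"LI = {li:+.2f}   (> 0 suggests typical, left-hemisphere language)")
```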

  15. Large-Scale Functional Brain Network Abnormalities in Alzheimer’s Disease: Insights from Functional Neuroimaging

    Directory of Open Access Journals (Sweden)

    Bradford C. Dickerson

    2009-01-01

    Full Text Available Functional MRI (fMRI) studies of mild cognitive impairment (MCI) and Alzheimer’s disease (AD) have begun to reveal abnormalities in large-scale memory and cognitive brain networks. Since the medial temporal lobe (MTL) memory system is a site of very early pathology in AD, a number of studies have focused on this region of the brain. Yet it is clear that other regions of the large-scale episodic memory network are affected early in the disease as well, and fMRI has begun to illuminate functional abnormalities in frontal, temporal, and parietal cortices in MCI and AD. Besides predictable hypoactivation of brain regions as they accrue pathology and undergo atrophy, there are also areas of hyperactivation in brain memory and cognitive circuits, possibly representing attempted compensatory activity. Recent fMRI data in MCI and AD are beginning to reveal relationships between abnormalities of functional activity in the MTL memory system and in functionally connected brain regions, such as the precuneus. Additional work with “resting state” fMRI data is illuminating functional-anatomic brain circuits and their disruption by disease. As this work continues to mature, it will likely contribute to our understanding of fundamental memory processes in the human brain and how these are perturbed in memory disorders. We hope these insights will translate into the incorporation of measures of task-related brain function into diagnostic assessment or therapeutic monitoring, which will hopefully one day be useful for demonstrating beneficial effects of treatments being tested in clinical trials.

  16. Interference Calculus A General Framework for Interference Management and Network Utility Optimization

    CERN Document Server

    Schubert, Martin

    2012-01-01

    This book develops a mathematical framework for modeling and optimizing interference-coupled multiuser systems. At the core of this framework is the concept of general interference functions, which provides a simple means of characterizing interdependencies between users. The entire analysis builds on two core axioms: scale-invariance and monotonicity. The proposed network calculus has its roots in power control theory and wireless communications. It adds theoretical tools for analyzing the typical behavior of interference-coupled networks. In this way it complements existing game-theoretic approaches. The framework should also be viewed in conjunction with optimization theory. There is a fruitful interplay between the theory of interference functions and convex optimization theory. By jointly exploiting the properties of interference functions, it is possible to design algorithms that outperform general-purpose techniques that only exploit convexity. The title “network calculus” refers to the fact tha...
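
As a concrete taste of the fixed-point machinery that such interference functions enable, the sketch below runs the classic SINR-target power-control iteration, which is monotone and scale-invariant in the sense of the axioms mentioned above; the link gains, SINR targets, and noise level are illustrative assumptions and the code is not taken from the book.

```python
import numpy as np

# Toy SINR-target power control as a fixed-point iteration on a standard
# interference function: p_k <- (gamma_k / g_kk) * (sum_{j != k} g_kj p_j + noise).
# Link gains, SINR targets and noise power are illustrative assumptions.
rng = np.random.default_rng(1)
K = 4
G = rng.uniform(0.01, 0.1, (K, K))        # cross-link gains
np.fill_diagonal(G, 1.0)                   # direct-link gains
gamma = np.full(K, 2.0)                    # SINR targets
noise = 0.1

p = np.ones(K)
for _ in range(200):
    interference = G @ p - np.diag(G) * p + noise
    p_next = gamma * interference / np.diag(G)
    if np.max(np.abs(p_next - p)) < 1e-9:  # monotone iteration has converged
        p = p_next
        break
    p = p_next

sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print("transmit powers:", np.round(p, 3))
print("achieved SINRs: ", np.round(sinr, 2))
```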

  17. Gift-giving and network structure in rural China: utilizing long-term spontaneous gift records.

    Science.gov (United States)

    Chen, Xi

    2014-01-01

    The tradition of keeping written records of gifts received during household ceremonies in many countries offers researchers an underutilized means of data collection for social network analysis. This paper first summarizes unique features of the gift record data that circumvent five prevailing sampling and measurement issues in the literature, and discusses their advantages over existing studies at both the individual level and the dyadic link level using previous data sources. We then document our research project in rural China that implements a multiple-wave, census-type household survey and a long-term gift record collection. The pattern of gift-giving at major household social events and its recent escalation is analyzed. There are significantly positive correlations between gift network centrality and various forms of informal insurance. Finally, economic inequality and a competitive marriage market are among the main demographic and socioeconomic determinants of the observed gift network structure.

  18. Correction: Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4

    DEFF Research Database (Denmark)

    Jensen, Søren Højgaard; Graves, Christopher R.; Mogensen, Mogens Bjerg

    2017-01-01

    Correction for ‘Large-scale electricity storage utilizing reversible solid oxide cells combined with underground storage of CO2 and CH4’ by S. H. Jensen et al., Energy Environ. Sci., 2015, 8, 2471–2479.

  19. Effective Utilization of Resources and Infrastructure for a Spaceport Network Architecture

    Science.gov (United States)

    Gill, Tracy; Larson, Wiley; Mueller, Robert; Roberson, Luke

    2012-01-01

    Providing routine, affordable access to a variety of orbital and deep space destinations requires an intricate network of ground, planetary surface, and space-based spaceports like those on Earth (land and sea), in various Earth orbits, and on other extraterrestrial surfaces. Advancements in technology and international collaboration are critical to establish a spaceport network that satisfies the requirements for private and government research, exploration, and commercial objectives. Technologies, interfaces, assembly techniques, and protocols must be adapted to enable mission critical capabilities and interoperability throughout the spaceport network. The conceptual space mission architecture must address the full range of required spaceport services, from managing propellants for a variety of spacecraft to governance structure. In order to accomplish affordability and sustainability goals, the network architecture must consider deriving propellants from in situ planetary resources to the maximum extent possible. Water on the Moon and Mars, Mars' atmospheric CO2, and O2 extracted from lunar regolith are examples of in situ resources that could be used to generate propellants for various spacecraft, orbital stages and trajectories, and the commodities to support habitation and human operations at these destinations. The ability to use in-space fuel depots containing in situ derived propellants would drastically reduce the mass required to launch long-duration or deep space missions from Earth's gravity well. Advances in transformative technologies and common capabilities, interfaces, umbilicals, commodities, protocols, and agreements will facilitate a cost-effective, safe, reliable infrastructure for a versatile network of Earth- and extraterrestrial spaceports. Defining a common infrastructure on Earth, planetary surfaces, and in space, as well as deriving propellants from in situ planetary resources to construct in-space propellant depots to serve the spaceport

  20. Local health care system utilizing the LPG (liquid propane gas) network.

    Science.gov (United States)

    Umemoto, T; Hoshi, H; Tsuda, M; Horio, S; Itou, N; Neriki, T

    1998-07-01

    JAC's LPG monitoring network system is mainly provided in mountain villages. However, by using this system, it will be possible to start a Digital Network Program for the Elderly while maintaining superior economic feasibility and public benefit using existing information infrastructures. This project also has the capabilities for the creation of a fire/disaster monitoring system, as well as a health care system by using conventional LPG monitoring systems. Telemedicine is an option for the future, as well, by connecting medical equipment and a tele-conferencing system.

  1. Utilizing Social Network Services for Enhanced Communication with Elderly Living at Home

    DEFF Research Database (Denmark)

    Wagner, Stefan

    2009-01-01

    This paper discusses whether social network services, like Facebook and Twitter, may be used by elderly living in their own homes to enhance communication with their relatives and friends. It introduces a prototype solution based on the iRobot Roomba 560, iRobot, USA, robot vacuum cleaner, which...... has been enhanced with Facebook and Twitter communication capabilities. The paper points out a number of other relevant applications where the use of social network services may provide better communication for ambient assisted living solutions and intelligent environments....

  2. Microbial network for waste activated sludge cascade utilization in an integrated system of microbial electrolysis and anaerobic fermentation

    DEFF Research Database (Denmark)

    Liu, Wenzong; He, Zhangwei; Yang, Chunxue

    2016-01-01

    in an integrated system of microbial electrolysis cell (MEC) and anaerobic digestion (AD) for waste activated sludge (WAS). Microbial communities in the integrated system would build a thorough energetic and metabolic interaction network regarding fermentation communities and electrode respiring communities...... to Firmicutes (Acetoanaerobium, Acetobacterium, and Fusibacter) showed a synergistic relationship with exoelectrogens in the degradation of complex organic matter or recycling of MEC products (H2). High protein and polysaccharide but low fatty acid content led to the dominance of Proteiniclasticum...... biofilm. The overall performance of WAS cascade utilization was substantially related to the microbial community structures, which in turn depended on the initial pretreatment to enhance WAS fermentation. It is worth noting that species in AD and MEC communities are able to build complex networks...

  3. Practice Innovation, Health Care Utilization and Costs in a Network of Federally Qualified Health Centers and Hospitals for Medicaid Enrollees.

    Science.gov (United States)

    Johnson, Tricia J; Jones, Art; Lulias, Cheryl; Perry, Anthony

    2018-06-01

    State Medicaid programs need cost-effective strategies to provide high-quality care that is accessible to individuals with low incomes and limited resources. Integrated delivery systems have been formed to provide care across the continuum, but creating a shared vision for improving community health can be challenging. Medical Home Network was created as a network of primary care providers and hospital systems providing care to Medicaid enrollees, guided by the principles of egalitarian governance, practice-level care coordination, real-time electronic alerts, and pay-for-performance incentives. This analysis of health care utilization and costs included 1,189,195 Medicaid enrollees. After implementation of Medical Home Network, a risk-adjusted increase of $9.07 or 4.3% per member per month was found over the 2 years of implementation compared with an increase of $17.25 or 9.3% per member per month, before accounting for the cost of care management fees and other financial incentives, for Medicaid enrollees within the same geographic area with a primary care provider outside of Medical Home Network. After accounting for care coordination fees paid to providers, the net risk-adjusted cost reduction was $11.0 million.

  4. Autonomous construction agents: An investigative framework for large sensor network self-management

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Joshua Bruce [Iowa State Univ., Ames, IA (United States)

    2008-01-01

    Recent technological advances have made it cost effective to utilize massive, heterogeneous sensor networks. To gain appreciable value from these informational systems, there must be a control scheme that coordinates information flow to produce meaningful results. This paper will focus on tools developed to manage the coordination of autonomous construction agents using stigmergy, in which a set of basic low-level rules are implemented through various environmental cues. Using VE-Suite, an open-source virtual engineering software package, an interactive environment is created to explore various informational configurations for the construction problem. A simple test case is developed within the framework, and construction times are analyzed for possible functional relationships pertaining to performance of a particular set of parameters and a given control process. Initial experiments for the test case show sensor saturation occurs relatively quickly with 5-7 sensors, and construction time is generally independent of sensor range except for small numbers of sensors. Further experiments using this framework are needed to define other aspects of sensor performance. These trends can then be used to help decide what kinds of sensing capabilities are required to simultaneously achieve the most cost-effective solution and provide the required value of information when applied to the development of real world sensor applications.
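
The stigmergy idea of coordinating agents purely through environmental cues can be illustrated in a few lines of code. The sketch below is a generic grid-world version with invented rules and parameters, not the VE-Suite construction test case: agents random-walk, and a cell is built on only when its neighbours already carry enough marker.

```python
import random

# Generic stigmergy sketch (rules and parameters invented for illustration):
# agents random-walk on a toroidal grid, and a cell is built on only when its
# neighbours already carry enough "marker", so coordination emerges from the
# environment rather than from direct agent-to-agent messages.
random.seed(2)
SIZE, AGENTS, STEPS = 20, 5, 2000
marker = [[0.0] * SIZE for _ in range(SIZE)]
built = [[False] * SIZE for _ in range(SIZE)]
built[SIZE // 2][SIZE // 2] = True              # seed block
marker[SIZE // 2][SIZE // 2] = 1.0
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(AGENTS)]

def neighbours(x, y):
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

for _ in range(STEPS):
    for i, (x, y) in enumerate(agents):
        cue = sum(marker[nx][ny] for nx, ny in neighbours(x, y))
        if not built[x][y] and cue > 0.5:       # local rule: build next to markers
            built[x][y] = True
            marker[x][y] += 1.0
        agents[i] = random.choice(neighbours(x, y))   # move to a random neighbour

print("blocks placed:", sum(map(sum, built)))
```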

  5. Efficient Utilization of Hierarchical iJTAG Networks for Interrupts Management

    NARCIS (Netherlands)

    Ibrahim, Ahmed Mohammed Youssef; Kerkhoff, Hans G.

    2016-01-01

    Modern systems-on-chip rely on embedded instruments for testing and debugging; the same instruments can also be used for managing the lifetime dependability of the chips. The IEEE 1687 (iJTAG) standard introduces an access network to the instruments based on reconfigurable scan paths. During lifetime,

  6. Common mycorrhizal networks amplify competition by preferential mineral nutrient allocation to large host plants.

    Science.gov (United States)

    Weremijewicz, Joanna; Sternberg, Leonel da Silveira Lobo O'Reilly; Janos, David P

    2016-10-01

    Arbuscular mycorrhizal (AM) fungi interconnect plants in common mycorrhizal networks (CMNs) which can amplify competition among neighbors. Amplified competition might result from the fungi supplying mineral nutrients preferentially to hosts that abundantly provide fixed carbon, as suggested by research with organ-cultured roots. We examined whether CMNs supplied 15N preferentially to large, nonshaded, whole plants. We conducted an intraspecific target-neighbor pot experiment with Andropogon gerardii and several AM fungi in intact, severed or prevented CMNs. Neighbors were supplied 15N, and half of the target plants were shaded. Intact CMNs increased target dry weight (DW), intensified competition and increased size inequality. Shading decreased target weight, but shaded plants in intact CMNs had mycorrhizal colonization similar to that of sunlit plants. AM fungi in intact CMNs acquired 15N from the substrate of neighbors and preferentially allocated it to sunlit, large, target plants. Sunlit, intact CMN, target plants acquired as much as 27% of their nitrogen from the vicinity of their neighbors, but shaded targets did not. These results suggest that AM fungi in CMNs preferentially provide mineral nutrients to those conspecific host individuals best able to provide them with fixed carbon or representing the strongest sinks, thereby potentially amplifying asymmetric competition below ground. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.

  7. a Stochastic Approach to Multiobjective Optimization of Large-Scale Water Reservoir Networks

    Science.gov (United States)

    Bottacin-Busolin, A.; Worman, A. L.

    2013-12-01

    A main challenge for the planning and management of water resources is the development of multiobjective strategies for operation of large-scale water reservoir networks. The optimal sequence of water releases from multiple reservoirs depends on the stochastic variability of correlated hydrologic inflows and on various processes that affect water demand and energy prices. Although several methods have been suggested, large-scale optimization problems arising in water resources management are still plagued by the high dimensional state space and by the stochastic nature of the hydrologic inflows. In this work, the optimization of reservoir operation is approached using approximate dynamic programming (ADP) with policy iteration and function approximators. The method is based on an off-line learning process in which operating policies are evaluated for a number of stochastic inflow scenarios, and the resulting value functions are used to design new, improved policies until convergence is attained. A case study is presented of a multi-reservoir system in the Dalälven River, Sweden, which includes 13 interconnected reservoirs and 36 power stations. Depending on the late spring and summer peak discharges, the lowlands adjacent to Dalälven can often be flooded during the summer period, and the presence of stagnating floodwater during the hottest months of the year is the cause of a large proliferation of mosquitos, which is a major problem for the people living in the surroundings. Chemical pesticides are currently being used as a preventive countermeasure, which do not provide an effective solution to the problem and have adverse environmental impacts. In this study, ADP was used to analyze the feasibility of alternative operating policies for reducing the flood risk at a reasonable economic cost for the hydropower companies. To this end, mid-term operating policies were derived by combining flood risk reduction with hydropower production objectives. The performance
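
To make the optimization setting more concrete, the sketch below applies a stripped-down approximate dynamic programming scheme to a single toy reservoir: storage is discretized, inflows are sampled from an assumed gamma distribution, and the reward trades hydropower revenue against a flood penalty. The multi-reservoir, policy-iteration formulation of the study is far richer; everything here is an illustrative assumption.

```python
import numpy as np

# Toy approximate dynamic programming for a single reservoir (illustrative
# assumptions throughout): discretized storage, sampled gamma-distributed
# inflows, reward = hydropower revenue minus a flood penalty above a storage
# threshold. Sampled value iteration stands in for the policy-iteration ADP
# with function approximators used on the 13-reservoir network.
rng = np.random.default_rng(3)
storage = np.linspace(0.0, 100.0, 41)                 # storage grid (state)
releases = np.linspace(0.0, 30.0, 11)                 # release grid (action)
inflows = rng.gamma(shape=2.0, scale=5.0, size=100)   # inflow scenarios
gamma, flood_level = 0.95, 80.0

def reward(s, r):
    return 1.0 * r - 5.0 * max(0.0, s - flood_level)  # revenue minus flood cost

V = np.zeros_like(storage)
for _ in range(150):                                  # sampled Bellman backups
    V_new = np.empty_like(V)
    for i, s in enumerate(storage):
        values = []
        for r in releases:
            if r > s:                                 # cannot release more than stored
                continue
            s_next = np.clip(s - r + inflows, 0.0, 100.0)
            values.append(reward(s, r) + gamma * np.interp(s_next, storage, V).mean())
        V_new[i] = max(values)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

print("approximate value of a half-full reservoir:",
      round(float(np.interp(50.0, storage, V)), 1))
```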

  8. Analysis of a utility-interactive wind-photovoltaic hybrid system with battery storage using neural network

    Science.gov (United States)

    Giraud, Francois

    1999-10-01

    This dissertation investigates the application of neural network theory to the analysis of a 4-kW Utility-interactive Wind-Photovoltaic System (WPS) with battery storage. The hybrid system comprises a 2.5-kW photovoltaic generator and a 1.5-kW wind turbine. The wind power generator produces power at variable speed and variable frequency (VSVF). The wind energy is converted into dc power by a controlled, three-phase, full-wave, bridge rectifier. The PV power is maximized by a Maximum Power Point Tracker (MPPT), a dc-to-dc chopper, switching at a frequency of 45 kHz. The whole dc power of both subsystems is stored in the battery bank or conditioned by a single-phase self-commutated inverter to be sold to the utility at a predetermined amount. First, the PV is modeled using an Artificial Neural Network (ANN). To reduce model uncertainty, the open-circuit voltage VOC and the short-circuit current ISC of the PV are chosen as model input variables of the ANN. These input variables have the advantage of incorporating the effects of the quantifiable and non-quantifiable environmental variants affecting the PV power. Then, a simplified way to predict accurately the dynamic responses of the grid-linked WPS to gusty winds using a Recurrent Neural Network (RNN) is investigated. The RNN is a single-output feedforward backpropagation network with external feedback, which allows past responses to be fed back to the network input. In the third step, a Radial Basis Functions (RBF) Network is used to analyze the effects of clouds on the Utility-Interactive WPS. Using the irradiance as input signal, the network models the effects of random cloud movement on the output current, the output voltage, the output power of the PV system, as well as the electrical output variables of the grid-linked inverter. Fourthly, using RNN, the combined effects of a random cloud and wind gusts on the system are analyzed. For short period intervals, the wind speed and the solar radiation are considered as
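
A minimal stand-in for the PV modeling step is sketched below: a small feed-forward network learns maximum PV power from the open-circuit voltage and short-circuit current. The synthetic training data use the crude approximation P_max ≈ FF·Voc·Isc with an assumed fill factor, so the numbers are illustrative only and do not reproduce the dissertation's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the PV model (illustrative assumptions): learn maximum PV
# power from open-circuit voltage Voc and short-circuit current Isc. Synthetic
# training data come from the crude approximation P_max ~ FF * Voc * Isc with
# an assumed fill factor FF and additive measurement noise.
rng = np.random.default_rng(4)
FF = 0.75
voc = rng.uniform(18.0, 22.0, 500)                    # volts
isc = rng.uniform(2.0, 8.0, 500)                      # amperes
p_max = FF * voc * isc + rng.normal(0.0, 2.0, 500)    # watts

X = np.column_stack([voc, isc])
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:400], p_max[:400])                       # train on the first 400 samples

print("test R^2:", round(model.score(X[400:], p_max[400:]), 3))
print("predicted P_max at Voc=20 V, Isc=5 A:",
      round(float(model.predict([[20.0, 5.0]])[0]), 1), "W")
```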

  9. An Approach for R&D Partner Selection in Alliances between Large Companies, and Small and Medium Enterprises (SMEs): Application of Bayesian Network and Patent Analysis

    Directory of Open Access Journals (Sweden)

    Keeeun Lee

    2016-01-01

    Full Text Available The enhanced R&D cooperative efforts between large firms and small and medium-sized enterprises (SMEs) have been emphasized to perform innovation projects and succeed in deploying profitable businesses. In order to promote such win-win alliances, it is necessary to consider the capabilities of large firms and SMEs, respectively. Thus, this paper proposes a new approach to partner selection when a large firm assesses SMEs as potential candidates for R&D collaboration. The first step of the suggested approach is to define the necessary technology for a firm by referring to a structured technology roadmap, which is a useful technique in partner selection from the perspective of a large firm. Second, a list of appropriate SME candidates is generated from patent information. Finally, a Bayesian network model is formulated to select an SME as an R&D collaboration partner that fits the industry and the large firm, utilizing a bibliography of United States patents. This paper applies the proposed approach to the semiconductor industry and selects potential R&D partners for a large firm. This paper also explains how to use the model as a systematic and analytic approach for creating effective partnerships between large firms and SMEs.
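
The flavor of the Bayesian network step can be conveyed with a toy two-parent network queried by exact enumeration; the variables, structure, and probability values below are invented for illustration and are not the model of the paper.

```python
# Toy two-parent Bayesian network for partner screening (structure and all
# probability values are invented for illustration, not taken from the paper):
# parents "technology fit" and "patent activity", child "SME selected",
# queried by exact enumeration over the binary variables.
p_fit = {True: 0.3, False: 0.7}              # P(SME matches roadmap technology)
p_act = {True: 0.4, False: 0.6}              # P(SME shows strong patent activity)
p_select = {                                  # P(select = True | fit, activity)
    (True, True): 0.85, (True, False): 0.55,
    (False, True): 0.30, (False, False): 0.05,
}

def posterior_fit_given_select():
    """P(fit = True | select = True), summing out patent activity."""
    num = sum(p_fit[True] * p_act[a] * p_select[(True, a)] for a in (True, False))
    den = sum(p_fit[f] * p_act[a] * p_select[(f, a)]
              for f in (True, False) for a in (True, False))
    return num / den

print("P(technology fit | SME selected) =", round(posterior_fit_given_select(), 3))
```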

  10. Estimating surface longwave radiative fluxes from satellites utilizing artificial neural networks

    Science.gov (United States)

    Nussbaumer, Eric A.; Pinker, Rachel T.

    2012-04-01

    A novel approach for calculating downwelling surface longwave (DSLW) radiation under all sky conditions is presented. The DSLW model (hereafter DSLW/UMD v2), like its predecessor DSLW/UMD v1, is driven by a combination of Moderate Resolution Imaging Spectroradiometer (MODIS) level-3 cloud parameters and information from the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-Interim model. To compute the clear-sky component of DSLW, a two-layer feed-forward artificial neural network with sigmoid hidden neurons and linear output neurons is implemented; it is trained with simulations derived from runs of the Rapid Radiative Transfer Model (RRTM). When computing the cloud contribution to DSLW, the cloud base temperature is estimated by using an independent artificial neural network approach of similar architecture to that mentioned previously, combined with parameterizations. The cloud base temperature neural network is trained using spatially and temporally co-located MODIS and CloudSat Cloud Profiling Radar (CPR) and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) observations. Daily average estimates of DSLW from 2003 to 2009 are compared against ground measurements from the Baseline Surface Radiation Network (BSRN), giving an overall correlation coefficient of 0.98, root mean square error (rmse) of 15.84 W m-2, and a bias of -0.39 W m-2. This is an improvement over an earlier version of the model (DSLW/UMD v1) which, for the same time period, has an overall correlation coefficient of 0.97, an rmse of 17.27 W m-2, and a bias of 0.73 W m-2.
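
The validation statistics quoted above (correlation coefficient, rmse, bias) are straightforward to reproduce for any paired model/ground-truth series; the snippet below shows one common set of definitions, with the sign convention (model minus observation) and the synthetic flux values as assumptions.

```python
import numpy as np

def validation_stats(predicted, observed):
    """Correlation coefficient, rmse and bias for paired daily means.

    Sign convention (predicted minus observed) is an assumption; the study
    may define bias with the opposite sign.
    """
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    diff = predicted - observed
    corr = np.corrcoef(predicted, observed)[0, 1]
    rmse = float(np.sqrt(np.mean(diff ** 2)))
    bias = float(np.mean(diff))
    return corr, rmse, bias

# Synthetic daily-mean fluxes (W m^-2) standing in for model and BSRN values
rng = np.random.default_rng(5)
obs = rng.uniform(200.0, 400.0, 365)
pred = obs + rng.normal(-0.4, 16.0, 365)
corr, rmse, bias = validation_stats(pred, obs)
print(f"r = {corr:.2f}, rmse = {rmse:.1f} W m^-2, bias = {bias:.1f} W m^-2")
```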

  11. Utilizing artificial neural networks to predict demand for weather-sensitive products at retail stores

    OpenAIRE

    Taghizadeh, Elham

    2017-01-01

    One key requirement for effective supply chain management is the quality of its inventory management. Various inventory management methods are typically employed for different types of products based on their demand patterns, product attributes, and supply network. In this paper, our goal is to develop robust demand prediction methods for weather sensitive products at retail stores. We employ historical datasets from Walmart, whose customers and markets are often exposed to extreme weather ev...

  12. Developing Large-Scale Bayesian Networks by Composition: Fault Diagnosis of Electrical Power Systems in Aircraft and Spacecraft

    Science.gov (United States)

    Mengshoel, Ole Jakob; Poll, Scott; Kurtoglu, Tolga

    2009-01-01

    This CD contains files that support the talk (see CASI ID 20100021404). There are 24 models that relate to the ADAPT system and 1 Excel worksheet. In the paper an investigation into the use of Bayesian networks to construct large-scale diagnostic systems is described. The high-level specifications, Bayesian networks, clique trees, and arithmetic circuits representing 24 different electrical power systems are described in the talk. The data in the CD are the models of the 24 different power systems.

  13. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    OpenAIRE

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are inve...
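
As a point of reference for what ISMA speeds up, the sketch below implements the kind of naive recursive enumeration it is compared against: query nodes are mapped to graph nodes one at a time, backtracking whenever an already-mapped query edge has no counterpart in the large graph. The node-ordering and indexing optimizations that define ISMA are deliberately absent.

```python
# Naive recursive subgraph enumeration -- the kind of baseline ISMA is compared
# against, not ISMA itself. Query nodes are mapped to graph nodes one at a time,
# backtracking whenever an already-mapped query edge is missing in the graph.
def enumerate_subgraphs(query_edges, graph_edges):
    q_nodes = sorted({v for e in query_edges for v in e})
    g_adj = {}
    for a, b in graph_edges:
        g_adj.setdefault(a, set()).add(b)
        g_adj.setdefault(b, set()).add(a)

    matches = []

    def extend(mapping):
        if len(mapping) == len(q_nodes):
            matches.append(dict(mapping))
            return
        q = q_nodes[len(mapping)]                 # next query node to place
        for g in g_adj:
            if g in mapping.values():
                continue
            ok = all(mapping[p] in g_adj[g]
                     for p in q_nodes[:len(mapping)]
                     if (p, q) in query_edges or (q, p) in query_edges)
            if ok:
                mapping[q] = g
                extend(mapping)
                del mapping[q]

    extend({})
    return matches

# Example: enumerate embeddings of a triangle into a small graph
triangle = [(0, 1), (1, 2), (0, 2)]
graph = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(len(enumerate_subgraphs(triangle, graph)), "embeddings found")
```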

  14. Fast and accurate solution for the SCUC problem in large-scale power systems using adapted binary programming and enhanced dual neural network

    International Nuclear Information System (INIS)

    Shafie-khah, M.; Moghaddam, M.P.; Sheikh-El-Eslami, M.K.; Catalão, J.P.S.

    2014-01-01

    Highlights: • A novel hybrid method based on decomposition of SCUC into QP and BP problems is proposed. • An adapted binary programming and an enhanced dual neural network model are applied. • The proposed EDNN is exactly convergent to the global optimal solution of QP. • An AC power flow procedure is developed for including contingency/security issues. • It is suited for large-scale systems, providing both accurate and fast solutions. - Abstract: This paper presents a novel hybrid method for solving the security constrained unit commitment (SCUC) problem. The proposed formulation requires much less computation time in comparison with other methods while assuring the accuracy of the results. Furthermore, the framework provided here allows including an accurate description of warmth-dependent startup costs, valve point effects, multiple fuel costs, forbidden zones of operation, and AC load flow bounds. To solve the nonconvex problem, an adapted binary programming method and an enhanced dual neural network model are utilized as optimization tools, and a procedure for AC power flow modeling is developed for including contingency/security issues, as new contributions to earlier studies. Unlike classical SCUC methods, the proposed method solves the unit commitment problem and complies with the network limits simultaneously. In addition to conventional test systems, a real-world large-scale power system with 493 units has been used to fully validate the effectiveness of the novel hybrid method proposed.
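
To give a feel for the combinatorial core of SCUC, the sketch below solves a toy single-hour unit commitment by enumerating the binary commitment vectors and dispatching committed units in merit order; the unit data are invented, and real SCUC (as in the paper) adds time coupling, start-up costs, and network/security constraints that this sketch omits.

```python
from itertools import product

# Toy single-hour unit commitment (everything invented for illustration): the
# binary commitment vectors are enumerated exhaustively, and committed units
# are dispatched in merit order with linear energy costs and min/max limits.
units = [  # (no-load cost $, marginal cost $/MWh, p_min MW, p_max MW)
    (200.0, 20.0, 50.0, 300.0),
    (150.0, 25.0, 30.0, 200.0),
    (100.0, 40.0, 10.0, 100.0),
]
demand = 380.0

def dispatch_cost(commitment):
    committed = [u for u, on in zip(units, commitment) if on]
    if (sum(u[3] for u in committed) < demand or
            sum(u[2] for u in committed) > demand):
        return None                               # infeasible commitment
    remaining = demand - sum(u[2] for u in committed)
    cost = sum(u[0] + u[1] * u[2] for u in committed)   # no-load + minimum output
    for u in sorted(committed, key=lambda u: u[1]):     # cheapest energy first
        extra = min(remaining, u[3] - u[2])
        cost += u[1] * extra
        remaining -= extra
    return cost

best = min((c, comm) for comm in product([0, 1], repeat=len(units))
           if (c := dispatch_cost(comm)) is not None)
print("best commitment:", best[1], " cost: $", round(best[0], 2))
```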

  15. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  16. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
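
A back-of-the-envelope version of such top-down link sizing is shown below: each data type must drain within its latency requirement, so the required WAN rate is the sum of the per-type volume/latency rates times an engineering margin. The data volumes, latency classes, and margin are invented for illustration and are not from the SCaN study.

```python
# Back-of-the-envelope link sizing in the spirit of the top-down approach
# (data volumes, latency classes and margin are invented for illustration):
# each data type must drain within its latency requirement, so the link rate
# is the sum of the per-type volume/latency rates times an engineering margin.
data_types = [  # (name, daily volume in gigabits, latency requirement in hours)
    ("science bulk", 800.0, 24.0),
    ("housekeeping", 20.0, 1.0),
    ("critical events", 2.0, 0.1),
]
margin = 1.5

required_mbps = sum(vol * 1000.0 / (lat * 3600.0) for _, vol, lat in data_types)
print("required WAN bandwidth ~", round(required_mbps * margin, 1), "Mb/s")
```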

  17. An investigation of scalable anomaly detection techniques for a large network of Wi-Fi hotspots

    CSIR Research Space (South Africa)

    Machaka, P

    2015-01-01

    Full Text Available Neural Networks, Bayesian Networks and Artificial Immune Systems were used for this experiment. Using a set of data extracted from a live network of Wi-Fi hotspots managed by an ISP, we integrated the algorithms into a data collection system to detect...

  18. Enhancement of a model for Large-scale Airline Network Planning Problems

    NARCIS (Netherlands)

    Kölker, K.; Lopes dos Santos, Bruno F.; Lütjens, K.

    2016-01-01

    The main focus of this study is to solve the network planning problem based on passenger decision criteria including the preferred departure time and travel time for a real-sized airline network. For this purpose, a model of the integrated network planning problem is formulated including scheduling

  19. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    Directory of Open Access Journals (Sweden)

    Sofie Demeyer

    Full Text Available Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/.

  20. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    Science.gov (United States)

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730