WorldWideScience

Sample records for network field sampling

  1. Efficient compressive sampling of spatially sparse fields in wireless sensor networks

    Science.gov (United States)

    Colonnese, Stefania; Cusani, Roberto; Rinauro, Stefano; Ruggiero, Giorgia; Scarano, Gaetano

    2013-12-01

Wireless sensor networks (WSNs), i.e., networks of autonomous, wireless sensing nodes spatially deployed over a geographical area, are often faced with the acquisition of spatially sparse fields. In this paper, we present a novel bandwidth/energy-efficient compressive sampling (CS) scheme for the acquisition of spatially sparse fields in a WSN. The paper's contribution is twofold. Firstly, we introduce a sparse, structured CS matrix and analytically show that it allows accurate reconstruction of bidimensional spatially sparse signals, such as those occurring in several surveillance applications. Secondly, we analytically evaluate the energy and bandwidth consumption of our CS scheme when it is applied to data acquisition in a WSN. Numerical results demonstrate that our CS scheme achieves significant energy and bandwidth savings with respect to state-of-the-art approaches when employed for sensing a spatially sparse field by means of a WSN.
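
The record above does not spell out its structured CS matrix or reconstruction algorithm, so the following is only a generic sketch of the compressive-sampling idea it builds on: a sparse field is recovered from far fewer measurements than pixels. A plain Gaussian sensing matrix and textbook Orthogonal Matching Pursuit stand in for the paper's specialised constructions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ~= A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(A.shape[1])
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 80, 4                 # 16x16 field flattened, 80 measurements, 4 active pixels
field = np.zeros(n)
field[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 5.0, size=k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # generic random sensing matrix (stand-in)
y = A @ field                                  # compressed measurements collected by the WSN
field_hat = omp(A, y, k)
```

With 80 measurements instead of 256 samples, the 4-sparse field is recovered essentially exactly, which is the bandwidth/energy saving the abstract quantifies.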

  2. The fractality of marine measurement networks and of the Earth's sampled magnetic field

    Directory of Open Access Journals (Sweden)

    L. Tramontana

    1996-06-01

We highlight the fractal behaviour of marine measurement networks when determining the Earth's total magnetic field and the spatial trend of the field itself. This approach is a convenient alternative method of assessing the coverage of an area by a set of measurements whenever the environmental situation does not permit a regular distribution of the measurement points. The Earth's magnetic field is sampled in marine areas while the measuring apparatus is moving, even at low speeds, whilst attempts are made to respect the spatial planning pre-determined on the basis of the resolution sought. However, the real distribution of the measurements presents numerous disturbances, mainly due to environmental factors. In the case of distributions containing vast areas with no measurement points, it is no longer possible to apply Shannon's sampling theorem in 1-D and 2-D. In this paper we apply fractal theory to certain 1-D and 2-D measurement distributions in order to obtain a coverage estimate of the area and of the capacity to reconstruct the field. We also examine the trend of the power spectra S of numerous magnetic profiles, noting that almost all of them exhibit a dependency on the frequency f of the form S ∝ f^(-β), which is characteristic (a necessary condition) of self-similar or self-affine fractals.
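
The fractal coverage assessment described above rests on the box-counting dimension of the measurement-point distribution. A minimal sketch (the epsilon grid and test data are illustrative, not taken from the paper): count occupied boxes N(ε) at several scales and fit the slope of log N(ε) against log(1/ε).

```python
import numpy as np

def box_counting_dimension(points, eps_list):
    """Estimate the box-counting (fractal) dimension of a 2-D point set."""
    points = np.asarray(points, float)
    counts = []
    for eps in eps_list:
        # each point maps to the integer index of the eps-sized box containing it
        boxes = set(map(tuple, np.floor(points / eps).astype(int)))
        counts.append(len(boxes))
    # dimension = slope of log N(eps) versus log(1/eps)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(eps_list)), np.log(counts), 1)
    return slope

rng = np.random.default_rng(1)
# a densely, uniformly covered unit square should yield dimension close to 2
pts = rng.random((20000, 2))
d = box_counting_dimension(pts, eps_list=[0.2, 0.1, 0.05, 0.025])
```

A well-covered survey area gives a dimension near 2; large unsurveyed gaps pull the estimate below 2, which is how the fractal approach flags inadequate coverage.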

  3. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    OpenAIRE

    Mouw, Ted; Verdery, Ashton M.

    2012-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its ...

  4. Bayesian prediction and adaptive sampling algorithms for mobile sensor networks online environmental field reconstruction in space and time

    CERN Document Server

    Xu, Yunfei; Dass, Sarat; Maiti, Tapabrata

    2016-01-01

    This brief introduces a class of problems and models for the prediction of the scalar field of interest from noisy observations collected by mobile sensor networks. It also introduces the problem of optimal coordination of robotic sensors to maximize the prediction quality subject to communication and mobility constraints either in a centralized or distributed manner. To solve such problems, fully Bayesian approaches are adopted, allowing various sources of uncertainties to be integrated into an inferential framework effectively capturing all aspects of variability involved. The fully Bayesian approach also allows the most appropriate values for additional model parameters to be selected automatically by data, and the optimal inference and prediction for the underlying scalar field to be achieved. In particular, spatio-temporal Gaussian process regression is formulated for robotic sensors to fuse multifactorial effects of observations, measurement noise, and prior distributions for obtaining the predictive di...
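The spatio-temporal Gaussian process regression mentioned above can be sketched in miniature. The following reduces the setting to a 1-D spatial snapshot with a squared-exponential kernel; the kernel choice, hyperparameters, and test function are illustrative assumptions, not the brief's actual model.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Gaussian-process posterior mean and variance with a squared-exponential kernel."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)
    K = k(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))  # noisy observations
    Ks = k(X_test, X_train)
    Kss = k(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, axis=0)   # predictive variance at each test point
    return mean, var

rng = np.random.default_rng(2)
X = rng.uniform(0, 5, (30, 1))                  # mobile-sensor observation positions
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(30)  # noisy scalar-field readings
Xs = np.array([[2.5]])                          # location to predict
mu, var = gp_predict(X, y, Xs)
```

The predictive variance is exactly the quantity an adaptive sampling scheme would steer mobile sensors to reduce.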

  5. Assessment of the Worldwide Antimalarial Resistance Network Standardized Procedure for In Vitro Malaria Drug Sensitivity Testing Using SYBR Green Assay for Field Samples with Various Initial Parasitemia Levels.

    Science.gov (United States)

    Cheruiyot, Agnes C; Auschwitz, Jennifer M; Lee, Patricia J; Yeda, Redemptah A; Okello, Charles O; Leed, Susan E; Talwar, Mayank; Murthy, Tushar; Gaona, Heather W; Hickman, Mark R; Akala, Hoseah M; Kamau, Edwin; Johnson, Jacob D

    2016-04-01

The malaria SYBR green assay, which is used to profile in vitro drug susceptibility of Plasmodium falciparum, is a reliable drug screening and surveillance tool. Malaria field surveillance efforts provide isolates with various low levels of parasitemia. To be advantageous, malaria drug sensitivity assays should perform reproducibly among various starting parasitemia levels rather than at one fixed initial value. We examined the SYBR green assay standardized procedure developed by the Worldwide Antimalarial Resistance Network (WWARN) for its sensitivity and ability to accurately determine the drug concentration that inhibits parasite growth by 50% (IC50) in samples with a range of initial parasitemia levels. The initial sensitivity determination of the WWARN procedure yielded a detection limit of 0.019% parasitemia. P. falciparum laboratory strains and field isolates with various levels of initial parasitemia were then subjected to a range of doses of common antimalarials. The IC50s were comparable for laboratory strains with between 0.0375% and 0.6% parasitemia and for field isolates with between 0.075% and 0.6% parasitemia for all drugs tested. Furthermore, assay quality (Z') analysis indicated that the WWARN procedure displays high robustness, allowing for drug testing of malaria field samples within the derived range of initial parasitemia. The use of the WWARN procedure should allow for the inclusion of more malaria field samples in malaria drug sensitivity screens that would have otherwise been excluded due to low initial parasitemia levels. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
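
The IC50 reported above is the dose at which growth falls to 50% of the untreated control. The WWARN procedure's exact curve-fitting method is not given in this record, so the following is a deliberately simple stand-in: log-linear interpolation of the 50% crossing on a synthetic Hill-type dose-response series (all numbers invented for illustration).

```python
import numpy as np

def ic50_interp(doses, response):
    """Estimate IC50 by log-linear interpolation of the 50% growth crossing.
    `response` is fractional growth (1.0 = untreated control, 0.0 = full
    inhibition) and is assumed to decrease monotonically with dose."""
    doses = np.asarray(doses, float)
    response = np.asarray(response, float)
    i = np.where(response >= 0.5)[0][-1]        # last dose with >= 50% growth
    x0, x1 = np.log(doses[i]), np.log(doses[i + 1])
    y0, y1 = response[i], response[i + 1]
    return float(np.exp(x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)))

# synthetic two-fold dilution series (nM) with a Hill-type response, true IC50 = 20 nM
doses = np.array([2.5, 5, 10, 20, 40, 80, 160])
resp = 1.0 / (1.0 + (doses / 20.0) ** 2)
ic50 = ic50_interp(doses, resp)
```

Real analyses typically fit a full sigmoid model rather than interpolating, but the interpolation shows where the IC50 lives on the dose axis.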

  6. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
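The List/Search mechanics described above can be caricatured in a few lines. This toy sketch is not the authors' NSM algorithm: it keeps only the "memory" idea (the sampler remembers the revealed network and, in a Search-like step, favours revealed-but-unsampled nodes that still border unexplored territory). Graph, weights, and seed are all illustrative.

```python
import random

def nsm_like_sample(adj, seed, n_sample, rng):
    """Toy memory-based sampler: prefer revealed nodes with many unrevealed
    neighbours, i.e. bridges into unexplored parts of the network."""
    sampled = [seed]
    revealed = set(adj[seed]) | {seed}
    while len(sampled) < n_sample:
        frontier = sorted(v for v in revealed if v not in sampled)
        if not frontier:
            break
        # Search-mode heuristic: weight by count of still-unrevealed neighbours
        weights = [1 + sum(1 for u in adj[v] if u not in revealed) for v in frontier]
        v = rng.choices(frontier, weights=weights, k=1)[0]
        sampled.append(v)
        revealed |= set(adj[v])        # interviewing v reveals v's network list
    return sampled

# two 5-cliques joined by a single bridge edge 4-5: a hard case for random walks
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
adj.update({i: [j for j in range(5, 10) if j != i] for i in range(5, 10)})
adj[4] = adj[4] + [5]
adj[5] = adj[5] + [4]
rng = random.Random(3)
s = nsm_like_sample(adj, seed=0, n_sample=8, rng=rng)
```

Because the sampler never revisits nodes and is drawn toward bridges, it crosses into the second community, which is exactly the behaviour that keeps design effects low relative to a plain random walk.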

  7. Network Sampling with Memory: A proposal for more efficient sampling from social networks.

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M

    2012-08-01

Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE), the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a "List" mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a "Search" mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS.

  8. Sampling of Complex Networks: A Datamining Approach

    Science.gov (United States)

    Loecher, Markus; Dohrmann, Jakob; Bauer, Gernot

    2007-03-01

Efficient and accurate sampling of big complex networks is still an unsolved problem. As the degree distribution is one of the most commonly used attributes to characterize a network, there have been many attempts in recent papers to derive the original degree distribution from the data obtained during a traceroute-like sampling process. This talk describes a strategy for predicting the original degree of a node using the data obtained from a network by traceroute-like sampling, making use of datamining techniques. Only local quantities (the sampled degree k, the redundancy of node detection r, the time of the first discovery of a node t and the distance to the sampling source d) are used as input for the datamining models. Global properties like the betweenness centrality are ignored. These local quantities are examined theoretically and in simulations to increase their value for the predictions. The accuracy of the models is discussed as a function of the number of sources used in the sampling process and the underlying topology of the network. The purpose of this work is to introduce the techniques of the relatively young field of datamining to the discussion on network sampling.
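
Traceroute-like sampling, as used above, observes only the union of shortest paths from a few sources, so sampled degrees systematically undercount true degrees away from the sources. A minimal sketch of that bias (BFS trees as a stand-in for traceroute paths; the example graph is illustrative):

```python
import collections

def traceroute_sample(adj, sources, targets):
    """Traceroute-like sampling: record the edges on one shortest path (taken
    from a BFS tree) between each source and each target; return sampled degrees."""
    sampled_edges = set()
    for s in sources:
        parent = {s: None}
        queue = collections.deque([s])
        while queue:                          # BFS shortest-path tree rooted at s
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
        for t in targets:
            while parent.get(t) is not None:  # walk the tree path back to s
                sampled_edges.add(frozenset((t, parent[t])))
                t = parent[t]
    deg = collections.Counter()
    for e in sampled_edges:
        for v in e:
            deg[v] += 1
    return deg

# complete graph on 5 nodes: every true degree is 4
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
deg = traceroute_sample(adj, sources=[0], targets=list(range(5)))
```

Here the source keeps its full degree while every other node appears with sampled degree 1; correcting this distortion from local quantities (k, r, t, d) is the prediction task the talk addresses.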

  9. Network reconstruction via density sampling

    CERN Document Server

    Squartini, Tiziano; Gabrielli, Andrea; Garlaschelli, Diego

    2016-01-01

Reconstructing weighted networks from partial information is necessary in many important circumstances, e.g. for a correct estimation of systemic risk. It has been shown that, in order to achieve an accurate reconstruction, it is crucial to reliably replicate the empirical degree sequence, which is however unknown in many realistic situations. More recently, it has been found that the knowledge of the degree sequence can be replaced by the knowledge of the strength sequence, which is typically accessible, complemented by that of the total number of links, thus considerably relaxing the observational requirements. Here we further relax these requirements and devise a procedure valid when even the total number of links is unavailable. We assume that, apart from the heterogeneity induced by the degree sequence itself, the network is homogeneous, so that its link density can be estimated by sampling subsets of nodes with representative density. We show that the best way of sampling nodes is the random selecti...
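
The density-sampling idea above is simple to sketch: estimate the link density on a random induced subgraph and scale it up to the full node-pair count. The Erdős–Rényi test graph and sample size below are illustrative choices, not the authors' setup.

```python
import itertools
import random

def estimate_total_links(adj, sample_size, rng):
    """Density sampling: estimate a network's total link count from the link
    density observed on a random induced subgraph."""
    nodes = rng.sample(sorted(adj), sample_size)
    pairs = list(itertools.combinations(nodes, 2))
    present = sum(1 for u, v in pairs if v in adj[u])
    density = present / len(pairs)               # sampled link density
    n = len(adj)
    return density * n * (n - 1) / 2             # scale to all node pairs

# Erdos-Renyi test graph with known expected density p
rng = random.Random(4)
n, p = 200, 0.1
adj = {i: set() for i in range(n)}
for u, v in itertools.combinations(range(n), 2):
    if rng.random() < p:
        adj[u].add(v)
        adj[v].add(u)
true_links = sum(len(nbrs) for nbrs in adj.values()) // 2
est = estimate_total_links(adj, sample_size=80, rng=rng)
```

Random node selection makes the induced density an unbiased density estimator for a homogeneous network, which is the assumption the paper states explicitly.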

  10. Social network sampling using spanning trees

    Science.gov (United States)

    Jalali, Zeinab S.; Rezvanian, Alireza; Meybodi, Mohammad Reza

    2016-12-01

Due to the large scale of most online social networks and the limitations on accessing them, it is hard or infeasible to study them directly in a reasonable amount of time. Hence, network sampling has emerged as a suitable technique to study and analyze real networks. The main goal of sampling online social networks is to construct a small-scale sampled network that preserves the most important properties of the original network. In this paper, we propose two sampling algorithms for sampling online social networks using spanning trees. The first proposed sampling algorithm finds several spanning trees from randomly chosen starting nodes; then the edges in these spanning trees are ranked according to the number of times that each edge has appeared in the set of found spanning trees in the given network. The sampled network is then constructed as a sub-graph of the original network which contains a fraction of nodes that are incident on highly ranked edges. In order to avoid traversing the entire network, the second sampling algorithm is proposed using partial spanning trees. The second sampling algorithm is similar to the first algorithm except that it uses partial spanning trees. Several experiments are conducted to examine the performance of the proposed sampling algorithms on well-known real networks. The obtained results in comparison with other popular sampling methods demonstrate the efficiency of the proposed sampling algorithms in terms of Kolmogorov-Smirnov distance (KSD), skew divergence distance (SDD) and normalized distance (ND).
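
The first algorithm above can be sketched directly: build several random spanning trees, rank edges by how often they appear, and keep the nodes incident on top-ranked edges. The randomized-DFS tree construction and the two-clique test graph are illustrative choices, not necessarily the authors'.

```python
import random

def random_spanning_tree(adj, rng):
    """One random spanning tree via a randomized depth-first traversal."""
    root = rng.choice(sorted(adj))
    visited, tree, stack = {root}, set(), [root]
    while stack:
        u = stack[-1]
        nbrs = sorted(v for v in adj[u] if v not in visited)
        if not nbrs:
            stack.pop()
            continue
        v = rng.choice(nbrs)
        visited.add(v)
        tree.add(frozenset((u, v)))
        stack.append(v)
    return tree

def spanning_tree_sample(adj, n_trees, keep_frac, rng):
    """Rank edges by appearance count across several random spanning trees,
    then keep the nodes incident on the top-ranked edges."""
    counts = {}
    for _ in range(n_trees):
        for e in random_spanning_tree(adj, rng):
            counts[e] = counts.get(e, 0) + 1
    ranked = sorted(counts, key=counts.get, reverse=True)
    keep = ranked[: max(1, int(keep_frac * len(ranked)))]
    return {v for e in keep for v in e}

# two 5-cliques joined by the bridge edge 4-5; the bridge is a cut edge,
# so it must appear in EVERY spanning tree and should be ranked on top
adj = {i: {j for j in range(5) if j != i} for i in range(5)}
adj.update({i: {j for j in range(5, 10) if j != i} for i in range(5, 10)})
adj[4].add(5)
adj[5].add(4)
rng = random.Random(7)
sample_nodes = spanning_tree_sample(adj, n_trees=20, keep_frac=0.1, rng=rng)
```

Cut edges and other structurally important links accumulate the highest counts, so the sampled sub-graph preserves the network's backbone.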

  11. Sampling of temporal networks: Methods and biases

    Science.gov (United States)

    Rocha, Luis E. C.; Masuda, Naoki; Holme, Petter

    2017-11-01

Temporal networks have been increasingly used to model a diversity of systems that evolve in time; for example, human contact structures over which dynamic processes such as epidemics take place. A fundamental aspect of real-life networks is that they are sampled within temporal and spatial frames. Furthermore, one might wish to subsample networks to reduce their size for better visualization or to perform computationally intensive simulations. The sampling method may affect the network structure and thus caution is necessary to generalize results based on samples. In this paper, we study four sampling strategies applied to a variety of real-life temporal networks. We quantify the biases generated by each sampling strategy on a number of relevant statistics such as link activity, temporal paths and epidemic spread. We find that some biases are common in a variety of networks and statistics, but one strategy, uniform sampling of nodes, shows improved performance in most scenarios. Given the particularities of temporal network data and the variety of network structures, we recommend that the choice of sampling methods be problem oriented to minimize the potential biases for the specific research questions at hand. Our results help researchers to better design network data collection protocols and to understand the limitations of sampled temporal network data.
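
Uniform node sampling, the strategy the study above found most robust, is straightforward on a temporal edge list: keep a random node subset and retain only events whose endpoints both survive. A minimal sketch (toy event list for illustration):

```python
import random

def subsample_temporal(events, frac, rng):
    """Uniform node sampling of a temporal network: keep a random fraction of
    the nodes and retain only the events with BOTH endpoints in the kept set."""
    nodes = sorted({u for u, v, t in events} | {v for u, v, t in events})
    kept = set(rng.sample(nodes, max(1, int(frac * len(nodes)))))
    sub = [(u, v, t) for u, v, t in events if u in kept and v in kept]
    return sub, kept

# toy temporal network: a triangle of timed contacts
events = [(0, 1, 0.0), (1, 2, 1.0), (2, 0, 2.0)]
rng = random.Random(6)
all_events, _ = subsample_temporal(events, 1.0, rng)   # frac=1 keeps everything
some_events, kept = subsample_temporal(events, 2 / 3, rng)
```

Note how dropping one of three nodes removes two of three events: event statistics shrink faster than the node count, which is exactly the kind of bias the paper quantifies.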

  12. Fast Moving Sampling Designs in Temporal Networks

    CERN Document Server

    Thompson, Steven K

    2015-01-01

In a study related to this one I set up a temporal network simulation environment for evaluating network intervention strategies. A network intervention strategy consists of a sampling design to select nodes in the network. An intervention is applied to nodes in the sample for the purpose of changing the wider network in some desired way. The network intervention strategies can represent natural agents such as viruses that spread in the network, programs to prevent or reduce the virus spread, and the agency of individual nodes, such as people, in forming and dissolving the links that create, maintain or change the network. The present paper examines idealized versions of the sampling designs used in that study. The purpose is to better understand the natural and human network designs in real situations and to provide a simple inference of design-based properties that in turn measure properties of the time-changing network. The designs use link tracing and sometimes other probabilistic procedures to add units ...

  13. Sampling Criterion for EMC Near Field Measurements

    DEFF Research Database (Denmark)

    Franek, Ondrej; Sørensen, Morten; Ebert, Hans

    2012-01-01

An alternative, quasi-empirical sampling criterion for EMC near field measurements intended for close coupling investigations is proposed. The criterion is based on maximum error caused by sub-optimal sampling of near fields in the vicinity of an elementary dipole, which is suggested as a worst-case representative of a signal trace on a typical printed circuit board. It has been found that the sampling density derived in this way is in fact very similar to that given by the antenna near field sampling theorem, if an error less than 1 dB is required. The principal advantage of the proposed formulation is its...

  14. Adaptive Importance Sampling Simulation of Queueing Networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.; Rubinstein, N.; Rubinstein, Reuven Y.

    2000-01-01

    In this paper, a method is presented for the efficient estimation of rare-event (overflow) probabilities in Jackson queueing networks using importance sampling. The method differs in two ways from methods discussed in most earlier literature: the change of measure is state-dependent, i.e., it is a
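Importance sampling for queueing overflow, as used above, is easiest to see on a single M/M/1 queue, where the classic change of measure simply swaps the arrival and service rates (the paper's state-dependent scheme generalizes this). The rates and overflow level below are illustrative.

```python
import math
import random

def overflow_prob_is(lam, mu, level, n_runs, rng):
    """Importance-sampling estimate of the probability that an M/M/1 queue,
    starting with 1 customer, reaches `level` before emptying. Simulation runs
    under swapped rates; each run is reweighted by its likelihood ratio."""
    p = lam / (lam + mu)        # up-step probability under the true measure
    q = mu / (lam + mu)         # up-step probability under the tilted (swapped) measure
    total = 0.0
    for _ in range(n_runs):
        x, logw = 1, 0.0
        while 0 < x < level:
            if rng.random() < q:                 # tilted up-step
                x += 1
                logw += math.log(p / q)          # likelihood-ratio update
            else:                                # tilted down-step (note 1-p = q, 1-q = p)
                x -= 1
                logw += math.log(q / p)
            # likelihood ratio keeps the estimator unbiased under the tilt
        if x == level:
            total += math.exp(logw)
    return total / n_runs

rng = random.Random(5)
est = overflow_prob_is(0.3, 0.7, level=15, n_runs=2000, rng=rng)
```

The event has probability around 4e-6; naive simulation would need hundreds of millions of runs to see it, while the tilted measure hits it on most runs and the weights correct the bias.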

  15. Field Sampling from a Segmented Image

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-06-01

Full Text Available. Slide outline: Objective; Study Site; Methods (the ICM algorithm; sampling per category; sample size per category; fitness function per category; simulated annealing per category); Results; Experiment; Case Study; Conclusions. Field Sampling from a Segmented Image. P. Debba (The Council for Scientific and Industrial Research (CSIR), Logistics and Quantitative Methods, CSIR Built Environment), A. Stein, F.D. van der Meer, E.J.M. Carranza, A. Lucieer.

  16. Visual Sample Plan (VSP) - FIELDS Integration

    Energy Technology Data Exchange (ETDEWEB)

    Pulsipher, Brent A.; Wilson, John E.; Gilbert, Richard O.; Hassig, Nancy L.; Carlson, Deborah K.; Bing-Canar, John; Cooper, Brian; Roth, Chuck

    2003-04-19

Two software packages, VSP 2.1 and FIELDS 3.5, are being used by environmental scientists to plan the number and type of samples required to meet project objectives, display those samples on maps, query a database of past sample results, produce spatial models of the data, and analyze the data in order to arrive at defensible decisions. VSP 2.0 is an interactive tool to calculate optimal sample size and optimal sample location based on user goals, risk tolerance, and variability in the environment and in lab methods. FIELDS 3.0 is a set of tools to explore the sample results in a variety of ways to make defensible decisions with quantified levels of risk and uncertainty. However, FIELDS 3.0 has only a limited sample design module. VSP 2.0, on the other hand, has over 20 sampling goals, allowing the user to input site-specific assumptions such as non-normality of sample results and separate variability between field and laboratory measurements, make two-sample comparisons, perform confidence interval estimation, use sequential search sampling methods, and much more. Over 1,000 copies of VSP are in use today. FIELDS is used in nine of the ten U.S. EPA regions, by state regulatory agencies, and most recently by several international countries. Both software packages have been peer-reviewed, enjoy broad usage, and have been accepted by regulatory agencies as well as site project managers as key tools to help collect data and make environmental cleanup decisions. Recently, the two software packages were integrated, allowing the user to take advantage of the many design options of VSP, and the analysis and modeling options of FIELDS. The transition between the two is simple for the user – VSP can be called from within FIELDS, automatically passing a map to VSP and automatically retrieving sample locations and design information when the user returns to FIELDS. This paper will describe the integration, give a demonstration of the integrated package, and give users download
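
VSP's sample-size calculations are not documented in this record, but the simplest design they generalize is the textbook formula for estimating a mean to a target margin of error, n = ceil((z·σ/d)²). A sketch (the inputs are illustrative, not VSP defaults):

```python
import math
from statistics import NormalDist

def sample_size_for_mean(sigma, margin, confidence=0.95):
    """Classic sample-size formula for estimating a mean to within +/- margin:
    n = ceil((z * sigma / margin)^2), with z the two-sided normal quantile."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * sigma / margin) ** 2)

# contaminant concentration with sd 2.0; how many samples for a +/-0.5 CI?
n_tight = sample_size_for_mean(2.0, 0.5)   # tight margin -> many samples
n_loose = sample_size_for_mean(2.0, 1.0)   # looser margin -> far fewer
```

Halving the margin quadruples the required sample count, which is why tools like VSP make the goal/cost trade-off explicit before anyone goes into the field.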

  17. Fields, networks, creativity and evolution.

    Science.gov (United States)

    van der Hammen, L

    2000-01-01

    Organisms constitute wholes as a result of a network of organizing relations between the parts. In animals, this network has a morphological as well as a psychological aspect, and it regulates morphogenesis as well as behaviour. It is pointed out that closed networks of higher order, that have the characteristics of single organisms (communities of ants, termites and bees, cases of symbiosis, and perhaps even the Earth), could also possess that regulating aspect. In the case of humans, the network can be associated with creativity and the structure of knowledge. Individuation (as defined in Jung's psychology) refers to the assimilation of the network into consciousness. The theory developed in the present paper could give rise to a multi-disciplinary approach to the study of life.

  18. Mean field interaction in biochemical reaction networks

    KAUST Repository

    Tembine, Hamidou

    2011-09-01

    In this paper we establish a relationship between chemical dynamics and mean field game dynamics. We show that chemical reaction networks can be studied using noisy mean field limits. We provide deterministic, noisy and switching mean field limits and illustrate them with numerical examples. © 2011 IEEE.

  19. Digital Curation of Earth Science Samples Starts in the Field

    Science.gov (United States)

    Lehnert, K. A.; Hsu, L.; Song, L.; Carter, M. R.

    2014-12-01

    Collection of physical samples in the field is an essential part of research in the Earth Sciences. Samples provide a basis for progress across many disciplines, from the study of global climate change now and over the Earth's history, to present and past biogeochemical cycles, to magmatic processes and mantle dynamics. The types of samples, methods of collection, and scope and scale of sampling campaigns are highly diverse, ranging from large-scale programs to drill rock and sediment cores on land, in lakes, and in the ocean, to environmental observation networks with continuous sampling, to single investigator or small team expeditions to remote areas around the globe or trips to local outcrops. Cyberinfrastructure for sample-related fieldwork needs to cater to the different needs of these diverse sampling activities, aligning with specific workflows, regional constraints such as connectivity or climate, and processing of samples. In general, digital tools should assist with capture and management of metadata about the sampling process (location, time, method) and the sample itself (type, dimension, context, images, etc.), management of the physical objects (e.g., sample labels with QR codes), and the seamless transfer of sample metadata to data systems and software relevant to the post-sampling data acquisition, data processing, and sample curation. In order to optimize CI capabilities for samples, tools and workflows need to adopt community-based standards and best practices for sample metadata, classification, identification and registration. This presentation will provide an overview and updates of several ongoing efforts that are relevant to the development of standards for digital sample management: the ODM2 project that has generated an information model for spatially-discrete, feature-based earth observations resulting from in-situ sensors and environmental samples, aligned with OGC's Observation & Measurements model (Horsburgh et al, AGU FM 2014

  20. Mars Analogue Field Research and Sample Analysis

    Science.gov (United States)

    Foing, Bernard H.

    2016-07-01

    We describe results from the data analysis from a series of field research campaigns (ILEWG EuroMoonMars campaigns 2009 to 2016) in the Utah desert and in other extreme environments (Iceland, Eifel, La Reunion) relevant to habitability and astrobiology in Mars environments, and in order to help in the interpretation of Mars missions measurements from orbit (MEX, MRO) or from the surface (MER, MSL). We discuss results relevant to the scientific study of the habitability factors influenced by the properties of dust, organics, water history and the diagnostics and characterisation of microbial life. We also discuss perspectives for the preparation of future lander and sample return missions. We deployed at Mars Desert Research station, Utah, a suite of instruments and techniques including sample collection, context imaging from remote to local and microscale, drilling, spectrometers and life sensors. We analyzed how geological and geochemical evolution affected local parameters (mineralogy, organics content, environment variations) and the habitability and signature of organics and biota. We find high diversity in the composition of soil samples even when collected in close proximity, the low abundances of detectable PAHs and amino acids and the presence of biota of all three domains of life with significant heterogeneity. An extraordinary variety of putative extremophiles was observed. A dominant factor seems to be soil porosity and lower clay-sized particle content. A protocol was developed for sterile sampling, contamination issues, and the diagnostics of biodiversity via PCR and DGGE analysis in soils and rocks samples. We compare campaign results from 2009-2013 campaigns in Utah and other sites to new measurements concerning: the comparison between remote sensing and in-situ measurements; the study of minerals; the detection of organics and signs of life.

  1. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Science.gov (United States)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-07-01

A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems are used in a field measurement campaign in which sodium aerosol dispersion experiments have been conducted as part of environmental impact studies related to the sodium cooled fast reactor. The sampling network contains 40 aerosol sampling units, each containing a custom-built sampling head and a wireless control network designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.

  2. Mean field methods for cortical network dynamics

    DEFF Research Database (Denmark)

    Hertz, J.; Lerchner, Alexander; Ahmadi, M.

    2004-01-01

We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual...

  3. Comparison of large networks with sub-sampling strategies

    Science.gov (United States)

    Ali, Waqar; Wegner, Anatol E.; Gaunt, Robert E.; Deane, Charlotte M.; Reinert, Gesine

    2016-07-01

    Networks are routinely used to represent large data sets, making the comparison of networks a tantalizing research question in many areas. Techniques for such analysis vary from simply comparing network summary statistics to sophisticated but computationally expensive alignment-based approaches. Most existing methods either do not generalize well to different types of networks or do not provide a quantitative similarity score between networks. In contrast, alignment-free topology based network similarity scores empower us to analyse large sets of networks containing different types and sizes of data. Netdis is such a score that defines network similarity through the counts of small sub-graphs in the local neighbourhood of all nodes. Here, we introduce a sub-sampling procedure based on neighbourhoods which links naturally with the framework of network comparisons through local neighbourhood comparisons. Our theoretical arguments justify basing the Netdis statistic on a sample of similar-sized neighbourhoods. Our tests on empirical and synthetic datasets indicate that often only 10% of the neighbourhoods of a network suffice for optimal performance, leading to a drastic reduction in computational requirements. The sampling procedure is applicable even when only a small sample of the network is known, and thus provides a novel tool for network comparison of very large and potentially incomplete datasets.
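Netdis itself counts small sub-graphs in node neighbourhoods; its exact statistic is beyond this record, so the following is a miniature stand-in for the neighbourhood-subsampling idea: pick a fraction of the nodes and compute a local sub-graph count (here, triangles through each picked node) instead of scanning every neighbourhood.

```python
import itertools
import random

def neighbourhood_triangles(adj, frac, rng):
    """Miniature neighbourhood subsampling: pick a random fraction of nodes and
    count the triangles inside each picked node's neighbourhood (its ego
    network minus the ego)."""
    picked = rng.sample(sorted(adj), max(1, int(frac * len(adj))))
    profile = []
    for v in picked:
        nbrs = sorted(adj[v])
        # a triangle through v is an edge between two of v's neighbours
        tri = sum(1 for a, b in itertools.combinations(nbrs, 2) if b in adj[a])
        profile.append(tri)
    return profile

# complete graph K6: every neighbourhood is a K5, so every count is C(5,2) = 10
adj = {i: {j for j in range(6) if j != i} for i in range(6)}
rng = random.Random(8)
profile = neighbourhood_triangles(adj, frac=0.5, rng=rng)
```

Comparing the distributions of such local counts between two networks gives an alignment-free similarity signal, and the sampling makes its cost scale with the sample rather than the whole network.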

  4. Sampling from complex networks with high community structures.

    Science.gov (United States)

    Salehi, Mostafa; Rabiee, Hamid R; Rajabi, Arezo

    2012-06-01

In this paper, we propose a novel link-tracing sampling algorithm, based on concepts from PageRank vectors, to sample from networks with high community structure. Our method has two phases: (1) sampling the closest nodes to the initial nodes by approximating personalized PageRank vectors, and (2) jumping to a new community by using PageRank vectors and unknown neighbors. Empirical studies on several synthetic and real-world networks show that the proposed method improves the performance of network sampling compared to the popular link-based sampling methods in terms of accuracy and visited communities.
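
Phase (1) above relies on personalized PageRank scoring nodes by closeness to a seed. A minimal power-iteration sketch (the approximation scheme and jump rule of the actual algorithm are not reproduced here; graph and parameters are illustrative):

```python
def personalized_pagerank(adj, seed, alpha=0.85, iters=60):
    """Personalized PageRank by power iteration: a random walk on the graph
    that restarts at `seed` with probability 1 - alpha at each step."""
    pr = {v: 0.0 for v in adj}
    pr[seed] = 1.0
    for _ in range(iters):
        nxt = {v: 0.0 for v in adj}
        nxt[seed] = 1.0 - alpha          # restart mass goes back to the seed
        for u in adj:
            share = alpha * pr[u] / len(adj[u])
            for v in adj[u]:
                nxt[v] += share          # walk mass spreads along links
        pr = nxt
    return pr

# two 5-cliques joined by a single bridge edge 4-5; seed sits in the first clique
adj = {i: {j for j in range(5) if j != i} for i in range(5)}
adj.update({i: {j for j in range(5, 10) if j != i} for i in range(5, 10)})
adj[4].add(5)
adj[5].add(4)
pr = personalized_pagerank(adj, seed=0)
```

Nodes in the seed's community score far higher than nodes across the bridge, which is what lets phase (1) exhaust the current community before phase (2) jumps to a new one.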

  5. Sample-Starved Large Scale Network Analysis

    Science.gov (United States)

    2016-05-05

    Foundational principles for large-scale inference on the structure of covariance, with applications to materials science: we developed general principles for ...concise but accessible format. These principles are applicable to large-scale complex network applications arising in genomics, connectomics, eco-informatics, ...available to estimate or detect patterns in the matrix. Subject terms: multivariate dependency structure; multivariate spatio-temporal prediction.

  6. Importance sampling in rate-sharing networks

    NARCIS (Netherlands)

    Lieshout, P.; Mandjes, M.

    2008-01-01

    We consider a network supporting elastic traffic, where the service capacity is shared among the various classes according to an alpha-fair sharing policy. Assuming Poisson arrivals and exponentially distributed service requirements for each class, the dynamics of the user population may be

  7. The African Field Epidemiology Network--networking for effective field epidemiology capacity building and service delivery.

    Science.gov (United States)

    Gitta, Sheba Nakacubo; Mukanga, David; Babirye, Rebecca; Dahlke, Melissa; Tshimanga, Mufuta; Nsubuga, Peter

    2011-01-01

    Networks are a catalyst for promoting common goals and objectives of their membership. Public Health networks in Africa are crucial, because of the severe resource limitations that nations face in dealing with priority public health problems. For a long time, networks have existed on the continent and globally, but many of these are disease-specific with a narrow scope. The African Field Epidemiology Network (AFENET) is a public health network established in 2005 as a non-profit networking alliance of Field Epidemiology and Laboratory Training Programs (FELTPs) and Field Epidemiology Training Programs (FETPs) in Africa. AFENET is dedicated to helping ministries of health in Africa build strong, effective and sustainable programs and capacity to improve public health systems by partnering with global public health experts. The Network's goal is to strengthen field epidemiology and public health laboratory capacity to contribute effectively to addressing epidemics and other major public health problems in Africa. AFENET currently networks 12 FELTPs and FETPs in sub-Saharan Africa with operations in 20 countries. AFENET has a unique tripartite working relationship with government technocrats from human health and animal sectors, academicians from partner universities, and development partners, presenting the Network with a distinct vantage point. Through the Network, African nations are making strides in strengthening their health systems. 
Members are able to: leverage resources to support field epidemiology and public health laboratory training and service delivery, notably in the areas of outbreak investigation and response and disease surveillance; bypass government bureaucracies that often hinder and frustrate development partners; and consolidate the efforts of different partners channelled through the FELTPs by networking graduates through alumni associations and calling on them to offer technical support in various public health capacities as the need arises.

  9. Mean field games for cognitive radio networks

    KAUST Repository

    Tembine, Hamidou

    2012-06-01

    In this paper we study mobility effects and power saving in cognitive radio networks using mean field games. We consider two types of users: primary and secondary users. When active, each secondary transmitter-receiver uses carrier sensing and is subject to a long-term energy constraint. We formulate the interaction between the primary user and a large number of secondary users as a hierarchical mean field game. In contrast to classical large-scale approaches based on stochastic geometry, percolation theory, and large random matrices, the proposed mean field framework allows one to describe the evolution of the density distribution and the associated performance metrics using coupled partial differential equations. We provide explicit formulas and algorithmic power management for both primary and secondary users. A complete characterization of the optimal distribution of energy and of the probability of success is given.

  10. A nonparametric significance test for sampled networks.

    Science.gov (United States)

    Elliott, Andrew; Leicht, Elizabeth; Whitmore, Alan; Reinert, Gesine; Reed-Tsochas, Felix

    2018-01-01

    Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein-protein interaction network. The software used in this paper is available for download at https://sites.google.com/site/elliottande/. The software is written in Python and uses the NetworkX library. ande.elliott@gmail.com or felix.reed-tsochas@sbs.ox.ac.uk. Supplementary data are available at Bioinformatics online.
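
    The degree-matched Monte Carlo null model can be sketched as follows (a simplified stand-in, not the authors' software; exact degree matching, the `tolerance` parameter, and the add-one p-value correction are illustrative assumptions): draw random seed lists whose node degrees match the observed minimal seed list, recompute the statistic of interest, and compare.

    ```python
    import random

    def degree_matched_seed_list(degrees, template, rng, tolerance=0):
        """Draw a random seed list whose degrees match those of `template`.

        `degrees` maps node -> degree; `tolerance` widens the allowed
        degree difference when exact matches are scarce."""
        sample, used = [], set()
        for seed in template:
            target = degrees[seed]
            pool = [v for v in degrees
                    if v not in used and abs(degrees[v] - target) <= tolerance]
            if not pool:
                raise ValueError("no degree-matched candidate available")
            pick = rng.choice(pool)
            used.add(pick)
            sample.append(pick)
        return sample

    def empirical_p_value(statistic, observed_seeds, degrees, n_null, rng):
        """One-sided Monte Carlo p-value under the degree-matched null."""
        observed = statistic(observed_seeds)
        null = [statistic(degree_matched_seed_list(degrees, observed_seeds, rng))
                for _ in range(n_null)]
        # add-one correction keeps the estimate away from an exact zero
        return (1 + sum(s >= observed for s in null)) / (n_null + 1)
    ```

    In the paper's setting `statistic` would build the subnetwork from the seed list and evaluate a topological summary; here it is left abstract so the null-model machinery stands on its own.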

  11. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  12. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneous with remotely sensed images of the sampling region. The locations of these sampling sites were selected mainly on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement providing the most efficient representation of the studied area. In this analysis a method for optimizing sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is included in the proposed model to keep the approach practical. 
The second and third constraints are considered to guarantee that the collected samples from each soil texture category

  13. New Survey Questions and Estimators for Network Clustering with Respondent-Driven Sampling Data

    CERN Document Server

    Verdery, Ashton M; Siripong, Nalyn; Abdesselam, Kahina; Bauldry, Shawn

    2016-01-01

    Respondent-driven sampling (RDS) is a popular method for sampling hard-to-survey populations that leverages social network connections through peer recruitment. While RDS is most frequently applied to estimate the prevalence of infections and risk behaviors of interest to public health, like HIV/AIDS or condom use, it is rarely used to draw inferences about the structural properties of social networks among such populations because it does not typically collect the necessary data. Drawing on recent advances in computer science, we introduce a set of data collection instruments and RDS estimators for network clustering, an important topological property that has been linked to a network's potential for diffusion of information, disease, and health behaviors. We use simulations to explore how these estimators, originally developed for random walk samples of computer networks, perform when applied to RDS samples with characteristics encountered in realistic field settings that depart from random walks. In partic...

  14. Field sampling scheme optimization using simulated annealing

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-10-01

    Full Text Available: Thorough discussions on absorption features of hydrothermal alteration minerals can be found in Clark (1999); Hapke (1993); Salisbury et al. (1991); Van der Meer (2004). Various mappings of minerals using hyperspectral data can be found in Crósta et al. (1998); Kruse & Boardman (1997); Rowan et al. (2000); Sabins (1999); Vaughan et al. (2003). Surface sampling...
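
    A generic simulated-annealing loop for choosing field sampling sites might look like this (a hedged sketch: the mean-distance coverage objective, the single-site swap proposal, and the geometric cooling schedule are illustrative choices, not the paper's actual spectral-angle-based objective):

    ```python
    import math
    import random

    def coverage_cost(sites, targets):
        """Mean distance from each target point to its nearest sampling site."""
        return sum(min(math.dist(t, s) for s in sites) for t in targets) / len(targets)

    def anneal_sampling_scheme(candidates, targets, k, rng,
                               steps=2000, t0=1.0, cooling=0.995, initial=None):
        """Simulated annealing over k-subsets of candidate sampling locations."""
        current = list(initial) if initial is not None else rng.sample(candidates, k)
        best, best_cost = list(current), coverage_cost(current, targets)
        cost, temp = best_cost, t0
        for _ in range(steps):
            # propose swapping one chosen site for an unused candidate
            proposal = list(current)
            out_idx = rng.randrange(k)
            unused = [c for c in candidates if c not in current]
            proposal[out_idx] = rng.choice(unused)
            new_cost = coverage_cost(proposal, targets)
            # always accept improvements; accept worsenings with Boltzmann probability
            if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
                current, cost = proposal, new_cost
                if cost < best_cost:
                    best, best_cost = list(current), cost
            temp *= cooling
        return best, best_cost
    ```

    The best-so-far scheme is tracked separately from the current state, so the returned cost never exceeds that of the starting configuration even when the chain wanders uphill at high temperature.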

  15. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  16. Exploring phylogenetic hypotheses via Gibbs sampling on evolutionary networks

    Directory of Open Access Journals (Sweden)

    Yun Yu

    2016-11-01

    Full Text Available. Background: Phylogenetic networks are leaf-labeled graphs used to model and display complex evolutionary relationships that do not fit a single tree. There are two classes of phylogenetic networks: data-display networks and evolutionary networks. While data-display networks are very commonly used to explore data, they are not amenable to incorporating probabilistic models of gene and genome evolution. Evolutionary networks, on the other hand, can accommodate such probabilistic models, but they are not commonly used for exploration. Results: In this work, we show how to turn evolutionary networks into a tool for statistical exploration of phylogenetic hypotheses via a novel application of Gibbs sampling. We demonstrate the utility of our work on two recently available genomic data sets, one from a group of mosquitos and the other from a group of modern birds. We demonstrate that our method allows the use of evolutionary networks not only for explicit modeling of reticulate evolutionary histories, but also for exploring conflicting treelike hypotheses. We further demonstrate the performance of the method on simulated data sets, where the true evolutionary histories are known. Conclusion: We introduce an approach to explore phylogenetic hypotheses over evolutionary phylogenetic networks using Gibbs sampling. The hypotheses could involve reticulate and non-reticulate evolutionary processes simultaneously, as we illustrate on the mosquito and modern bird genomic data sets.

  17. Learning algorithms for feedforward networks based on finite samples

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V.; Protopopescu, V.; Mann, R.C.; Oblow, E.M.; Iyengar, S.S.

    1994-09-01

    Two classes of convergent algorithms for learning continuous functions (and also regression functions) that are represented by feedforward networks, are discussed. The first class of algorithms, applicable to networks with unknown weights located only in the output layer, is obtained by utilizing the potential function methods of Aizerman et al. The second class, applicable to general feedforward networks, is obtained by utilizing the classical Robbins-Monro style stochastic approximation methods. Conditions relating the sample sizes to the error bounds are derived for both classes of algorithms using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
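
    The Robbins-Monro flavour of these algorithms can be illustrated for the simplest case, unknown weights in the output layer only, with step sizes gamma_n = c/n satisfying the classical conditions (the step sizes sum to infinity while their squares sum to a finite value). The feature functions, learning-rate constant, and `train_output_layer` name below are illustrative assumptions, not the paper's construction:

    ```python
    def train_output_layer(samples, features, lr0=0.5, passes=20):
        """Robbins-Monro style SGD for the output-layer weights of a
        fixed-feature network: model(x) = sum_j w_j * features[j](x)."""
        w = [0.0] * len(features)
        step = 0
        for _ in range(passes):
            for x, y in samples:
                step += 1
                gamma = lr0 / step          # step sizes: sum infinite, sum of squares finite
                phi = [f(x) for f in features]
                err = sum(wj * pj for wj, pj in zip(w, phi)) - y
                # stochastic gradient step on the squared error for this sample
                w = [wj - gamma * err * pj for wj, pj in zip(w, phi)]
        return w
    ```

    With hidden-layer outputs treated as fixed basis functions, the problem is linear in the unknown weights, which is exactly the situation where this style of stochastic approximation converges under mild conditions.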

  18. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Directory of Open Access Journals (Sweden)

    Ashton M Verdery

    Full Text Available This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  19. Network Structure and Biased Variance Estimation in Respondent Driven Sampling.

    Science.gov (United States)

    Verdery, Ashton M; Mouw, Ted; Bauldry, Shawn; Mucha, Peter J

    2015-01-01

    This paper explores bias in the estimation of sampling variance in Respondent Driven Sampling (RDS). Prior methodological work on RDS has focused on its problematic assumptions and the biases and inefficiencies of its estimators of the population mean. Nonetheless, researchers have given only slight attention to the topic of estimating sampling variance in RDS, despite the importance of variance estimation for the construction of confidence intervals and hypothesis tests. In this paper, we show that the estimators of RDS sampling variance rely on a critical assumption that the network is First Order Markov (FOM) with respect to the dependent variable of interest. We demonstrate, through intuitive examples, mathematical generalizations, and computational experiments that current RDS variance estimators will always underestimate the population sampling variance of RDS in empirical networks that do not conform to the FOM assumption. Analysis of 215 observed university and school networks from Facebook and Add Health indicates that the FOM assumption is violated in every empirical network we analyze, and that these violations lead to substantially biased RDS estimators of sampling variance. We propose and test two alternative variance estimators that show some promise for reducing biases, but which also illustrate the limits of estimating sampling variance with only partial information on the underlying population social network.

  20. Improving Sample Estimate Reliability and Validity with Linked Ego Networks

    CERN Document Server

    Lu, Xin

    2012-01-01

    Respondent-driven sampling (RDS) is currently widely used in public health, especially for the study of hard-to-access populations such as injecting drug users and men who have sex with men. The method works like a snowball sample but can, provided that some assumptions are met, generate unbiased population estimates. However, recent studies have shown that traditional RDS estimators are likely to produce large variance and estimation error. To improve the performance of traditional estimators, we propose a method to generate estimates with ego network data collected by RDS. By simulating RDS processes on an empirical human social network with known population characteristics, we show that the precision of estimates of the composition of network link types is greatly improved with ego network data. The proposed estimator for population characteristics shows a clear advantage over traditional RDS estimators, and most importantly, the new method exhibits strong robustness to the recruitment preference of res...

  1. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available: Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies, and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  2. The African Field Epidemiology Network - Networking for effective ...

    African Journals Online (AJOL)

    Networks are a catalyst for promoting common goals and objectives of their membership. Public Health networks in Africa are crucial, because of the severe resource limitations that nations face in dealing with priority public health problems. For a long time, networks have existed on the continent and globally, but many of ...

  3. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, and the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree, and average clustering coefficient, are studied. Similar conclusions are reached with all three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Thirdly, all the degree distributions of the subnets are slightly biased toward the high-degree side; however, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, obvious characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
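
    The no-retracing (NR) idea can be sketched in a few lines (an interpretation of the strategy described above, not the authors' code; the dead-end fallback is an assumption): at each step the walker avoids the node it just came from, unless retracing is the only way out.

    ```python
    import random

    def no_retracing_walk(adj, start, steps, rng):
        """Random walk that never immediately returns along the edge it just
        used, unless the walker is stuck at a degree-one node."""
        walk = [start]
        previous = None
        for _ in range(steps):
            here = walk[-1]
            choices = [v for v in adj[here] if v != previous]
            if not choices:               # dead end: retracing is the only way out
                choices = list(adj[here])
            nxt = rng.choice(choices)
            previous = here
            walk.append(nxt)
        return walk

    def subnet_from_walk(adj, walk):
        """Induced subgraph (as an adjacency dict) over the visited nodes."""
        visited = set(walk)
        return {v: {u for u in adj[v] if u in visited} for v in visited}
    ```

    Suppressing immediate backtracking reduces path overlap, which is the mechanism the abstract credits for the NR strategy's better clustering-coefficient estimates.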

  4. Random Network Coding over Composite Fields

    DEFF Research Database (Denmark)

    Geil, Olav; Lucani Rötter, Daniel Enrique

    2017-01-01

    Random network coding is a method that achieves multicast capacity asymptotically for general networks [1, 7]. In this approach, vertices in the network randomly and linearly combine incoming information in a distributed manner before forwarding it through their outgoing edges. To ensure success...

  5. Parallel importance sampling in conditional linear Gaussian networks

    DEFF Research Database (Denmark)

    Salmerón, Antonio; Ramos-López, Darío; Borchani, Hanen

    2015-01-01

    In this paper we analyse the problem of probabilistic inference in CLG networks when evidence comes in streams. In such situations, fast and scalable algorithms, able to provide accurate responses in a short time, are required. We consider the instantiation of variational inference and importance sampling, two well-known tools for probabilistic inference, to the CLG case. The experimental results over synthetic networks show how a parallel version of importance sampling, and more precisely evidence weighting, is a promising scheme, as it is accurate and scales up with respect to available computing...
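
    Evidence weighting on a toy two-node conditional linear Gaussian network can be sketched as follows (an illustrative example, not the paper's networks: the model X ~ N(0, 1), Y | X ~ N(2X + 1, 1) is an assumption chosen so that the exact posterior mean E[X | Y = y] = 2(y - 1)/5 is known in closed form):

    ```python
    import math
    import random

    def gaussian_pdf(x, mean, sd):
        """Density of a univariate normal distribution."""
        return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

    def evidence_weighting(n, y_obs, rng):
        """Estimate E[X | Y = y_obs] in the CLG network X ~ N(0,1),
        Y | X ~ N(2X + 1, 1) by likelihood (evidence) weighting:
        sample X from its prior and weight by the likelihood of the evidence."""
        num = den = 0.0
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)                        # draw from the prior
            w = gaussian_pdf(y_obs, 2.0 * x + 1.0, 1.0)    # weight by the evidence
            num += w * x
            den += w
        return num / den
    ```

    Because each weighted sample is independent, the loop parallelizes trivially: chunks computed with independent random streams can be combined by summing their numerators and denominators, which is the kind of structure a parallel evidence-weighting scheme exploits.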

  6. On Field Size and Success Probability in Network Coding

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh; Thomsen, Casper

    2008-01-01

    Using tools from algebraic geometry and Gröbner basis theory we solve two problems in network coding. First we present a method to determine the smallest field size for which linear network coding is feasible. Second we derive improved estimates on the success probability of random linear network...

  7. NEON terrestrial field observations: designing continental scale, standardized sampling

    Science.gov (United States)

    R. H. Kao; C.M. Gibson; R. E. Gallery; C. L. Meier; D. T. Barnett; K. M. Docherty; K. K. Blevins; P. D. Travers; E. Azuaje; Y. P. Springer; K. M. Thibault; V. J. McKenzie; M. Keller; L. F. Alves; E. L. S. Hinckley; J. Parnell; D. Schimel

    2012-01-01

    Rapid changes in climate and land use and the resulting shifts in species distributions and ecosystem functions have motivated the development of the National Ecological Observatory Network (NEON). Integrating across spatial scales from ground sampling to remote sensing, NEON will provide data for users to address ecological responses to changes in climate, land use,...

  8. Direct sampling of electric-field vacuum fluctuations

    National Research Council Canada - National Science Library

    Riek, C; Seletskiy, D V; Moskalenko, A S; Schmidt, J F; Krauspe, P; Eckart, S; Eggert, S; Burkard, G; Leitenstorfer, A

    2015-01-01

    .... The ground-state electric-field variance is inversely proportional to the four-dimensional space-time volume, which we sampled electro-optically with tightly focused laser pulses lasting a few femtoseconds...

  9. Astronaut Neil Armstrong studies rock samples during geological field trip

    Science.gov (United States)

    1969-01-01

    Astronaut Neil Armstrong, commander of the Apollo 11 lunar landing mission, studies rock samples during a geological field trip to the Quitman Mountains area near the Fort Quitman ruins in far west Texas.

  10. Modelling nanofluidic field amplified sample stacking with inhomogeneous surface charge

    Science.gov (United States)

    McCallum, Christopher; Pennathur, Sumita

    2015-11-01

    Nanofluidic technology has exceptional applications as a platform for biological sample preconcentration, which will allow for an effective electronic detection method of low concentration analytes. One such preconcentration method is field amplified sample stacking, a capillary electrophoresis technique that utilizes large concentration differences to generate high electric field gradients, causing the sample of interest to form a narrow, concentrated band. Field amplified sample stacking has been shown to work well at the microscale, with models and experiments confirming expected behavior. However, nanofluidics allows for further concentration enhancement due to focusing of the sample ions toward the channel center by the electric double layer. We have developed a two-dimensional model that can be used for both micro- and nanofluidics, fully accounting for the electric double layer. This model has been used to investigate even more complex physics such as the role of inhomogeneous surface charge.

  11. Sample EP Flow Analysis of Severely Damaged Networks

    Energy Technology Data Exchange (ETDEWEB)

    Werley, Kenneth Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCown, Andrew William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-12

    These are slides for a presentation at the working group meeting of the WESC SREMP Software Product Integration Team on sample EP flow analysis of severely damaged networks. The following topics are covered: ERCOT EP Transmission Model; Zoomed in to Houston and Overlaying StreetAtlas; EMPACT Solve/Dispatch/Shedding Options; QACS BaseCase Power Flow Solution; 3 Substation Contingency; Gen. & Load/100 Optimal Dispatch; Dispatch Results; Shed Load for Low V; Network Damage Summary; Estimated Service Areas (Potential); Estimated Outage Areas (potential).

  12. Neural network structure for navigation using potential fields

    Science.gov (United States)

    Plumer, Edward S.

    1992-01-01

    A hybrid-network method for obstacle avoidance in the truck-backing system of D. Nguyen and B. Widrow (1989) is presented. A neural network technique for vehicle navigation and control in the presence of obstacles has been developed. A potential function which peaks at the surface of obstacles and has its minimum at the proper vehicle destination is computed using a network structure. The field is guaranteed not to have spurious local minima and does not have the property of flattening-out far from the goal. A feedforward neural network is used to control the steering of the vehicle using local field information. The network is trained in an obstacle-free space to follow the negative gradient of the field, after which the network is able to control and navigate the truck to its target destination in a space of obstacles which may be stationary or movable.
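
    The potential-field idea can be sketched without a neural network (in the paper a feedforward network is trained to follow this gradient; here the quadratic attractive well, the inverse-square repulsive peaks, and the numerical gradient are illustrative assumptions):

    ```python
    import math

    def potential(pos, goal, obstacles, k_att=1.0, k_rep=1.0):
        """Attractive quadratic well at the goal plus repulsive peaks at obstacles."""
        gx, gy = goal
        x, y = pos
        u = 0.5 * k_att * ((x - gx) ** 2 + (y - gy) ** 2)
        for ox, oy in obstacles:
            d2 = (x - ox) ** 2 + (y - oy) ** 2
            u += k_rep / (d2 + 1e-6)      # peaks sharply at each obstacle
        return u

    def descend(start, goal, obstacles, step=0.05, iters=500, eps=1e-4):
        """Follow the negative numerical gradient of the potential field."""
        x, y = start
        for _ in range(iters):
            dx = (potential((x + eps, y), goal, obstacles)
                  - potential((x - eps, y), goal, obstacles)) / (2 * eps)
            dy = (potential((x, y + eps), goal, obstacles)
                  - potential((x, y - eps), goal, obstacles)) / (2 * eps)
            norm = math.hypot(dx, dy)
            if norm < 1e-9:               # reached a stationary point
                break
            x, y = x - step * dx / norm, y - step * dy / norm
        return x, y
    ```

    The field peaks at obstacles and has its minimum at the destination, mirroring the construction described above; the paper's stronger guarantees (no spurious local minima, no flattening far from the goal) require the network-computed field rather than this simple superposition.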

  13. Random field Ising model and community structure in complex networks

    Science.gov (United States)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

    We propose a method to determine the community structure of a complex network. In this method the ground-state problem of a ferromagnetic random field Ising model is considered on the network with the magnetic field B_s = +∞, B_t = -∞, and B_i = 0 for i ≠ s, t, for a node pair s and t. The ground-state problem is equivalent to the so-called maximum flow problem, which can be solved exactly with the help of a combinatorial optimization algorithm. The community structure is then identified from the ground-state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
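
    The reduction to maximum flow can be sketched with a plain Edmonds-Karp solver (the unit capacities and the small example are assumptions for illustration; the paper applies an exact combinatorial solver to the ground-state problem): after the flow saturates, the nodes still reachable from s in the residual network form s's side of a minimum cut, i.e. s's spin domain.

    ```python
    from collections import deque

    def bfs_augment(cap, flow, s, t):
        """Find a shortest augmenting path from s to t; return it, or None."""
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in cap.get(u, {}):
                if v not in parent and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    if v == t:
                        path = [t]
                        while parent[path[-1]] is not None:
                            path.append(parent[path[-1]])
                        return path[::-1]
                    q.append(v)
        return None

    def min_cut_side(edges, s, t):
        """Edmonds-Karp max flow; return the node set on s's side of a min cut.

        `edges` is an iterable of (u, v, capacity) for an undirected graph."""
        cap, flow = {}, {}
        for u, v, c in edges:
            for a, b in ((u, v), (v, u)):
                cap.setdefault(a, {})[b] = cap.get(a, {}).get(b, 0) + c
        for u in cap:
            flow[u] = {v: 0 for v in cap[u]}
        while True:
            path = bfs_augment(cap, flow, s, t)
            if path is None:
                break
            bottleneck = min(cap[u][v] - flow[u][v] for u, v in zip(path, path[1:]))
            for u, v in zip(path, path[1:]):
                flow[u][v] += bottleneck
                flow[v][u] -= bottleneck
        # nodes reachable from s in the residual network form s's community
        side, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in cap[u]:
                if v not in side and cap[u][v] - flow[u][v] > 0:
                    side.add(v)
                    q.append(v)
        return side
    ```

    On two triangles joined by a single bridge, pinning s in one triangle and t in the other cuts the bridge and recovers the two communities.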

  14. Magnetostatic modes in ferromagnetic samples with inhomogeneous internal fields

    Science.gov (United States)

    Arias, Rodrigo

    2015-03-01

    Magnetostatic modes in ferromagnetic samples are very well characterized and understood in samples with uniform internal magnetic fields. More recently, interest has shifted to the study of magnetization modes in ferromagnetic samples with inhomogeneous internal fields. The present work shows that, under the magnetostatic approximation and for samples of arbitrary shape and/or arbitrary inhomogeneous internal magnetic fields, the modes can be classified as elliptic or hyperbolic, and their associated frequency spectrum can be delimited. This results from the analysis of the character of the second-order partial differential equation for the magnetostatic potential under these general conditions. In general, a sample with an inhomogeneous internal field and at a given frequency may have regions of elliptic and hyperbolic character separated by a boundary. In the elliptic regions the magnetostatic modes have a smooth monotonic character (generally decaying from the surfaces (a ``tunneling'' behavior)) and in hyperbolic regions an oscillatory wave-like character. A simple local criterion distinguishes hyperbolic from elliptic regions: the sign of a susceptibility parameter. This study shows that one may control magnetostatic modes to some extent via external fields or geometry. R.E.A. acknowledges Financiamiento Basal para Centros Cientificos y Tecnologicos de Excelencia under Project No. FB 0807 (Chile), Grant No. ICM P10-061-F by Fondo de Innovacion para la Competitividad-MINECON, and Proyecto Fondecyt 1130192.

  15. Correlation-based similarity networks for unequally sampled data

    Science.gov (United States)

    Rehfeld, Kira; Donges, Jonathan F.; Marwan, Norbert; Kurths, Jürgen

    2010-05-01

    Complex networks present a promising and increasingly popular paradigm for the description and analysis of interactions within complex spatially extended systems in the geosciences. Typically, a network is constructed by thresholding a similarity matrix which is based on a set of time series representing the system's dynamics at different locations. In geoscientific applications such as paleoclimate records derived from ice and sediment cores or speleothems, however, researchers are inherently faced with irregularly and heterogeneously sampled time series. For this type of data, standard similarity measures, e.g., Pearson correlation or mutual information, cannot be applied directly. Most attention has been placed on frequency-based methods focusing on the derivation of power spectra, such as the Lomb-Scargle periodogram. In the context of paleoscientific network research, correlation estimation is of high interest, but available methods require interpolation prior to analysis. Here we present a generalization of the Pearson correlation coefficient adapted to irregularly sampled time series and show that it has advantages over the standard approach. After characterizing the method on model systems, we extend our scope to real-world data and show that it offers new options for network research and provides novel insights into the functioning of the Earth system.
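The interpolation-free idea behind such generalizations can be sketched as a kernel-weighted correlation: every observation pair contributes with a weight that decays with its time lag. The Gaussian kernel and the bandwidth h below are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def kernel_correlation(tx, x, ty, y, h):
    # Standardize both series, then weight each pair (x_i, y_j) by a
    # Gaussian kernel of the time lag tx[i] - ty[j]; no resampling needed.
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    w = np.exp(-0.5 * ((tx[:, None] - ty[None, :]) / h) ** 2)
    return float((w * np.outer(x, y)).sum() / w.sum())

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 50, 200))        # irregular sampling times
r = kernel_correlation(t, np.sin(t), t, np.sin(t), h=0.2)
```

For identical, co-sampled signals the estimate is close to 1, as expected; with a small bandwidth only near-coincident observations contribute.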

  16. Graph animals, subgraph sampling and motif search in large networks

    CERN Document Server

    Baskerville, Kim; Paczuski, Maya

    2007-01-01

    We generalize a sampling algorithm for lattice animals (connected clusters on a regular lattice) to a Monte Carlo algorithm for `graph animals', i.e. connected subgraphs in arbitrary networks. As with the algorithm in [N. Kashtan et al., Bioinformatics 20, 1746 (2004)], it provides a weighted sample, but the computation of the weights is much faster (linear in the size of subgraphs, instead of super-exponential). This allows subgraphs with up to ten or more nodes to be sampled with very high statistics, from arbitrarily large networks. Using this together with a heuristic algorithm for rapidly classifying isomorphic graphs, we present results for two protein interaction networks obtained using the TAP high throughput method: one of Escherichia coli with 230 nodes and 695 links, and one for yeast (Saccharomyces cerevisiae) with roughly ten times more nodes and links. We find in both cases that most connected subgraphs are strong motifs (Z-scores >10) or anti-motifs (Z-scores <-10) when the null model is the...

  17. Low and High-Frequency Field Potentials of Cortical Networks ...

    Science.gov (United States)

    Neural networks grown on microelectrode arrays (MEAs) have become an important, high-content in vitro assay for assessing neuronal function. MEA experiments typically examine high-frequency (HF, >200 Hz) spikes and bursts, which can be used to discriminate between different pharmacological agents/chemicals. However, normal brain activity is additionally composed of integrated low-frequency (0.5-100 Hz) field potentials (LFPs), which are filtered out of MEA recordings. The objective of this study was to characterize the relationship between HF and LFP neural network signals, and to assess the relative sensitivity of LFPs to selected neurotoxicants. Rat primary cortical cultures were grown on glass, single-well MEA chips. Spontaneous activity was sampled at 25 kHz and recorded for 5 min (Multi-Channel Systems) from mature networks (14 days in vitro). HF (spike, mean firing rate, MFR) and LF (power spectrum, amplitude) components were extracted from each network and served as its baseline (BL). Next, each chip was treated with either 1) a positive control, bicuculline (BIC, 25 μM) or domoic acid (DA, 0.3 μM), 2) a negative control, acetaminophen (ACE, 100 μM) or glyphosate (GLY, 100 μM), 3) a solvent control (H2O or DMSO:EtOH), or 4) a neurotoxicant (carbaryl, CAR 5, 30 μM; lindane, LIN 1, 10 μM; permethrin, PERM 25, 50 μM; triadimefon, TRI 5, 65 μM). Post treatment, 5 min of spontaneous activity was recorded and analyzed. As expected posit

  18. Research collaboration in groups and networks: differences across academic fields.

    Science.gov (United States)

    Kyvik, Svein; Reymert, Ingvild

    2017-01-01

    The purpose of this paper is to give a macro-picture of collaboration in research groups and networks across all academic fields in Norwegian research universities, and to examine the relative importance of membership in groups and networks for individual publication output. To our knowledge, this is a new approach, which may provide valuable information on collaborative patterns in a particular national system and is of clear relevance to other national university systems. At the system level, conducting research in groups and in networks is equally important, but there are large differences between academic fields. The research group is clearly most important in the field of medicine and health, while undertaking research in an international network is most important in the natural sciences. Membership in a research group and active participation in international networks are likely to enhance publication productivity and the quality of research.

  19. Long-Term Ecological Monitoring Field Sampling Plan for 2007

    Energy Technology Data Exchange (ETDEWEB)

    T. Haney

    2007-07-31

    This field sampling plan describes the field investigations planned for the Long-Term Ecological Monitoring Project at the Idaho National Laboratory Site in 2007. This plan and the Quality Assurance Project Plan for Waste Area Groups 1, 2, 3, 4, 5, 6, 7, 10, and Removal Actions constitute the sampling and analysis plan supporting long-term ecological monitoring sampling in 2007. The data collected under this plan will become part of the long-term ecological monitoring data set that is being collected annually. The data will be used to determine the requirements for the subsequent long-term ecological monitoring. This plan guides the 2007 investigations, including sampling, quality assurance, quality control, analytical procedures, and data management. As such, this plan will help to ensure that the resulting monitoring data will be scientifically valid, defensible, and of known and acceptable quality.

  20. Predicting local field potentials with recurrent neural networks.

    Science.gov (United States)

    Kim, Louis; Harer, Jacob; Rangamani, Akshay; Moran, James; Parks, Philip D; Widge, Alik; Eskandar, Emad; Dougherty, Darin; Chin, Sang Peter

    2016-08-01

    We present a recurrent neural network using LSTM (Long Short-Term Memory) units that is capable of modeling and predicting local field potentials. We train and test the network on real data recorded from epilepsy patients. We construct networks that predict multi-channel LFPs 1, 10, and 100 milliseconds forward in time. Our results show that prediction using LSTM outperforms regression when predicting 10 and 100 milliseconds forward in time.
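The regression baseline this record compares against can be sketched as a linear autoregressive predictor. The order, horizon, and sinusoidal test signal below are stand-ins for illustration, not the patients' LFP data.

```python
import numpy as np

def fit_ar(series, order, horizon):
    # Least-squares fit of a linear predictor: the window
    # series[i:i+order] is regressed onto series[i+order+horizon-1].
    n = len(series)
    X = np.array([series[i:i + order] for i in range(n - order - horizon + 1)])
    y = series[order + horizon - 1:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

t = np.arange(2000)
s = np.sin(0.07 * t)                  # stand-in for one band-limited LFP channel
coef = fit_ar(s, order=8, horizon=1)
pred = s[-9:-1] @ coef                # predict the last sample from the 8 before it
```

On a noiseless sinusoid the linear predictor is exact; the paper's point is that on real multi-channel LFPs the LSTM wins at longer horizons.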

  1. Astronauts Armstrong and Aldrin study rock samples during field trip

    Science.gov (United States)

    1969-01-01

    Astronaut Neil Armstrong, commander of the Apollo 11 lunar landing mission, and Astronaut Edwin Aldrin, Lunar module pilot for Apollo 11, study rock samples during a geological field trip to the Quitman Mountains area near the Fort Quitman ruins in far west Texas.

  2. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.

  3. The Distributed Unattended Networked Sensors Field Experiment

    National Research Council Canada - National Science Library

    Sim, Leng

    2000-01-01

    .... Army Research Laboratory (ARL) regularly conducts field experiments to demonstrate and evaluate real-time performance of the acoustic sensor test bed and to collect signature data of new targets for an ARL acoustic and seismic database...

  4. The wireshark field guide analyzing and troubleshooting network traffic

    CERN Document Server

    Shimonski, Robert

    2013-01-01

    The Wireshark Field Guide provides hackers, pen testers, and network administrators with practical guidance on capturing and interactively browsing computer network traffic. Wireshark is the world's foremost network protocol analyzer, with a rich feature set that includes deep inspection of hundreds of protocols, live capture, offline analysis and many other features. The Wireshark Field Guide covers the installation, configuration and use of this powerful multi-platform tool. The book gives readers the hands-on skills to be more productive with Wireshark as they drill

  5. Daily temporal structure in African savanna flower visitation networks and consequences for network sampling.

    Science.gov (United States)

    Baldock, Katherine C R; Memmott, Jane; Ruiz-Guajardo, Juan Carlos; Roze, Denis; Stone, Graham N

    2011-03-01

    Ecological interaction networks are a valuable approach to understanding plant-pollinator interactions at the community level. Highly structured daily activity patterns are a feature of the biology of many flower visitors, particularly provisioning female bees, which often visit different floral sources at different times. Such temporal structure implies that presence/absence and relative abundance of specific flower-visitor interactions (links) in interaction networks may be highly sensitive to the daily timing of data collection. Further, relative timing of interactions is central to their possible role in competition or facilitation of seed set among coflowering plants sharing pollinators. To date, however, no study has examined the network impacts of daily temporal variation in visitor activity at a community scale. Here we use temporally structured sampling to examine the consequences of daily activity patterns upon network properties using fully quantified flower-visitor interaction data for a Kenyan savanna habitat. Interactions were sampled at four sequential three-hour time intervals between 06:00 and 18:00, across multiple seasonal time points for two sampling sites. In all data sets the richness and relative abundance of links depended critically on when during the day visitation was observed. Permutation-based null modeling revealed significant temporal structure across daily time intervals at three of the four seasonal time points, driven primarily by patterns in bee activity. This sensitivity of network structure shows the need to consider daily time in network sampling design, both to maximize the probability of sampling links relevant to plant reproductive success and to facilitate appropriate interpretation of interspecific relationships. Our data also suggest that daily structuring at a community level could reduce indirect competitive interactions when coflowering plants share pollinators, as is commonly observed during flowering in highly
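The permutation-based null modeling described in this record can be sketched with a simple statistic: count the links observed in exactly one time interval, then shuffle interval labels over observation events to build the null distribution. The synthetic events and the statistic below are illustrative, not the study's data or exact test.

```python
import random

def single_interval_links(events):
    # events: list of (link, interval) observations; count links whose
    # observations all fall in a single daily time interval
    bins = {}
    for link, interval in events:
        bins.setdefault(link, set()).add(interval)
    return sum(1 for s in bins.values() if len(s) == 1)

random.seed(42)
# synthetic data: 12 plant-visitor links, each active in one of 4 intervals
events = [(l, l % 4) for l in range(12) for _ in range(5)]
obs = single_interval_links(events)

links = [l for l, _ in events]
labels = [iv for _, iv in events]
null = []
for _ in range(999):                  # null model: shuffle interval labels
    random.shuffle(labels)
    null.append(single_interval_links(list(zip(links, labels))))
p = (1 + sum(n >= obs for n in null)) / 1000
```

A small p indicates more temporal structuring of links than expected if visits were spread indifferently over the day.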

  6. Student Learning Networks on Residential Field Courses: Does Size Matter?

    Science.gov (United States)

    Langan, A. Mark; Cullen, W. Rod; Shuker, David M.

    2008-01-01

    This article describes learner and tutor reports of a learning network that formed during the completion of investigative projects on a residential field course. Staff and students recorded project-related interactions, who they were with and how long they lasted over four phases during the field course. An enquiry based learning format challenged…

  7. Waferscale assembly of Field-Aligned nanotube Networks (FANs)

    DEFF Research Database (Denmark)

    Dimaki, Maria; Bøggild, Peter

    2006-01-01

    frequencies of the electrical field used to attract the nanotubes to the electrodes. Preliminary data of response to visible light irradiation as well as changes in the humidity indicate that the field aligned networks could be used as sensor components that may well integrate with CMOS due to mild assembly...

  8. Mean Field Theory for Nonequilibrium Network Reconstruction

    DEFF Research Database (Denmark)

    Roudi, Yasser; Hertz, John

    2011-01-01

    , as an example, the question of recovering the interactions in an asymmetrically-coupled, synchronously-updated SK model. We derive an exact iterative inversion algorithm and develop efficient approximations based on dynamical mean-field and TAP equations that express the interactions in terms of equal...

  9. Serum Dried Samples to Detect Dengue Antibodies: A Field Study

    Directory of Open Access Journals (Sweden)

    Angelica Maldonado-Rodríguez

    2017-01-01

    Background. Dried blood and serum samples are useful resources for detecting antiviral antibodies. The conditions for elution of the sample need to be optimized for each disease. Dengue is a widespread disease in Mexico which requires continuous surveillance. In this study, we standardized and validated a protocol for the specific detection of dengue antibodies from dried serum spots (DSSs). Methods. Paired serum and DSS samples from 66 suspected cases of dengue were collected in a clinic in Veracruz, Mexico. Samples were sent to our laboratory, where the conditions for optimal elution of DSSs were established. The presence of anti-dengue antibodies was determined in the paired samples. Results. DSS elution conditions were standardized as follows: 1 h at 4°C in 200 µl of DNase-, RNase-, and protease-free PBS (1x). The optimal volume of DSS eluate to be used in the IgG assay was 40 µl. Sensitivity of 94%, specificity of 93.3%, and a kappa concordance of 0.87 were obtained when comparing the anti-dengue reactivity between DSSs and serum samples. Conclusion. DSS samples are useful for detecting anti-dengue IgG antibodies in the field.

  10. Optimal sampling and sample preparation for NIR-based prediction of field scale soil properties

    Science.gov (United States)

    Knadel, Maria; Peng, Yi; Schelde, Kirsten; Thomsen, Anton; Deng, Fan; Humlekrog Greve, Mogens

    2013-04-01

    The representation of local soil variability with acceptable accuracy and precision is dependent on the spatial sampling strategy and can vary with a soil property. Therefore, soil mapping can be expensive when conventional soil analyses are involved. Visible near infrared spectroscopy (vis-NIR) is considered a cost-effective method due to labour savings and relative accuracy. However, savings may be offset by the costs associated with the number of samples and sample preparation. The objective of this study was to find the optimal way to predict field scale total organic carbon (TOC) and texture. To optimize the vis-NIR calibrations, the effects of sample preparation and number of samples on the predictive ability of models with regard to the spatial distribution of TOC and texture were investigated. The conditioned Latin hypercube sampling (cLHS) method was used to select 125 sampling locations from an agricultural field in Denmark, using electromagnetic induction (EMI) and digital elevation model (DEM) data. The soil samples were scanned in three states (field moist, air dried and sieved to 2 mm) with a vis-NIR spectrophotometer (LabSpec 5100, ASD Inc., USA). The Kennard-Stone algorithm was applied to select 50 representative soil spectra for the laboratory analysis of TOC and texture. In order to investigate how to minimize the costs of reference analysis, additional smaller subsets (15, 30 and 40) of samples were selected for calibration. The performance of field calibrations using spectra of soils in the three states as well as using different numbers of calibration samples was compared. Final models were then used to predict the remaining 75 samples. Maps of predicted soil properties were generated with Empirical Bayesian Kriging. The results demonstrated that regardless of the state of the scanned soil, the regression models and the final prediction maps were similar for most of the soil properties. Nevertheless, as expected, models based on spectra from field
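The Kennard-Stone selection mentioned in this record can be sketched in a few lines: start from the two most distant samples in feature space, then repeatedly add the sample farthest from everything already selected. The tiny 2-D point set below is illustrative, not soil spectra.

```python
import numpy as np

def kennard_stone(X, k):
    # Max-min selection of k representative rows of X
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)   # most distant pair
    selected = [i, j]
    while len(selected) < k:
        remaining = [m for m in range(len(X)) if m not in selected]
        # distance of each remaining sample to its nearest selected sample
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

X = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 0.1], [0.1, 0.1], [9.9, 0.2]])
picks = kennard_stone(X, 3)           # extremes first, then the gap-filler
```

In the study's setting X would hold the vis-NIR spectra (or scores derived from them) and k the calibration budget.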

  11. Weak electric fields detectability in a noisy neural network.

    Science.gov (United States)

    Zhao, Jia; Deng, Bin; Qin, Yingmei; Men, Cong; Wang, Jiang; Wei, Xile; Sun, Jianbing

    2017-02-01

    We investigate the detectability of weak electric field in a noisy neural network based on Izhikevich neuron model systematically. The neural network is composed of excitatory and inhibitory neurons with similar ratio as that in the mammalian neocortex, and the axonal conduction delays between neurons are also considered. It is found that the noise intensity can modulate the detectability of weak electric field. Stochastic resonance (SR) phenomenon induced by white noise is observed when the weak electric field is added to the network. It is interesting that SR almost disappeared when the connections between neurons are cancelled, suggesting the amplification effects of the neural coupling on the synchronization of neuronal spiking. Furthermore, the network parameters, such as the connection probability, the synaptic coupling strength, the scale of neuron population and the neuron heterogeneity, can also affect the detectability of the weak electric field. Finally, the model sensitivity is studied in detail, and results show that the neural network model has an optimal region for the detectability of weak electric field signal.
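The stochastic resonance effect at the heart of this record can be demonstrated with a much simpler system than the Izhikevich network: a subthreshold sinusoid passed through a hard threshold crosses it only with the help of noise, so output power at the signal frequency peaks at intermediate noise. All parameters below are illustrative.

```python
import numpy as np

def output_power_at_f(sigma, seed=1):
    # Threshold detector driven by a subthreshold sinusoid plus noise;
    # return the magnitude of the output's Fourier component at the
    # drive frequency (a simple detectability measure).
    rng = np.random.default_rng(seed)
    n, f = 20000, 0.01                        # samples, cycles per sample
    t = np.arange(n)
    drive = 0.5 * np.sin(2 * np.pi * f * t)   # subthreshold (threshold = 1.0)
    out = (drive + rng.normal(0, sigma, n) > 1.0).astype(float)
    return np.abs(np.sum(out * np.exp(-2j * np.pi * f * t))) / n

weak, moderate = output_power_at_f(0.05), output_power_at_f(0.5)
```

With almost no noise the threshold is never crossed and the signal is invisible; moderate noise makes crossings track the drive, which is the resonance the paper observes in the coupled network.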

  12. Exploring phylogenetic hypotheses via Gibbs sampling on evolutionary networks

    OpenAIRE

    Yu, Yun; Jermaine, Christopher; Nakhleh, Luay

    2016-01-01

    Abstract Background Phylogenetic networks are leaf-labeled graphs used to model and display complex evolutionary relationships that do not fit a single tree. There are two classes of phylogenetic networks: Data-display networks and evolutionary networks. While data-display networks are very commonly used to explore data, they are not amenable to incorporating probabilistic models of gene and genome evolution. Evolutionary networks, on the other hand, can accommodate such probabilistic models,...

  13. Tick-, mosquito-, and rodent-borne parasite sampling designs for the National Ecological Observatory Network

    Science.gov (United States)

    Springer, Yuri P; Hoekman, David; Johnson, Pieter TJ; Duffy, Paul A; Hufft, Rebecca A.; Barnett, David T.; Allan, Brian F.; Amman, Brian R; Barker, Christopher M; Barrera, Roberto; Beard, Charles B; Beati, Lorenza; Begon, Mike; Blackmore, Mark S; Bradshaw, William E; Brisson, Dustin; Calisher, Charles H.; Childs, James E; Diuk-Wasser, Maria A.; Douglass, Richard J; Eisen, Rebecca J; Foley, Desmond H; Foley, Janet E.; Gaff, Holly D; Gardner, Scott L; Ginsberg, Howard; Glass, Gregory E; Hamer, Sarah A; Hayden, Mary H; Hjelle, Brian; Holzapfel, Christina M; Juliano, Steven A.; Kramer, Laura D.; Kuenzi, Amy J.; LaDeau, Shannon L.; Livdahl, Todd P.; Mills, James N.; Moore, Chester G.; Morand, Serge; Nasci, Roger S.; Ogden, Nicholas H.; Ostfeld, Richard S.; Parmenter, Robert R.; Piesman, Joseph; Reisen, William K.; Savage, Harry M.; Sonenshine, Daniel E.; Swei, Andrea; Yabsley, Michael J.

    2016-01-01

    Parasites and pathogens are increasingly recognized as significant drivers of ecological and evolutionary change in natural ecosystems. Concurrently, transmission of infectious agents among human, livestock, and wildlife populations represents a growing threat to veterinary and human health. In light of these trends and the scarcity of long-term time series data on infection rates among vectors and reservoirs, the National Ecological Observatory Network (NEON) will collect measurements and samples of a suite of tick-, mosquito-, and rodent-borne parasites through a continental-scale surveillance program. Here, we describe the sampling designs for these efforts, highlighting sampling priorities, field and analytical methods, and the data as well as archived samples to be made available to the research community. Insights generated by this sampling will advance current understanding of and ability to predict changes in infection and disease dynamics in novel, interdisciplinary, and collaborative ways.

  14. Routing optimization in networks based on traffic gravitational field model

    Science.gov (United States)

    Liu, Longgeng; Luo, Guangchun

    2017-04-01

    For research on the gravitational field routing mechanism on complex networks, we further analyze the gravitational effect of paths. In this study, we introduce the concept of path confidence degree to evaluate the unblocked reliability of a path; it takes the traffic state of all nodes on the path into account as a whole. On this basis, we propose an improved gravitational field routing protocol considering all the nodes’ gravities on the path and the path confidence degree. In order to evaluate the transmission performance of the routing strategy, an order parameter is introduced to measure the network throughput by the critical value of the phase transition from a free-flow phase to a jammed phase, and the betweenness centrality is used to evaluate the transmission performance and traffic congestion of the network. Simulation results show that, compared with the shortest-path routing strategy and the previous gravitational field routing strategy, the proposed algorithm improves the network throughput considerably and effectively balances the traffic load within the network, and all nodes in the network are utilized efficiently. As long as γ ≥ α, the transmission performance reaches its maximum and remains unchanged for different α and γ, which ensures that the proposed routing protocol is highly efficient and stable.
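One plausible reading of a path confidence degree, sketched here as an assumption rather than the paper's exact definition, is the product of each node's unblocked probability (1 - queue/capacity); the most confident path is then found by running Dijkstra on the negative logs of those probabilities.

```python
import math
import networkx as nx

def most_confident_path(G, s, t):
    # Maximizing a product of per-node probabilities is equivalent to
    # minimizing the sum of their negative logs (all costs non-negative).
    def cost(u, v, d):
        p = 1.0 - G.nodes[v]["queue"] / G.nodes[v]["capacity"]
        return -math.log(max(p, 1e-12))
    return nx.dijkstra_path(G, s, t, weight=cost)

G = nx.Graph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")])
for n, q in {"s": 0, "t": 0, "a": 9, "b": 1}.items():
    G.add_node(n, queue=q, capacity=10)       # node "a" is heavily congested
path = most_confident_path(G, "s", "t")       # routes around the congestion
```

This captures the qualitative behavior the abstract describes: traffic is steered away from paths containing congested nodes even when hop counts are equal.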

  15. Field Exploration and Life Detection Sampling Through Planetary Analogue Sampling (FELDSPAR).

    Science.gov (United States)

    Stockton, A.; Amador, E. S.; Cable, M. L.; Cantrell, T.; Chaudry, N.; Cullen, T.; Duca, Z.; Gentry, D. M.; Kirby, J.; Jacobsen, M.

    2017-01-01

    Exploration missions to Mars rely on rovers to perform analyses over small sampling areas; however, landing sites for these missions are selected based on large-scale, low-resolution remote data. The use of Earth analogue environments to estimate the multi-scale spatial distributions of key signatures of habitability can help ensure mission science goals are met. A main goal of FELDSPAR is to conduct field operations analogous to Mars sample return in science, operations, and technology: from landing site selection to in-field sampling location selection, remote or stand-off analysis, in situ analysis, and home laboratory analysis. Lava fields and volcanic regions are relevant analogues to Martian landscapes due to desiccation, low nutrient availability, and temperature extremes. Operationally, many Icelandic lava fields are remote enough that field expeditions must address several sampling constraints that are experienced in robotic exploration, including in situ and sample return missions. The Fimmvörðuháls lava field was formed by a basaltic effusive eruption associated with the 2010 Eyjafjallajökull eruption. Mælifellssandur is a recently deglaciated plain to the north of the Mýrdalsjökull glacier. Holuhraun was formed by a 2014 fissure eruption just north of the large Vatnajökull glacier. Dyngjusandur is an alluvial plain apparently kept barren by repeated mechanical weathering. Informed by our 2013 expedition, we collected samples in nested triangular grids every decade from the 10 cm scale to the 1 km scale (as permitted by the size of the site). Satellite imagery is available for older sites, and for Mælifellssandur, Holuhraun, and Dyngjusandur we obtained overhead imagery at 1 m to 200 m elevation. PanCam-style photographs were taken in the field by sampling personnel. In-field reflectance spectroscopy was also obtained with an ASD spectrometer in Dyngjusandur.
    All sites chosen were 'homogeneous' in apparent color, morphology, moisture, grain size, and

  16. Scanning Electron Microscopy with Samples in an Electric Field

    Science.gov (United States)

    Frank, Ludĕk; Hovorka, Miloš; Mikmeková, Šárka; Mikmeková, Eliška; Müllerová, Ilona; Pokorná, Zuzana

    2012-01-01

    The high negative bias of a sample in a scanning electron microscope constitutes the “cathode lens” with a strong electric field just above the sample surface. This mode offers a convenient tool for controlling the landing energy of electrons down to units or even fractions of electronvolts with only slight readjustments of the column. Moreover, the field accelerates and collimates the signal electrons to earthed detectors above and below the sample, thereby assuring high collection efficiency and high amplification of the image signal. One important feature is the ability to acquire the complete emission of the backscattered electrons, including those emitted at high angles with respect to the surface normal. The cathode lens aberrations are proportional to the landing energy of electrons so the spot size becomes nearly constant throughout the full energy scale. At low energies and with their complete angular distribution acquired, the backscattered electron images offer enhanced information about crystalline and electronic structures thanks to contrast mechanisms that are otherwise unavailable. Examples from various areas of materials science are presented.

  17. SOLAR CYCLE VARIATION OF THE INTER-NETWORK MAGNETIC FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Chunlan; Wang, Jingxiu, E-mail: cljin@nao.cas.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2015-06-20

    The solar inter-network magnetic field is the weakest component of solar magnetism, but it contributes most of the solar surface magnetic flux. The study of its origin has been constrained by the inadequate tempospatial resolution and sensitivity of polarization observations. With dramatic advances in spatial resolution and detecting sensitivity, the solar spectropolarimetry provided by the Solar Optical Telescope on board Hinode in an interval from the solar minimum to maximum of cycle 24 opens an unprecedented opportunity to study the cyclic behavior of the solar inter-network magnetic field. More than 1000 Hinode magnetograms observed from 2007 January to 2014 August are selected in the study. It has been found that there is a very slight correlation between sunspot number and magnetic field at the inter-network flux spectrum. From solar minimum to maximum of cycle 24, the flux density of the solar inter-network field is invariant, at 10 ± 1 G. The observations suggest that the inter-network magnetic field does not arise from flux diffusion or flux recycling of solar active regions, thereby indicating the existence of a local small-scale dynamo. Combining the full-disk magnetograms observed by the Solar and Heliospheric Observatory/Michelson Doppler Imager and the Solar Dynamics Observatory/Helioseismic and Magnetic Imager in the same period, we find that the area ratio of the inter-network region to the full disk of the Sun apparently decreases from solar minimum to maximum but always exceeds 60%, even in the phase of solar maximum.

  18. Mean field analysis of algorithms for scale-free networks in molecular biology.

    Science.gov (United States)

    Konini, S; Janse van Rensburg, E J

    2017-01-01

    The sampling of scale-free networks in Molecular Biology is usually achieved by growing networks from a seed using recursive algorithms with elementary moves which include the addition and deletion of nodes and bonds. These algorithms include the Barabási-Albert algorithm. Later algorithms, such as the Duplication-Divergence algorithm, the Solé algorithm and the iSite algorithm, were inspired by biological processes underlying the evolution of protein networks, and the networks they produce differ essentially from networks grown by the Barabási-Albert algorithm. In this paper the mean field analysis of these algorithms is reconsidered, and extended to variant and modified implementations of the algorithms. The degree sequences of scale-free networks decay according to a power-law distribution, namely P(k) ∼ k^(-γ), where γ is a scaling exponent. We derive mean field expressions for γ, and test these by numerical simulations. Generally, good agreement is obtained. We also found that some algorithms do not produce scale-free networks (for example some variant Barabási-Albert and Solé networks).
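The kind of numerical check described here can be sketched for the Barabási-Albert case, where the mean-field prediction is γ = 3: grow a network and estimate the exponent with a continuous maximum-likelihood formula (the estimator and network size are illustrative, and finite-size effects make the estimate approximate).

```python
import math
import networkx as nx

# Grow a Barabasi-Albert network (m = 3 edges per new node)
G = nx.barabasi_albert_graph(5000, 3, seed=7)
degrees = [d for _, d in G.degree()]

# Continuous MLE for the power-law exponent of P(k) ~ k^(-gamma),
# with the usual -1/2 discreteness correction on k_min
k_min = 3
gamma = 1 + len(degrees) / sum(math.log(k / (k_min - 0.5)) for k in degrees)
```

For a finite BA network the estimate typically lands somewhat below the asymptotic mean-field value of 3, which is the sort of deviation the paper's simulations quantify.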

  19. Using a Control System Ethernet Network as a Field Bus

    CERN Document Server

    De Van, William R; Lawson, Gregory S; Wagner, William H; Wantland, David M; Williams, Ernest

    2005-01-01

    A major component of a typical accelerator distributed control system (DCS) is a dedicated, large-scale local area communications network (LAN). The SNS EPICS-based control system uses a LAN based on the popular IEEE-802.3 set of standards (Ethernet). Since the control system network infrastructure is available throughout the facility, and since Ethernet-based controllers are readily available, it is tempting to use the control system LAN for "fieldbus" communications to low-level control devices (e.g. vacuum controllers; remote I/O). These devices may or may not be compatible with the high-level DCS protocols. This paper presents some of the benefits and risks of combining high-level DCS communications with low-level "field bus" communications on the same network, and describes measures taken at SNS to promote compatibility between devices connected to the control system network.

  20. Sampling dynamic networks with application to investigation of HIV epidemic drivers.

    Science.gov (United States)

    Goyal, Ravi; De Gruttola, Victor

    2015-09-01

    We propose a method for randomly sampling dynamic networks that permits isolation of the impact of different network features on processes that propagate on networks. The new methods permit uniform sampling of dynamic networks in ways that ensure that they are consistent with both a given cumulative network and with specified values for constraints on the dynamic network properties. Development of such methods is challenging because modifying one network property will generally tend to modify others as well. Methods to sample constrained dynamic networks are particularly useful in the investigation of network-based interventions that target and modify specific dynamic network properties, especially in settings where the whole network is unobservable and therefore many network properties are unmeasurable. We illustrate this method by investigating the incremental impact of changes in network properties that are relevant for the spread of infectious diseases, such as concurrency in sexual relationships. Development of the method is motivated by the challenges that arise in investigating the role of HIV epidemic drivers due to the often limited information available about contact networks. The proposed methods for randomly sampling dynamic networks facilitate investigation of the type of network data that can best contribute to an understanding of the HIV epidemic dynamics as well as of the limitations of conclusions drawn in the absence of such information. Hence, the methods are intended to aid in the design and interpretation of studies of network-based interventions.
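A static analogue of such constrained sampling, shown here only to illustrate the idea and not the paper's dynamic-network method, is degree-preserving double edge swaps: they explore the space of graphs while holding every node's degree fixed, so other properties can vary under a controlled null model.

```python
import networkx as nx

# Start from some observed (here, synthetic) network
G = nx.barabasi_albert_graph(200, 2, seed=3)
before = dict(G.degree())

# Randomize the wiring while keeping the degree sequence as a constraint:
# each swap replaces edges (u, v), (x, y) with (u, x), (v, y)
nx.double_edge_swap(G, nswap=200, max_tries=20000, seed=3)
after = dict(G.degree())
```

The paper's method plays the same game at a higher level, constraining dynamic properties such as concurrency while sampling relational histories consistent with a cumulative network.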

  1. Fourier Interpolation of Sparsely and Irregularly Sampled Potential Field Data

    Science.gov (United States)

    Saleh, R.; Bailey, R. C.

    2011-12-01

    Sparsely and irregularly sampled values of potential fields on the Earth's surface need to be interpolated to be presented as maps. For display purposes, the choice of an interpolation method may be largely an aesthetic choice. However, if derived quantities such as spatial derivatives of the field are also required for display, it is important that interpolation respect the physics of Laplace's equation. Examples would be the derivation of equivalent surface currents for interpretation purposes from a magnetotelluric hypothetical event map of the horizontal magnetic fields, or the derivation of tensor gravity gradients from ground data for comparison with an airborne survey. Various methods for interpolating while respecting Laplace's equation date back nearly fifty years, to Dampney's 1969 equivalent source technique. In that and comparable methods, a set of effective sources below the Earth's surface is found which is consistent with the data, and used to calculate the field away from data locations. Because the interpolation is not unique, the source depth can be used as a parameter to maximally suppress the indeterminate high frequency part of the resulting map while retaining consistency with the data. Here, we take advantage of modern computing power to recast the interpolation problem as an inverse problem: that of determining the Fourier transform of the field at the Earth's surface subject to the constraints of fitting the data while minimizing a model norm which measures the high-frequency content of the resulting map. User decisions about the number of equivalent sources or their depths are not required. The approach is not fundamentally different from that used to determine planetary gravity or magnetic fields from satellite measurements, except that our application is designed for extremely under-sampled situations and is formulated in Cartesian coordinates. 
To avoid artificially constraining the frequency content of the resulting transform, we choose
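    Although the record is truncated, the inverse formulation it describes (fit the observed samples while minimizing a norm that measures high-frequency content) can be sketched in one dimension with damped least squares. The unit period, basis size `n_freq`, and `|k|` frequency weighting below are illustrative assumptions, not choices from the paper:

```python
import numpy as np

def fourier_interpolate(x_obs, d_obs, n_freq, x_grid, lam=1e-3):
    # Forward operator: field value at x is a sum of complex exponentials,
    # A[i, k] = exp(2*pi*1j * k * x_i), with a unit period assumed.
    ks = np.arange(-n_freq, n_freq + 1)
    A = np.exp(2j * np.pi * np.outer(x_obs, ks))
    # Model norm: weight high frequencies more heavily. This plays the
    # role of the source-depth parameter in equivalent-source methods,
    # damping the indeterminate high-frequency part of the map.
    W = np.diag(1.0 + np.abs(ks).astype(float))
    lhs = A.conj().T @ A + lam * (W.conj().T @ W)
    rhs = A.conj().T @ d_obs
    m = np.linalg.solve(lhs, rhs)          # regularized Fourier coefficients
    B = np.exp(2j * np.pi * np.outer(x_grid, ks))
    return (B @ m).real                    # interpolated field on the grid
```

With the damping small, the reconstruction honours the data closely while the `W` weighting suppresses frequencies the sparse sampling cannot constrain.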

  2. Novel field sampling procedure for the determination of methiocarb residues in surface waters from rice fields.

    Science.gov (United States)

    Primus, T M; Kohler, D J; Avery, M; Bolich, P; Way, M O; Johnston, J J

    2001-12-01

    Methiocarb was extracted from surface water samples collected at experimental rice field sites in Louisiana and Texas. The sampling system consisted of a single-stage 90-mm Empore extraction disk unit equipped with a battery-powered vacuum pump. After extraction, the C-18 extraction disks were stored in an inert atmosphere at -10 °C and shipped overnight to the laboratory. The disks were extracted with methanol and the extracts analyzed by reversed-phase high-performance liquid chromatography with a methanol/water mobile phase. Methiocarb was detected by ultraviolet absorption at 223 nm and quantified with the use of calibration standards. Recoveries from control surface water samples fortified at 5.0, 10, 50, and 100 ng/mL methiocarb averaged 92 ± 7%. A method limit of detection for methiocarb in rice field surface water was estimated to be 0.23 ng/mL at 223 nm.

  3. Field data analysis of active chlorine-containing stormwater samples.

    Science.gov (United States)

    Zhang, Qianyi; Gaafar, Mohamed; Yang, Rong-Cai; Ding, Chen; Davies, Evan G R; Bolton, James R; Liu, Yang

    2018-01-15

    Many municipalities in Canada and around the world use chloramination for drinking water secondary disinfection to avoid DBP formation from conventional chlorination. However, the long-lasting monochloramine (NH2Cl) disinfectant can pose a significant risk to aquatic life through its introduction into municipal storm sewer systems, and thus fresh water sources, by residential, commercial, and industrial water uses. To establish general total active chlorine (TAC) concentrations in discharges from storm sewers, the TAC concentration was measured in stormwater samples in Edmonton, Alberta, Canada, during the summers of 2015 and 2016 under both dry and wet weather conditions. The field-sampling results showed TAC concentration variations from 0.02 to 0.77 mg/L in summer 2015, which exceeds the discharge effluent limit of 0.02 mg/L. Compared to 2015, the TAC concentrations were significantly lower during summer 2016 (0-0.24 mg/L); it is believed that the higher precipitation during summer 2016 reduced outdoor tap water use. Since many other cities also use chloramines for drinking water disinfection, the TAC analysis from Edmonton may prove useful for other regions as well. Other physicochemical and biological characteristics of stormwater and storm sewer biofilm samples were also analyzed, and no significant difference was found between the two years. Higher densities of ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria (NOB) detected in the storm sewer biofilm of residential areas, as compared with other areas, generally correlated with high concentrations of ammonium and nitrite in this region in both years, and they may have contributed to the TAC decay in the storm sewers. The NH2Cl decay laboratory experiments illustrate that dissolved organic carbon (DOC) concentration is the dominant factor in determining the NH2Cl decay rate in stormwater samples. The high DOC concentrations detected at a downstream industrial sampling location may contribute to a high

  4. Pumping tests in networks of multilevel sampling wells: Motivation and methodology

    Science.gov (United States)

    Butler, J.J.; McElwee, C.D.; Bohling, G.C.

    1999-01-01

    The identification of spatial variations in hydraulic conductivity (K) on a scale of relevance for transport investigations has proven to be a considerable challenge. Recently, a new field method for the estimation of interwell variations in K has been proposed. This method, hydraulic tomography, essentially consists of a series of short-term pumping tests performed in a tomographic-like arrangement. In order to fully realize the potential of this approach, information about lateral and vertical variations in pumping-induced head changes (drawdown) is required with detail that has previously been unobtainable in the field. Pumping tests performed in networks of multilevel sampling (MLS) wells can provide data of the needed density if drawdown can accurately and rapidly be measured in the small-diameter tubing used in such wells. Field and laboratory experiments show that accurate transient drawdown data can be obtained in the small-diameter MLS tubing either directly with miniature fiber-optic pressure sensors or indirectly using air-pressure transducers. As with data from many types of hydraulic tests, the quality of drawdown measurements from MLS tubing is quite dependent on the effectiveness of well development activities. Since MLS ports of the standard design are prone to clogging and are difficult to develop, alternate designs are necessary to ensure accurate drawdown measurements. Initial field experiments indicate that drawdown measurements obtained from pumping tests performed in MLS networks have considerable potential for providing valuable information about spatial variations in hydraulic conductivity.

  5. Randomly evolving idiotypic networks: modular mean field theory.

    Science.gov (United States)

    Schmidtchen, Holger; Behn, Ulrich

    2012-07-01

    We develop a modular mean field theory for a minimalistic model of the idiotypic network. The model comprises the random influx of new idiotypes and a deterministic selection. It describes the evolution of the idiotypic network towards complex modular architectures, the building principles of which are known. The nodes of the network can be classified into groups of nodes, the modules, which share statistical properties. Each node experiences only the mean influence of the groups to which it is linked. Given the size of the groups and the linking between them, the statistical properties such as mean occupation, mean lifetime, and mean number of occupied neighbors are calculated for a variety of patterns and compared with simulations. For a pattern consisting of pairs of occupied nodes, correlations are taken into account.

  6. Effects of sampling completeness on the structure of plant-pollinator networks.

    Science.gov (United States)

    Rivera-Hutinel, A; Bustamante, R O; Marín, V H; Medel, R

    2012-07-01

    Plant-animal interaction networks provide important information on community organization. One of the most critical assumptions of network analysis is that the observed interaction patterns constitute an adequate sample of the set of interactions present in plant-animal communities. In spite of its importance, few studies have evaluated this assumption, and in consequence, there is no consensus on the sensitivity of network metrics to sampling methodological shortcomings. In this study we examined how variation in sampling completeness influences the estimation of six network metrics frequently used in the literature (connectance, nestedness, modularity, robustness to species loss, path length, and centralization). We analyzed data of 186 flowering plants and 336 pollinator species in 10 networks from a forest-fragmented system in central Chile. Using species-based accumulation curves, we estimated the deviation of network metrics in undersampled communities with respect to exhaustively sampled communities and the effect of network size and sampling evenness on network metrics. Our results indicate that: (1) most metrics were affected by sampling completeness but differed in their sensitivity to sampling effort; (2) nestedness, modularity, and robustness to species loss were less influenced by insufficient sampling than connectance, path length, and centralization; (3) robustness was mildly influenced by sampling evenness. These results caution studies that summarize information from databases with high, or unknown, heterogeneity in sampling effort per species and should stimulate researchers to report sampling intensity to standardize its effects in the search for broad patterns in plant-pollinator networks.
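    Of the metrics above, connectance is the simplest to state, and its sensitivity to sampling effort can be probed by rebuilding the network from random subsets of the raw visit records. A small illustrative Python sketch (not the authors' analysis; the toy data below are hypothetical) is:

```python
import numpy as np

def connectance(links, n_plants, n_pollinators):
    # Fraction of possible plant-pollinator links actually observed.
    return len(links) / (n_plants * n_pollinators)

def connectance_under_effort(visits, frac, rng):
    # Rebuild the network from a random fraction of the individual visit
    # records and recompute connectance over the species detected in the
    # subsample: a crude, species-accumulation-style sensitivity check.
    k = max(1, int(frac * len(visits)))
    idx = rng.choice(len(visits), size=k, replace=False)
    sub = [visits[i] for i in idx]
    plants = {p for p, _ in sub}
    pollinators = {q for _, q in sub}
    return len(set(sub)) / (len(plants) * len(pollinators))
```

Repeating the subsampling over a grid of effort fractions traces how far an undersampled community's connectance deviates from the exhaustively sampled value.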

  7. Workshop on Thermal Field Theory to Neural Networks

    CERN Document Server

    Veneziano, Gabriele; Aurenche, Patrick

    1996-01-01

    Tanguy Altherr was a Fellow in the Theory Division at CERN, on leave from LAPP (CNRS), Annecy. At the time of his accidental death in July 1994, he was only 31. A meeting was organized at CERN, covering the various aspects of his scientific interests: thermal field theory and its applications to hot or dense media, and neural networks and their applications to high-energy data analysis. Speakers were among his closest collaborators and friends.

  8. A SAMPLE OF OB STARS THAT FORMED IN THE FIELD

    Energy Technology Data Exchange (ETDEWEB)

    Oey, M. S.; Lamb, J. B.; Kushner, C. T.; Pellegrini, E. W.; Graus, A. S. [Department of Astronomy, University of Michigan, 830 Dennison Building, 500 Church Street, Ann Arbor, MI 48109-1042 (United States)

    2013-05-01

    We present a sample of 14 OB stars in the Small Magellanic Cloud that meet strong criteria for having formed under extremely sparse star-forming conditions in the field. These stars are a minimum of 28 pc in projection from other OB stars, and they are centered within symmetric, round H II regions. They show no evidence of bow shocks, implying that the targets are not transverse runaway stars. Their radial velocities relative to local H I also indicate that they are not line-of-sight runaway stars. A friends-of-friends analysis shows that nine of the objects present a few low-mass companion stars, with typical mass ratios for the two highest-mass stars of around 0.1. This further substantiates that these OB stars formed in place, and that they can and do form in extremely sparse conditions. This poses strong constraints on theories of star formation and challenges proposed relations between cluster mass and maximum stellar mass.

  9. Reward and Punishment based Cooperative Adaptive Sampling in Wireless Sensor Networks

    NARCIS (Netherlands)

    Masoum, Alireza; Meratnia, Nirvana; Taghikhaki, Zahra; Havinga, Paul J.M.

    2010-01-01

    Energy conservation is one of the main concerns in wireless sensor networks. One of the mechanisms to better manage energy in wireless sensor networks is adaptive sampling, by which instead of using a fixed frequency interval for sensing and data transmission, the wireless sensor network employs a

  10. Note on neural network sampling for Bayesian inference of mixture processes

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); H.K. van Dijk (Herman)

    2007-01-01

    In this paper we show some further experiments with neural network sampling, a class of sampling methods that make use of neural network approximations to (posterior) densities, introduced by Hoogerheide et al. (2007). We consider a method where a mixture of Student's t densities, which

  11. Deep recurrent conditional random field network for protein secondary prediction

    DEFF Research Database (Denmark)

    Johansen, Alexander Rosenberg; Sønderby, Søren Kaae; Sønderby, Casper Kaae

    2017-01-01

    Deep learning has become the state-of-the-art method for predicting protein secondary structure from only its amino acid residues and sequence profile. Building upon these results, we propose to combine a bi-directional recurrent neural network (biRNN) with a conditional random field (CRF), which...... of the labels for all time-steps. We condition the CRF on the output of biRNN, which learns a distributed representation based on the entire sequence. The biRNN-CRF is therefore close to ideally suited for the secondary structure task because a high degree of cross-talk between neighboring elements can...

  12. Field evaluation of personal sampling methods for multiple bioaerosols.

    Directory of Open Access Journals (Sweden)

    Chi-Hsun Wang

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols.

  13. Understanding the effects of administrative boundary in sampling spatially embedded networks

    Science.gov (United States)

    Chi, Guanghua; Liu, Yu; Shi, Li; Gao, Yong

    2017-01-01

    When analyzing spatially embedded networks, networks consisting of nodes and connections within an administrative boundary are commonly analyzed directly without considering possible errors or biases due to lost connections to nodes outside the network. However, connections exist not only within administrative boundaries but also to nodes outside of the boundaries. This study empirically analyzed the geographical boundary problem using a mobile communication network constructed based on mobile phone data collected in Heilongjiang province, China. We find that although many connections outside of the administrative boundary are lost, sampled networks based on administrative boundaries perform relatively well in terms of degree and clustering coefficient. We find that the mechanisms behind the reliability of these sampled networks include the effects of distance decay and cohesion strength in administrative regions on spatially embedded networks.

  14. State-dependent importance sampling for a Jackson tandem network

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, Willem R.W.; Mandjes, M.R.H.

    2010-01-01

    This article considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jacksonian two-node tandem queue; it is known that in this setting "traditional" state-independent importance-sampling distributions

  16. Interactive Editing of GigaSample Terrain Fields

    KAUST Repository

    Treib, Marc

    2012-05-01

    Previous terrain rendering approaches have addressed the aspect of data compression and fast decoding for rendering, but applications where the terrain is repeatedly modified and needs to be buffered on disk have not been considered so far. Such applications require both decoding and encoding to be faster than disk transfer. We present a novel approach for editing gigasample terrain fields at interactive rates and high quality. To achieve high decoding and encoding throughput, we employ a compression scheme for height and pixel maps based on a sparse wavelet representation. On recent GPUs it can encode and decode up to 270 and 730 MPix/s of color data, respectively, at compression rates and quality superior to JPEG, and it achieves more than twice these rates for lossless height field compression. The construction and rendering of a height field triangulation is avoided by using GPU ray-casting directly on the regular grid underlying the compression scheme. We show the efficiency of our method for interactive editing and continuous level-of-detail rendering of terrain fields comprised of several hundreds of gigasamples. © 2012 The Author(s).

  17. Planning Longitudinal Field Studies: Considerations in Determining Sample Size.

    Science.gov (United States)

    St.Pierre, Robert G.

    1980-01-01

    Factors that influence the sample size necessary for longitudinal evaluations include the nature of the evaluation questions, nature of available comparison groups, consistency of the treatment in different sites, effect size, attrition rate, significance level for statistical tests, and statistical power. (Author/GDC)
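    The interplay of effect size, significance level, power, and attrition rate mentioned above can be made concrete with the usual normal-approximation sample-size formula for a two-sided, two-group comparison of means, inflated for expected attrition. A sketch with illustrative defaults (not taken from the paper):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80, attrition=0.0):
    # Normal-approximation sample size per group for a two-sided,
    # two-group comparison at standardized effect size d, then inflated
    # so the target n survives the expected attrition over follow-up.
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n / (1 - attrition))
```

For a medium effect (d = 0.5) at alpha = 0.05 and 80% power this gives 63 per group, rising to 79 if one in five participants is expected to drop out before the final wave.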

  18. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Alunite in the alteration zones was chosen as the target mineral. Sampling points are distributed more intensely in regions of high probable alunite as classified by both SAM and SFF, thus representing the purest of pixels. This method leads to an efficient distribution...

  19. Focus on the emerging new fields of network physiology and network medicine

    Science.gov (United States)

    Ivanov, Plamen Ch; Liu, Kang K. L.; Bartsch, Ronny P.

    2016-10-01

    Despite the vast progress and achievements in systems biology and integrative physiology in the last decades, there is still a significant gap in understanding the mechanisms through which (i) genomic, proteomic and metabolic factors and signaling pathways impact vertical processes across cells, tissues and organs leading to the expression of different disease phenotypes and influence the functional and clinical associations between diseases, and (ii) how diverse physiological systems and organs coordinate their functions over a broad range of space and time scales and horizontally integrate to generate distinct physiologic states at the organism level. Two emerging fields, network medicine and network physiology, aim to address these fundamental questions. Novel concepts and approaches derived from recent advances in network theory, coupled dynamical systems, statistical and computational physics show promise to provide new insights into the complexity of physiological structure and function in health and disease, bridging the genetic and sub-cellular level with inter-cellular interactions and communications among integrated organ systems and sub-systems. These advances form the first building blocks in the methodological formalism and theoretical framework necessary to address fundamental problems and challenges in physiology and medicine. This ‘focus on’ issue contains 26 articles representing state-of-the-art contributions covering diverse systems from the sub-cellular to the organism level, where physicists have a key role in laying the foundations of these new fields.

  20. On The Use Of Network Sampling In Diabetic Surveys | Nafiu ...

    African Journals Online (AJOL)

    Two estimators were considered, the Hansen-Hurwitz estimator and the Horvitz-Thompson estimator, and the results were obtained using a program written in the Microsoft Visual C++ programming language. Keywords: Graph, Sampling Frame, Households, Hansen-Hurwitz estimator, Horvitz-Thompson estimator. JORIND Vol.
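    Both estimators have short closed forms, which may be sketched as follows (an illustrative Python version, not the study's Visual C++ program):

```python
def hansen_hurwitz(y, p):
    # With-replacement design: average of y_i / p_i over the n draws,
    # where p_i is the selection probability of unit i on a single draw.
    return sum(yi / pi for yi, pi in zip(y, p)) / len(y)

def horvitz_thompson(y, pi_incl):
    # Without-replacement design: sum of y_i / pi_i over the distinct
    # sampled units, where pi_i is the inclusion probability of unit i.
    return sum(yi / pii for yi, pii in zip(y, pi_incl))
```

With equal selection or inclusion probabilities both reduce to the familiar expansion estimator of the population total; they differ under the unequal probabilities that network sampling designs induce.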

  1. Mean-field approach to evolving spatial networks, with an application to osteocyte network formation

    Science.gov (United States)

    Taylor-King, Jake P.; Basanta, David; Chapman, S. Jonathan; Porter, Mason A.

    2017-07-01

    We consider evolving networks in which each node can have various associated properties (a state) in addition to those that arise from network structure. For example, each node can have a spatial location and a velocity, or it can have some more abstract internal property that describes something like a social trait. Edges between nodes are created and destroyed, and new nodes enter the system. We introduce a "local state degree distribution" (LSDD) as the degree distribution at a particular point in state space. We then make a mean-field assumption and thereby derive an integro-partial differential equation that is satisfied by the LSDD. We perform numerical experiments and find good agreement between solutions of the integro-differential equation and the LSDD from stochastic simulations of the full model. To illustrate our theory, we apply it to a simple model for osteocyte network formation within bones, with a view to understanding changes that may take place during cancer. Our results suggest that increased rates of differentiation lead to higher densities of osteocytes, but with a smaller number of dendrites. To help provide biological context, we also include an introduction to osteocytes, the formation of osteocyte networks, and the role of osteocytes in bone metastasis.

  2. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment

  3. Inferring signalling networks from longitudinal data using sampling based approaches in the R-package 'ddepn'

    Directory of Open Access Journals (Sweden)

    Korf Ulrike

    2011-07-01

    Background: Network inference from high-throughput data has become an important means of current analysis of biological systems. For instance, in cancer research, the functional relationships of cancer related proteins, summarised into signalling networks, are of central interest for the identification of pathways that influence tumour development. Cancer cell lines can be used as model systems to study the cellular response to drug treatments in a time-resolved way. Based on this kind of data, modelling approaches for the signalling relationships are needed that allow one to generate hypotheses on potential interference points in the networks. Results: We present the R-package 'ddepn' that implements our recent approach on network reconstruction from longitudinal data generated after external perturbation of network components. We extend our approach by two novel methods: a Markov chain Monte Carlo method for sampling network structures with two edge types (activation and inhibition), and an extension of a prior model that penalises deviances from a given reference network while incorporating these two types of edges. Further, as an alternative prior we include a model that learns signalling networks with the scale-free property. Conclusions: The package 'ddepn' is freely available on R-Forge and CRAN (http://ddepn.r-forge.r-project.org, http://cran.r-project.org). It allows one to conveniently perform network inference from longitudinal high-throughput data using two different sampling based network structure search algorithms.
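    The structure-sampling idea with a reference-network prior can be illustrated with a bare-bones Metropolis sampler over signed edges. This is a sketch under stated assumptions, not the 'ddepn' implementation: the data likelihood is omitted, so the chain samples from the prior alone; adding a log-likelihood term to the acceptance ratio would turn it into a posterior sampler.

```python
import math
import random

def sample_structures(reference, gamma, n_steps, seed=0):
    # Metropolis sampler over signed edge states (0 absent, +1 activation,
    # -1 inhibition) under a prior that charges a factor exp(-gamma) for
    # each edge deviating from the reference network.
    rng = random.Random(seed)
    net = dict(reference)          # current structure: edge -> {-1, 0, +1}
    edges = list(net)
    samples = []
    for _ in range(n_steps):
        e = rng.choice(edges)
        old = net[e]
        new = rng.choice([s for s in (-1, 0, 1) if s != old])
        # Change in the number of deviations from the reference network.
        delta = int(new != reference[e]) - int(old != reference[e])
        if rng.random() < math.exp(-gamma * delta):  # Metropolis accept
            net[e] = new
        samples.append(dict(net))
    return samples
```

With a large penalty the chain stays at the reference network; with gamma near zero it wanders uniformly over all signed structures.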

  4. Phencyclidine Discoordinates Hippocampal Network Activity But Not Place Fields.

    Science.gov (United States)

    Kao, Hsin-Yi; Dvořák, Dino; Park, EunHye; Kenney, Jana; Kelemen, Eduard; Fenton, André A

    2017-12-06

    We used the psychotomimetic phencyclidine (PCP) to investigate the relationships among cognitive behavior, coordinated neural network function, and information processing within the hippocampus place cell system. We report that PCP (5 mg/kg, i.p.) impairs a well-learned, hippocampus-dependent place avoidance behavior in rats that requires cognitive control, even when PCP is injected directly into the dorsal hippocampus. PCP increases 60-100 Hz medium-frequency gamma oscillations in hippocampus CA1, and these increases correlate with the cognitive impairment caused by systemic PCP administration. PCP discoordinates theta-modulated medium-frequency and slow gamma oscillations in CA1 LFPs such that medium-frequency gamma oscillations become more theta-organized than slow gamma oscillations. CA1 place cell firing fields are preserved under PCP, but the drug discoordinates the subsecond temporal organization of discharge among place cells. This discoordination causes place cell ensemble representations of a familiar space to cease resembling pre-PCP representations despite preserved place fields. These findings point to the cognitive impairments caused by PCP arising from neural discoordination. PCP disrupts the timing of discharge with respect to the subsecond timescales of theta and gamma oscillations in the LFP. Because these oscillations arise from local inhibitory synaptic activity, these findings point to excitation-inhibition discoordination as the root of PCP-induced cognitive impairment. SIGNIFICANCE STATEMENT Hippocampal neural discharge is temporally coordinated on timescales of theta and gamma oscillations in the LFP, and the discharge of a subset of pyramidal neurons called "place cells" is spatially organized such that discharge is restricted to locations called a cell's "place field."
Because this temporal coordination and spatial discharge organization is thought to represent spatial knowledge, we used the psychotomimetic phencyclidine (PCP) to disrupt

  5. Electrical network method for the thermal or structural characterization of a conducting material sample or structure

    Science.gov (United States)

    Ortiz, Marco G.

    1993-01-01

    A method for modeling a conducting material sample or structure system, as an electrical network of resistances in which each resistance of the network is representative of a specific physical region of the system. The method encompasses measuring a resistance between two external leads and using this measurement in a series of equations describing the network to solve for the network resistances for a specified region and temperature. A calibration system is then developed using the calculated resistances at specified temperatures. This allows for the translation of the calculated resistances to a region temperature. The method can also be used to detect and quantify structural defects in the system.
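    The forward half of such a model, predicting the resistance seen between two external leads from a network of regional resistances, can be written compactly with the graph Laplacian; the calibration step described in the abstract is not reproduced here. An illustrative Python sketch:

```python
import numpy as np

def effective_resistance(n_nodes, resistors, a, b):
    # Assemble the weighted graph Laplacian from branch conductances
    # (g = 1/R); parallel branches between the same node pair simply add.
    L = np.zeros((n_nodes, n_nodes))
    for i, j, r in resistors:
        g = 1.0 / r
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    # Two-lead resistance between leads a and b:
    # R_ab = (e_a - e_b)^T L^+ (e_a - e_b), with L^+ the pseudoinverse.
    e = np.zeros(n_nodes)
    e[a], e[b] = 1.0, -1.0
    return float(e @ np.linalg.pinv(L) @ e)
```

Two 1-ohm branches in series between leads give 2 ohms, and in parallel 0.5 ohm, as expected; solving the inverse problem, recovering the regional resistances from a set of such lead measurements, is the system of equations the method describes.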

  6. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that could be pseudo-orthogonal among themselves, and then suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have an additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. We thus illustrate with video the survival tactics that animals roaming the Earth use daily. They acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between the mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from 2 photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weight by Hebbian outer products; "read" by inner product & pt. NL threshold), to localize and track the threat targets.
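    The frame-differencing scheme described above, reporting only pixels whose change exceeds a threshold, can be sketched in a few lines of NumPy (a software analogue of the mixed-signal circuit, with a hypothetical threshold parameter):

```python
import numpy as np

def change_mask(prev, curr, threshold):
    # Frame differencing: report only pixels whose change exceeds the
    # threshold. Stagnant pixels generate no events at all, which is the
    # "organized sparseness" the abstract describes.
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    mask = diff > threshold
    events = np.argwhere(mask)  # (row, col) coordinates of each change
    values = curr[mask]         # the new pixel values at those sites
    return mask, events, values
```

The event coordinates double as the retrievable graphical index mentioned in the abstract: only the changed locations and their new values need be stored or transmitted.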

  7. Debba China presentation on optimal field sampling for exploration targets and geochemicals

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    A presentation was given at the Chinese Academy of Geological Sciences in October 2008 on optimal field sampling, both for exploration targets and for geochemicals in mine tailings areas...

  8. Field evaluation of broiler gait score using different sampling methods

    Directory of Open Access Journals (Sweden)

    AFS Cordeiro

    2009-09-01

    Full Text Available Brazil is today the world's largest broiler meat exporter; however, in order to keep this position, it must comply with welfare regulations while maintaining low production costs. Locomotion problems restrain bird movements, limiting their access to drinking and feeding equipment, and therefore their survival and productivity. The objective of this study was to evaluate locomotion deficiency in broiler chickens reared under stressful temperature conditions using three different sampling methods at three different ages. The experiment consisted of determining the gait score of 28, 35, 42, and 49-day-old broilers using three known gait scoring methods: M1, birds were randomly selected, enclosed in a circle, and then stimulated to walk out of the circle; M2, ten birds were randomly selected and gait scored; and M3, birds were randomly selected, enclosed in a circle, and then observed while walking away from the circle without any stimulus to walk. Environmental temperature, relative humidity, and light intensity inside the poultry houses were recorded. No evidence of interaction between scoring method and age was found; however, both method and age influenced gait score. Gait score was found to be lower at 28 days of age. The evaluation of ten randomly selected birds within the house was the method that presented the least reliable results. Gait scores when birds were stimulated to walk were lower than when they were not stimulated, independently of age. The gait scores obtained with the three tested methods and ages were higher than those considered acceptable. The highest frequency of normal gait score (score 0) represented 50% of the flock. These results may be related to heat stress during rearing. Average gait score increased with average ambient temperature, relative humidity, and light intensity. The evaluation of gait score to detect locomotion problems of broilers under rearing conditions seems subjective and

  9. Active-Varying Sampling-Based Fault Detection Filter Design for Networked Control Systems

    Directory of Open Access Journals (Sweden)

    Yu-Long Wang

    2014-01-01

    Full Text Available This paper is concerned with fault detection filter design for continuous-time networked control systems considering packet dropouts and network-induced delays. The active-varying sampling period method is introduced to establish a new discretized model for the considered networked control systems. The mutually exclusive distribution characteristic of packet dropouts and network-induced delays is fully exploited to derive less conservative fault detection filter design criteria. Compared with fault detection filter design adopting a constant sampling period, the proposed active-varying sampling-based design can improve the sensitivity of the residual signal to faults and shorten the time needed for fault detection. The simulation results illustrate the merits and effectiveness of the proposed fault detection filter design.

  10. Random Walks on Directed Networks: Inference and Respondent-driven Sampling

    CERN Document Server

    Malmros, Jens; Britton, Tom

    2013-01-01

    Respondent-driven sampling (RDS) is a method often used to estimate population properties (e.g. sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges on the basis of random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing...
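
Under the random-walk assumption described above, a respondent's inclusion probability is roughly proportional to their degree, so population means are commonly estimated with an inverse-degree-weighted (Volz-Heckathorn style) estimator. A minimal sketch on hypothetical data (the degrees and the 0/1 trait values are made up):

```python
def rds_vh_estimate(degrees, traits):
    """Volz-Heckathorn style estimator of a trait mean: weight each
    respondent by 1/degree, since inclusion probability ~ degree."""
    inv = [1.0 / d for d in degrees]
    return sum(w * y for w, y in zip(inv, traits)) / sum(inv)

# Hypothetical RDS sample: personal network sizes and a binary trait.
degrees = [2, 4, 8, 4, 2]
traits  = [1, 0, 0, 0, 1]
est = rds_vh_estimate(degrees, traits)
```

Here the trait is concentrated among low-degree respondents, so the weighted estimate (about 0.615) is higher than the naive sample mean (0.4), which under-represents hard-to-reach, low-degree individuals.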

  11. Multiple seed structure and disconnected networks in respondent-driven sampling

    CERN Document Server

    Malmros, Jens

    2016-01-01

    Respondent-driven sampling (RDS) is a link-tracing sampling method that is especially suitable for sampling hidden populations. RDS combines an efficient snowball-type sampling scheme with inferential procedures that yield unbiased population estimates under some assumptions about the sampling procedure and population structure. Several seed individuals are typically used to initiate RDS recruitment. However, standard RDS estimation theory assumes that all sampled individuals originate from only one seed. We present an estimator, based on a random walk with teleportation, which accounts for the multiple seed structure of RDS. The new estimator can also be used on populations with disconnected social networks. We numerically evaluate our estimator by simulations on artificial and real networks. Our estimator outperforms previous estimators, especially when the proportion of seeds in the sample is large. We recommend that our new estimator be used in RDS studies, in particular when the number of seeds is large or ...

  12. Flexible sampling large-scale social networks by self-adjustable random walk

    Science.gov (United States)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and often unavailability of OSN population data. Sampling perhaps becomes the only feasible solution to the problems. How to draw samples that can represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most of the empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real and complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling by comparative evaluations of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
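
One of the baseline methods mentioned above, MHRW (Metropolis-Hastings random walk), corrects the degree bias of a plain random walk so that nodes are visited approximately uniformly. A minimal sketch on a toy graph (this illustrates the MHRW baseline, not the SARW method; the 5-node graph is hypothetical):

```python
import random

def mhrw_sample(adj, start, steps, seed=0):
    """Metropolis-Hastings random walk: propose a uniform neighbor v of u
    and accept with probability min(1, deg(u)/deg(v)), which makes the
    stationary distribution uniform over nodes rather than degree-biased."""
    rng = random.Random(seed)
    u, visits = start, {}
    for _ in range(steps):
        v = rng.choice(adj[u])
        if rng.random() < min(1.0, len(adj[u]) / len(adj[v])):
            u = v                     # accept the move; otherwise stay at u
        visits[u] = visits.get(u, 0) + 1
    return visits

# Small test graph: hub 0 joined to nodes 1-4, plus one edge between 1 and 2.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0], 4: [0]}
visits = mhrw_sample(adj, 0, 20000)
```

A plain random walk would visit the hub in proportion to its degree; with the Metropolis correction, every node's visit frequency converges to about 1/5.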

  13. Research on wind field algorithm of wind lidar based on BP neural network and grey prediction

    Science.gov (United States)

    Chen, Yong; Chen, Chun-Li; Luo, Xiong; Zhang, Yan; Yang, Ze-hou; Zhou, Jie; Shi, Xiao-ding; Wang, Lei

    2018-01-01

    This paper uses a BP neural network and the grey algorithm to forecast and study the radar wind field. To reduce the residual error of wind field predictions made with the BP neural network and the grey algorithm, the method calculates the minimum value of the residual error function, trains a BP neural network on the residuals of the grey algorithm, uses the trained network model to forecast the residual sequence, and uses the predicted residual sequence to correct the forecast sequence of the grey algorithm. The test data show that the grey algorithm corrected by the BP neural network can effectively reduce the residual value and improve the prediction precision.
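
The pipeline described — a grey-model forecast plus a learned residual correction — can be sketched as follows. This is an illustrative GM(1,1) grey-model implementation on a hypothetical wind-speed series; a simple mean-residual offset stands in for the trained BP network:

```python
import numpy as np

def gm11_fit(x0):
    """Fit a GM(1,1) grey model to the series x0 and return a forecaster."""
    x1 = np.cumsum(x0)                           # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background (mean) values
    B = np.column_stack([-z1, np.ones(len(z1))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

    def forecast(k):                             # k = 0 is the first point
        if k == 0:
            return x0[0]
        x1k  = (x0[0] - b / a) * np.exp(-a * k)       + b / a
        x1k1 = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
        return x1k - x1k1                        # de-accumulate

    return forecast

# Hypothetical wind-speed series (m/s), roughly exponential in trend.
x0 = np.array([5.2, 5.6, 6.1, 6.5, 7.1, 7.6])
f = gm11_fit(x0)
fitted = np.array([f(k) for k in range(len(x0))])

# Stand-in for the trained BP residual model: a constant mean-residual offset.
residual_offset = float(np.mean(x0 - fitted))
corrected_next = f(len(x0)) + residual_offset
```

The structure mirrors the abstract: a grey forecast, a model of its residuals, and a corrected final forecast; swapping the offset for a real neural network would not change the flow.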

  14. Empirically determining the sample size for large-scale gene network inference algorithms.

    Science.gov (United States)

    Altay, G

    2012-04-01

    The performance of genome-wide gene regulatory network inference algorithms depends on the sample size. It is generally considered that the larger the sample size, the better the gene network inference performance. Nevertheless, there is not adequate information on determining the sample size for optimal performance. In this study, the author systematically demonstrates the effect of sample size on information-theory-based gene network inference algorithms with an ensemble approach. The empirical results showed that the inference performances of the considered algorithms tend to converge after a particular sample size region. As a specific example, a sample size of around ≃64 is sufficient to obtain most of the inference performance with respect to precision using the representative algorithm C3NET on synthetic steady-state data sets of Escherichia coli and also on a time-series data set of Homo sapiens subnetworks. The author verified the convergence result on a large, real data set of E. coli as well. The results give biologists evidence to better design experiments to infer gene networks. Further, the effect of cutoff on inference performances over various sample sizes is considered. [Includes supplementary material].
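
The sample-size effect can be illustrated with a toy correlation-based inference on synthetic data. This is not the C3NET algorithm; the three-gene model, the 0.5 cutoff, and the sample sizes are arbitrary choices for demonstration:

```python
import numpy as np

rng = np.random.default_rng(42)

def infer_edges(data, cutoff=0.5):
    """Toy network inference: declare an edge between two variables
    (rows of `data`) when their |Pearson correlation| exceeds `cutoff`."""
    c = np.corrcoef(data)
    n = c.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(c[i, j]) > cutoff}

def simulate(n_samples):
    """Ground truth: gene0 regulates gene1; gene2 is independent."""
    g0 = rng.normal(size=n_samples)
    g1 = g0 + 0.5 * rng.normal(size=n_samples)
    g2 = rng.normal(size=n_samples)
    return np.vstack([g0, g1, g2])

edges_small = infer_edges(simulate(8))     # unstable at tiny sample sizes
edges_large = infer_edges(simulate(256))   # converges to the true edge set
```

At 8 samples the recovered edge set varies from run to run; by 256 samples it reliably equals the single true edge, mirroring the convergence behavior the abstract reports.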

  15. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Lars Buesing

    2011-11-01

    Full Text Available The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  16. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
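
For contrast with the spiking dynamics discussed above, the standard Gibbs sampler that the authors describe as inconsistent with spiking neurons can be sketched for a two-unit Boltzmann distribution (the coupling and bias values are hypothetical):

```python
import math, random

# Boltzmann distribution over two binary units:
# p(z1, z2) ~ exp(b1*z1 + b2*z2 + w*z1*z2)
w, b1, b2 = 1.0, -0.5, -0.5

def gibbs(steps, seed=1):
    """Systematic-scan Gibbs sampling: resample each unit from its
    full conditional, a sigmoid of the other unit's input."""
    rng = random.Random(seed)
    z1, z2 = 0, 0
    counts = {}
    for _ in range(steps):
        p1 = 1 / (1 + math.exp(-(b1 + w * z2)))   # p(z1 = 1 | z2)
        z1 = 1 if rng.random() < p1 else 0
        p2 = 1 / (1 + math.exp(-(b2 + w * z1)))   # p(z2 = 1 | z1)
        z2 = 1 if rng.random() < p2 else 0
        counts[(z1, z2)] = counts.get((z1, z2), 0) + 1
    return counts

counts = gibbs(50000)

# Exact probabilities by brute-force normalization, for comparison.
states = [(0, 0), (0, 1), (1, 0), (1, 1)]
unnorm = {s: math.exp(b1 * s[0] + b2 * s[1] + w * s[0] * s[1]) for s in states}
Z = sum(unnorm.values())
exact = {s: unnorm[s] / Z for s in states}
```

The empirical state frequencies converge to the exact Boltzmann probabilities; the paper's contribution is achieving the same goal with non-reversible chains compatible with spiking dynamics, which this reversible sketch does not attempt.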

  17. Composite Extension Finite Fields for Low Overhead Network Coding

    DEFF Research Database (Denmark)

    Heide, Janus; Roetter, Daniel Enrique Lucani

    2015-01-01

    packet. This work advocates the use of multiple composite extension finite fields to address these challenges. The key of our approach is to design a series of finite fields where increasingly larger fields are based on a previous smaller field. For example, the design of a field with 256 elements, F((2^2)^2)^2...

  18. Inference in Belief Network using Logic Sampling and Likelihood Weighing algorithms

    Directory of Open Access Journals (Sweden)

    K. S. JASMINE

    2013-11-01

    Full Text Available Over time, belief networks have become an increasingly popular mechanism for dealing with uncertainty in systems. It is known that computing the probability values of belief network nodes given a set of evidence is not tractable in general. Many different simulation algorithms for approximating the solution to this problem have been proposed and implemented. This paper details the implementation of such algorithms; in particular, two belief network algorithms, logic sampling and likelihood weighting, are discussed. A detailed description of each algorithm is given, along with observed results. These algorithms play crucial roles in dynamic decision making in any situation of uncertainty.
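
Both algorithms are easy to state on a minimal two-node network (the probabilities are hypothetical: rain causes wet grass, and we query P(rain | wet)):

```python
import random

# Two-node belief network: P(rain) = 0.2, P(wet | rain) = 0.9, P(wet | ~rain) = 0.1.
P_RAIN, P_WET = 0.2, {True: 0.9, False: 0.1}
rng = random.Random(0)

def logic_sampling(n):
    """Logic (rejection) sampling: sample the whole network top-down and
    discard samples inconsistent with the evidence wet = True."""
    kept = rain_count = 0
    for _ in range(n):
        rain = rng.random() < P_RAIN
        wet = rng.random() < P_WET[rain]
        if wet:                       # keep only samples matching the evidence
            kept += 1
            rain_count += rain
    return rain_count / kept

def likelihood_weighting(n):
    """Likelihood weighting: clamp the evidence node and weight each
    sample by the likelihood of the evidence given its parents."""
    num = den = 0.0
    for _ in range(n):
        rain = rng.random() < P_RAIN
        w = P_WET[rain]               # weight = P(wet = True | rain)
        num += w * rain
        den += w
    return num / den

est_ls = logic_sampling(50000)
est_lw = likelihood_weighting(50000)

# Exact posterior: 0.2*0.9 / (0.2*0.9 + 0.8*0.1) = 0.6923...
exact = 0.18 / 0.26
```

Both converge to the same posterior; the practical difference is that logic sampling wastes the ~74% of samples rejected here, while likelihood weighting uses every sample, which matters greatly when the evidence is unlikely.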

  19. Operable Unit 3-13, Group 3, Other Surface Soils (Phase II) Field Sampling Plan

    Energy Technology Data Exchange (ETDEWEB)

    G. L. Schwendiman

    2006-07-27

    This Field Sampling Plan describes the Operable Unit 3-13, Group 3, Other Surface Soils, Phase II remediation field sampling activities to be performed at the Idaho Nuclear Technology and Engineering Center located within the Idaho National Laboratory Site. Sampling activities described in this plan support characterization sampling of new sites, real-time soil spectroscopy during excavation, and confirmation sampling that verifies that the remedial action objectives and remediation goals presented in the Final Record of Decision for Idaho Nuclear Technology and Engineering Center, Operable Unit 3-13 have been met.

  20. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. Contact: dmachado@deb.uminho.pt. Supplementary data are available at Bioinformatics online.

  1. Validation of networks derived from snowball sampling of municipal science education actors

    DEFF Research Database (Denmark)

    von der Fehr, Ane; Sølberg, Jan; Bruun, Jesper

    2016-01-01

    Social network analysis (SNA) has been used in many educational studies in the past decade, but what these studies have in common is that the populations in question in most cases are defined and known to the researchers studying the networks. Snowball sampling is an SNA methodology most often used...... to study hidden populations, for example, groups of homosexual people, drug users or people with sexually transmitted diseases. By use of a snowball sampling approach, this study mapped municipal social networks of educational actors, who were otherwise hidden to the researchers. Subsequently...... predictions based on existing knowledge of the municipalities aligned with SNA data. However, these discrepancies could be explained by development in the municipalities in the time following previous investigations. This study shows that snowball sampling is an applicable method to use for mapping hidden...
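
The wave-by-wave growth of a snowball sample can be sketched as follows (the municipal actors and their links here are hypothetical):

```python
def snowball(adj, seeds, waves):
    """Snowball sampling: start from seed actors and, for `waves` rounds,
    add everyone named (linked to) by the current frontier."""
    sampled = set(seeds)
    frontier = set(seeds)
    for _ in range(waves):
        nxt = {v for u in frontier for v in adj[u]} - sampled
        sampled |= nxt
        frontier = nxt
    return sampled

# Hypothetical network of municipal actors: a tight cluster A-B-C
# loosely chained to D, E, F.
adj = {
    "A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"],
    "D": ["C", "E"], "E": ["D", "F"], "F": ["E"],
}
wave1 = snowball(adj, ["A"], 1)
wave3 = snowball(adj, ["A"], 3)
```

The sketch makes the method's known limitation visible: actors far from the seeds (here "F") remain hidden until enough waves are run, which is why seed choice and wave count matter for coverage.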

  2. Estimating the Size of a Large Network and its Communities from a Random Sample.

    Science.gov (United States)

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
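
PULSE itself is beyond a short sketch, but the information it exploits — total degrees versus induced-subgraph degrees — already supports a simple moment estimator under uniform vertex sampling: each edge incident to a sampled vertex lands inside the sample with probability about (n-1)/(N-1). A sketch on a simulated graph (this is not the PULSE algorithm, and the graph parameters are arbitrary):

```python
import random

def estimate_population(total_degs, induced_degs, n):
    """Moment estimator: E[induced degree] ~ total degree * (n-1)/(N-1),
    so N ~ 1 + (n-1) * sum(total) / sum(induced)."""
    return 1 + (n - 1) * sum(total_degs) / sum(induced_degs)

# Simulate an Erdos-Renyi-style graph on N = 500 nodes, sample n = 100.
rng = random.Random(3)
N, p, n = 500, 0.02, 100
edges = [(i, j) for i in range(N) for j in range(i + 1, N) if rng.random() < p]
sample = set(rng.sample(range(N), n))

total = {v: 0 for v in range(N)}
induced = {v: 0 for v in sample}
for i, j in edges:
    total[i] += 1
    total[j] += 1
    if i in sample and j in sample:
        induced[i] += 1
        induced[j] += 1

N_hat = estimate_population([total[v] for v in sample],
                            [induced[v] for v in sample], n)
```

The estimate recovers the true N = 500 to within sampling noise; PULSE improves on this kind of estimator by also using block memberships from the stochastic block model.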

  3. Communication: Multiple atomistic force fields in a single enhanced sampling simulation

    Science.gov (United States)

    Hoang Viet, Man; Derreumaux, Philippe; Nguyen, Phuong H.

    2015-07-01

    The main concerns of biomolecular dynamics simulations are the convergence of the conformational sampling and the dependence of the results on the force fields. While the first issue can be addressed by employing enhanced sampling techniques such as simulated tempering or replica exchange molecular dynamics, repeating these simulations with different force fields is very time consuming. Here, we propose an automatic method that includes different force fields in a single advanced sampling simulation. Conformational sampling using three all-atom force fields is enhanced by simulated tempering, and by formulating the weight parameters of the simulated tempering method in terms of the energy fluctuations, the system is able to perform a random walk in both temperature and force-field spaces. The method is first demonstrated on a 1D system and then validated by the folding of the 10-residue chignolin peptide in explicit water.
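
The temperature-space random walk at the heart of simulated tempering can be sketched on a 1D double-well potential. Equal tempering weights are used here for simplicity, whereas the paper derives them from energy fluctuations; the potential, temperature ladder, and step size are arbitrary illustration choices:

```python
import math, random

# Simulated tempering on U(x) = (x^2 - 1)^2: the chain makes Metropolis
# moves in x AND in a temperature index, so visits to high temperatures
# let it cross the barrier between the two wells at x = -1 and x = +1.
temps = [0.1, 0.3, 1.0]
weights = [0.0, 0.0, 0.0]   # log-weights g_i; equal here for simplicity

def U(x):
    return (x * x - 1) ** 2

def simulated_tempering(steps, seed=5):
    rng = random.Random(5 if seed is None else seed)
    x, i = -1.0, 0
    crossings, prev_side = 0, -1
    for _ in range(steps):
        # Metropolis move in x at the current temperature temps[i].
        y = x + rng.gauss(0, 0.3)
        if rng.random() < math.exp(min(0.0, -(U(y) - U(x)) / temps[i])):
            x = y
        # Propose a neighboring temperature index.
        j = max(0, min(len(temps) - 1, i + rng.choice([-1, 1])))
        logr = -U(x) / temps[j] + U(x) / temps[i] + weights[j] - weights[i]
        if rng.random() < math.exp(min(0.0, logr)):
            i = j
        side = 1 if x > 0 else -1
        if side != prev_side:
            crossings += 1
            prev_side = side
    return crossings

crossings = simulated_tempering(20000)
```

At the lowest temperature alone the chain would almost never cross the barrier; the temperature moves restore mixing between the wells, which is the mechanism the paper extends to a joint temperature/force-field space.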

  4. Network methods for describing sample relationships in genomic datasets: application to Huntington's disease.

    Science.gov (United States)

    Oldham, Michael C; Langfelder, Peter; Horvath, Steve

    2012-06-12

    Genomic datasets generated by new technologies are increasingly prevalent in disparate areas of biological research. While many studies have sought to characterize relationships among genomic features, commensurate efforts to characterize relationships among biological samples have been less common. Consequently, the full extent of sample variation in genomic studies is often under-appreciated, complicating downstream analytical tasks such as gene co-expression network analysis. Here we demonstrate the use of network methods for characterizing sample relationships in microarray data generated from human brain tissue. We describe an approach for identifying outlying samples that does not depend on the choice or use of clustering algorithms. We introduce a battery of measures for quantifying the consistency and integrity of sample relationships, which can be compared across disparate studies, technology platforms, and biological systems. Among these measures, we provide evidence that the correlation between the connectivity and the clustering coefficient (two important network concepts) is a sensitive indicator of homogeneity among biological samples. We also show that this measure, which we refer to as cor(K,C), can distinguish biologically meaningful relationships among subgroups of samples. Specifically, we find that cor(K,C) reveals the profound effect of Huntington's disease on samples from the caudate nucleus relative to other brain regions. Furthermore, we find that this effect is concentrated in specific modules of genes that are naturally co-expressed in human caudate nucleus, highlighting a new strategy for exploring the effects of disease on sets of genes. These results underscore the importance of systematically exploring sample relationships in large genomic datasets before seeking to analyze genomic feature activity. We introduce a standardized platform for this purpose using freely available R software that has been designed to enable iterative and

  5. Epidemic risk from friendship network data: an equivalence with a non-uniform sampling of contact networks

    CERN Document Server

    Fournet, Julie

    2016-01-01

    Contacts between individuals play an important role in determining how infectious diseases spread. Various methods to gather data on such contacts co-exist, from surveys to wearable sensors. Comparisons of data obtained by different methods in the same context are however scarce, in particular with respect to their use in data-driven models of spreading processes. Here, we use a combined data set describing contacts registered by sensors and friendship relations in the same population to address this issue in a case study. We investigate if the use of the friendship network is equivalent to a sampling procedure performed on the sensor contact network with respect to the outcome of simulations of spreading processes: such an equivalence might indeed give hints on ways to compensate for the incompleteness of contact data deduced from surveys. We show that this is indeed the case for these data, for a specifically designed sampling procedure, in which respondents report their neighbors with a probability dependi...

  6. Accuracy of mean-field theory for dynamics on real-world networks.

    Science.gov (United States)

    Gleeson, James P; Melnik, Sergey; Ward, Jonathan A; Porter, Mason A; Mucha, Peter J

    2012-02-01

    Mean-field analysis is an important tool for understanding dynamics on complex networks. However, surprisingly little attention has been paid to the question of whether mean-field predictions are accurate, and this is particularly true for real-world networks with clustering and modular structure. In this paper, we compare mean-field predictions to numerical simulation results for dynamical processes running on 21 real-world networks and demonstrate that the accuracy of such theory depends not only on the mean degree of the networks but also on the mean first-neighbor degree. We show that mean-field theory can give (unexpectedly) accurate results for certain dynamics on disassortative real-world networks even when the mean degree is as low as 4.
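
The two quantities the accuracy is said to depend on — the mean degree and the mean first-neighbor degree — are straightforward to compute from an adjacency list. A small sketch (the star graph is a hypothetical example):

```python
def degree_stats(adj):
    """Mean degree <k> and mean first-neighbor degree <k_nn>: the average,
    over all edge endpoints, of the degree found at the other end."""
    degs = {u: len(vs) for u, vs in adj.items()}
    mean_k = sum(degs.values()) / len(degs)
    nn = [degs[v] for u in adj for v in adj[u]]
    mean_knn = sum(nn) / len(nn)
    return mean_k, mean_knn

# Star graph on 5 nodes: the hub has degree 4, each leaf degree 1.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
k, knn = degree_stats(adj)
```

Note that <k_nn> (2.5) exceeds <k> (1.6) — the "friendship paradox" — and the gap between the two is largest in heterogeneous networks, which is exactly where the abstract warns mean-field accuracy needs the extra first-neighbor information.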

  7. Performance evaluation of an importance sampling technique in a Jackson network

    Science.gov (United States)

    Mahdipour, Ebrahim; Rahmani, Amir Masoud; Setayeshi, Saeed

    2014-03-01

    Importance sampling is a technique that is commonly used to speed up Monte Carlo simulation of rare events. However, little is known regarding the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system using an a priori fixed change of measure suggested by large deviation analysis, has been shown to fail in even the simplest network settings. Estimating probabilities associated with rare events has been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We have estimated the probability of network blocking for various sets of parameters, and also the probability of missing the deadline of customers for different loads and deadlines. We finally show that the probability of total population overflow may be affected by various deadline values, service rates and arrival rates.
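
The classic change of measure for this kind of rare-event estimation exchanges arrival and service rates. A minimal sketch for the simpler single-queue overflow probability (not the article's two-node modulated network; the rates and overflow level are arbitrary):

```python
import random

lam, mu, L = 0.3, 0.7, 15     # arrival rate, service rate, overflow level
p = lam / (lam + mu)          # probability the embedded walk steps up

def is_overflow_prob(n_runs, seed=7):
    """Estimate P(queue hits L before emptying | start at 1) by importance
    sampling under the exchanged-rates change of measure: simulate with
    up-probability q = 1 - p and correct each step with a likelihood ratio."""
    rng = random.Random(seed)
    q = 1 - p
    total = 0.0
    for _ in range(n_runs):
        x, lr = 1, 1.0
        while 0 < x < L:
            if rng.random() < q:      # up-step under the IS measure
                x += 1
                lr *= p / q           # true prob p, simulated prob q
            else:
                x -= 1
                lr *= q / p
        if x == L:
            total += lr               # weight successful paths by their ratio
    return total / n_runs

est = is_overflow_prob(20000)

# Exact gambler's-ruin value for comparison.
r = (1 - p) / p
exact = (1 - r) / (1 - r ** L)
```

With these rates the target probability is around 4e-6; naive simulation with 20000 runs would almost never observe the event, while the importance sampling estimate lands within a few percent because every successful path carries the same likelihood ratio.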

  8. Social Representations of Hero and Everyday Hero: A Network Study from Representative Samples.

    Science.gov (United States)

    Keczer, Zsolt; File, Bálint; Orosz, Gábor; Zimbardo, Philip G

    2016-01-01

    The psychological investigation of heroism is relatively new. At this stage, inductive methods can shed light on its main aspects. Therefore, we examined the social representations of Hero and Everyday Hero by collecting word associations from two separate representative samples in Hungary. We constructed two networks from these word associations. The results show that the social representation of Hero is more centralized and cannot be divided into smaller units. The network of Everyday Hero is divided into five units, and the significance moves from abstract hero characteristics to concrete social roles and occupations exhibiting pro-social values. We also created networks from the common associations of Hero and Everyday Hero. The structures of these networks show a moderate similarity, and the connections are more balanced in the case of Everyday Hero. While heroism in general can be a source of inspiration, the promotion of everyday heroism can be more successful in encouraging ordinary people to recognize their own potential for heroic behavior.

  9. Sampling and measurement issues in establishing a climate reference upper air network

    Science.gov (United States)

    Gardiner, T.; Madonna, F.; Wang, J.; Whiteman, D. N.; Dykema, J.; Fassò, A.; Thorne, P. W.; Bodeker, G.

    2013-09-01

    The GCOS Reference Upper Air Network (GRUAN) is an international reference observing network, designed to meet climate requirements and to fill a major void in the current global observing system. Upper air observations within the GRUAN network will provide long-term high-quality climate records, will be used to constrain and validate data from space based remote sensors, and will provide accurate data for the study of atmospheric processes. The network covers measurements of a range of key climate variables including temperature. Implementation of the network has started, and as part of this process a number of scientific questions need to be addressed in order to establish a viable climate reference upper air network, in addition to meeting the other objectives for the network measurements. These include quantifying collocation issues for different measurement techniques including the impact on the overall uncertainty of combined measurements; change management requirements when switching between sensors; assessing the benefit of complementary measurements of the same variable using different measurement techniques; and establishing the appropriate sampling strategy to determine long-term trends. This paper reviews the work that is currently underway to address these issues.

  10. Guidelines for collection and field analysis of water-quality samples from streams in Texas

    Science.gov (United States)

    Wells, F.C.; Gibbons, W.J.; Dorsey, M.E.

    1990-01-01

    This manual provides standardized guidelines and quality-control procedures for the collection and preservation of water-quality samples and defines procedures for making field analyses of unstable constituents or properties.

  11. Mean-field equations for neuronal networks with arbitrary degree distributions.

    Science.gov (United States)

    Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex

    2017-04-01

    The emergent dynamics in networks of recurrently coupled spiking neurons depend on the interplay between single-cell dynamics and network topology. Most theoretical studies of network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (an Erdős-Rényi (ER) network) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.

  12. Performance Evaluation and Parameter Optimization of Wavelength Division Multiplexing Networks with Importance Sampling Techniques

    NARCIS (Netherlands)

    Remondo Bueno, D.; Srinivasan, R.; Nicola, V.F.; van Etten, Wim; Tattje, H.E.P.

    1998-01-01

In this paper, new adaptive importance sampling techniques are applied to the performance evaluation and parameter optimization of wavelength division multiplexing (WDM) networks impaired by crosstalk in an optical cross-connect. Worst-case analysis is carried out, including all the beat noise terms

  13. Efficient Importance Sampling Heuristics for the Simulation of Population Overflow in Feed-Forward Queueing Networks

    NARCIS (Netherlands)

    Nicola, V.F.; Zaburnenko, T.S.

    2006-01-01

In this paper we propose a state-dependent importance sampling heuristic to estimate the probability of population overflow in feed-forward networks. This heuristic attempts to approximate the "optimal" state-dependent change of measure without the need for difficult analysis or costly

  14. Adaptive state-dependent importance sampling simulation of Markovian queueing networks

    NARCIS (Netherlands)

    de Boer, Pieter-Tjerk; Nicola, V.F.

    2002-01-01

In this paper, a method is presented for the efficient estimation of rare-event (buffer overflow) probabilities in queueing networks using importance sampling. Unlike previously proposed changes of measure, the one used here is not static, i.e., it depends on the buffer contents at each of the

  15. Analysis of a copper sample for the CLIC ACS study in a field emission scanning microscope

    CERN Document Server

    Muranaka, Tomoko; Leifer, Klaus; Ziemann, Volker; Navitski, Aliaksandr; Müller, Günter

    2011-01-01

We report measurements on a diamond-turned copper sample of material intended for the CLIC accelerating structures. The first part of the measurements was performed at Bergische Universität Wuppertal using a field emission scanning microscope to localize and characterize strong emission sites. In the second part, the sample was investigated with an optical microscope, a white-light profilometer, and a scanning electron microscope in the microstructure laboratory in Uppsala to attempt to identify the features responsible for the field emission.

  16. Feasible sampling plan for Bemisia tabaci control decision-making in watermelon fields.

    Science.gov (United States)

    Lima, Carlos Ho; Sarmento, Renato A; Pereira, Poliana S; Galdino, Tarcísio Vs; Santos, Fábio A; Silva, Joedna; Picanço, Marcelo C

    2017-11-01

The silverleaf whitefly Bemisia tabaci is one of the most important pests of watermelon fields worldwide. Conventional sampling plans are the starting point for the generation of decision-making systems of integrated pest management programs. The aim of this study was to determine a conventional sampling plan for B. tabaci in watermelon fields. The optimal leaf for B. tabaci adult sampling was the 6th most apical leaf. Direct counting was the best pest sampling technique. Crop pest densities fitted the negative binomial distribution and had a common aggregation parameter (Kcommon). The sampling plan consisted of evaluating 103 samples per plot. Each sampling took 56 min, cost US$ 2.22, and had a 10% maximum evaluation error. The sampling plan determined in this study can be adopted by farmers because it enables the adequate evaluation of B. tabaci populations in watermelon fields (10% maximum evaluation error) and is a low-cost (US$ 2.22 per sampling), fast (56 min per sampling) and feasible (because it may be used in a standardized way throughout the crop cycle) technique. © 2017 Society of Chemical Industry.
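The plan above (103 samples, 10% maximum error, negative binomial counts with a common k) is in the spirit of the classical enumerative sample-size formula n = (1/D²)(1/m + 1/k). A sketch with hypothetical densities, since the fitted m and k values are not given in the abstract:

```python
import math

def negbin_sample_size(mean_density, k, precision=0.10):
    """Number of samples needed so the standard error of the mean equals
    `precision` times the mean, for counts following a negative binomial
    with aggregation parameter k: n = (1/D^2) * (1/m + 1/k)."""
    return math.ceil((1.0 / precision ** 2) * (1.0 / mean_density + 1.0 / k))

# Hypothetical values: 2 adults per leaf on average, k = 0.5.
print(negbin_sample_size(2.0, 0.5))        # 250 samples at 10% precision
print(negbin_sample_size(2.0, 0.5, 0.20))  # 63 samples at 20% precision
```

Relaxing the precision target shrinks the required sample count quadratically, which is why the 10% criterion drives the cost of the plan.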

  17. DOES THE VARIATION OF THE SOLAR INTRA-NETWORK HORIZONTAL FIELD FOLLOW THE SUNSPOT CYCLE?

    Energy Technology Data Exchange (ETDEWEB)

    Jin, C. L.; Wang, J. X., E-mail: cljin@nao.cas.cn [Key Laboratory of Solar Activity, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012 (China)

    2015-07-01

The ubiquitousness of the solar inter-network horizontal magnetic field has been revealed by space-borne observations with high spatial resolution and polarization sensitivity. However, no consensus has been reached among solar physicists on the origin of the horizontal field. For a better understanding, in this study we analyze the cyclic variation of the inter-network horizontal field using the spectro-polarimeter observations provided by the Solar Optical Telescope on board Hinode, covering the interval from 2008 April to 2015 February. The method of wavelength integration is adopted to achieve a high signal-to-noise ratio. It is found that from 2008 to 2015 the inter-network horizontal field does not vary as solar activity increases, and the average flux density of the inter-network horizontal field is 87 ± 1 G. In addition, the imbalance between horizontal and vertical fields also remains invariant within the measurement uncertainty, i.e., 8.7 ± 0.5, from the solar minimum to the maximum of solar cycle 24. This result confirms that the inter-network horizontal field is independent of the sunspot cycle. The revelation favors the idea that a local dynamo is creating and maintaining the solar inter-network horizontal field.

  18. Towards a system level understanding of non-model organisms sampled from the environment: a network biology approach.

    Directory of Open Access Journals (Sweden)

    Tim D Williams

    2011-08-01

    Full Text Available The acquisition and analysis of datasets including multi-level omics and physiology from non-model species, sampled from field populations, is a formidable challenge, which so far has prevented the application of systems biology approaches. If successful, these could contribute enormously to improving our understanding of how populations of living organisms adapt to environmental stressors relating to, for example, pollution and climate. Here we describe the first application of a network inference approach integrating transcriptional, metabolic and phenotypic information representative of wild populations of the European flounder fish, sampled at seven estuarine locations in northern Europe with different degrees and profiles of chemical contaminants. We identified network modules, whose activity was predictive of environmental exposure and represented a link between molecular and morphometric indices. These sub-networks represented both known and candidate novel adverse outcome pathways representative of several aspects of human liver pathophysiology such as liver hyperplasia, fibrosis, and hepatocellular carcinoma. At the molecular level these pathways were linked to TNF alpha, TGF beta, PDGF, AGT and VEGF signalling. More generally, this pioneering study has important implications as it can be applied to model molecular mechanisms of compensatory adaptation to a wide range of scenarios in wild populations.

  19. Percolating macropore networks in tilled topsoil: effects of sample size, minimum pore thickness and soil type

    Science.gov (United States)

    Jarvis, Nicholas; Larsbo, Mats; Koestel, John; Keck, Hannes

    2017-04-01

The long-range connectivity of macropore networks may exert a strong control on near-saturated and saturated hydraulic conductivity and the occurrence of preferential flow through soil. It has been suggested that percolation concepts may provide a suitable theoretical framework to characterize and quantify macropore connectivity, although this idea has not yet been thoroughly investigated. We tested the applicability of percolation concepts to describe macropore networks quantified by X-ray scanning at a resolution of 0.24 mm in eighteen cylinders (20 cm diameter and height) sampled from the ploughed layer of four soils of contrasting texture in east-central Sweden. The analyses were performed for sample sizes ("regions of interest", ROIs) varying between 3 and 12 cm in cube side-length and for minimum pore thicknesses ranging between image resolution and 1 mm. Finite sample size effects were clearly found for ROIs of cube side-length smaller than ca. 6 cm. For larger sample sizes, the results showed the relevance of percolation concepts to soil macropore networks, with a close relationship found between imaged porosity and the fraction of the pore space which percolated (i.e. was connected from top to bottom of the ROI). The percolating fraction increased rapidly as a function of porosity above a small percolation threshold (1-4%). This reflects the ordered nature of the pore networks. The percolation relationships were similar for all four soils. Although pores larger than 1 mm appeared to be somewhat better connected, only small effects of minimum pore thickness were noted across the range of tested pore sizes. The utility of percolation concepts to describe the connectivity of more anisotropic macropore networks (e.g. in subsoil horizons) should also be tested, although with current X-ray scanning equipment it may prove difficult in many cases to analyze sufficiently large samples that would avoid finite size effects.
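The spanning check behind the "percolating fraction" — whether the pore space is connected from top to bottom of the ROI — can be sketched in 2-D with a breadth-first search. The study itself works on 3-D X-ray images; the grid sizes and porosities below are illustrative only.

```python
import numpy as np
from collections import deque

def percolates(grid):
    """True if True cells form a 4-connected path from the top row to the
    bottom row of a 2-D boolean array (a site-percolation spanning check)."""
    rows, cols = grid.shape
    seen = np.zeros_like(grid, dtype=bool)
    queue = deque()
    for c in range(cols):          # seed the search with all open top-row cells
        if grid[0, c]:
            queue.append((0, c))
            seen[0, c] = True
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:          # reached the bottom row: the sample percolates
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] and not seen[nr, nc]:
                seen[nr, nc] = True
                queue.append((nr, nc))
    return False

rng = np.random.default_rng(42)
dense = rng.random((50, 50)) < 0.90   # porosity far above the 2-D site threshold (~0.593)
sparse = rng.random((50, 50)) < 0.05  # porosity far below it
print(percolates(dense), percolates(sparse))
```

Repeating this over many porosities traces out the sharp rise of the percolating fraction above the threshold that the paper reports for soil macropores.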

  20. Sampling of high amounts of bioaerosols using a high-volume electrostatic field sampler

    DEFF Research Database (Denmark)

    Madsen, A. M.; Sharma, Anoop Kumar

    2008-01-01

    and 315 mg dust (net recovery of the lyophilized dust) was sampled during a period of 7 days, respectively. The sampling rates of the electrostatic field samplers were between 1.34 and 1.96 mg dust per hour, the value for the Gravikon was between 0.083 and 0.108 mg dust per hour and the values for the GSP...

  1. Uniform sampling of steady states in metabolic networks: heterogeneous scales and rounding.

    Directory of Open Access Journals (Sweden)

    Daniele De Martino

Full Text Available The uniform sampling of convex polytopes is an interesting computational problem with many applications in inference from linear constraints, but the performance of sampling algorithms can be affected by ill-conditioning. This is the case when inferring the feasible steady states in models of metabolic networks, since they can show heterogeneous time scales. In this work we focus on rounding procedures based on building an ellipsoid that closely matches the sampling space, which can be used to define an efficient hit-and-run (HR) Markov chain Monte Carlo. In this way the uniformity of the sampling of the convex space of interest is rigorously guaranteed, at odds with non-Markovian methods. We analyze and compare three rounding methods in order to sample the feasible steady states of metabolic networks of three models of growing size up to genomic scale. The first is based on principal component analysis (PCA), the second on linear programming (LP), and finally we employ the Lovász ellipsoid method (LEM). Our results show that a rounding procedure dramatically improves the performance of the HR in these inference problems and suggest that a combination of LEM or LP with a subsequent PCA performs best. We finally compare the distributions of the HR with those of two heuristics based on the Artificially Centered hit-and-run (ACHR), gpSampler and optGpSampler. They show a good agreement with the results of the HR for the small network, while on genome-scale models they present inconsistencies.
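A minimal hit-and-run sampler over a polytope {x : Ax ≤ b} shows the core update that the rounding procedures are meant to accelerate. The toy unit-square polytope below stands in for a far larger and more ill-conditioned metabolic flux space, and the rounding step itself is omitted.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, seed=0):
    """Uniform hit-and-run sampling of the polytope {x : A @ x <= b},
    starting from a strictly interior point x0 (minimal sketch; metabolic
    models would first apply a rounding transformation)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)        # uniform random direction
        slack = b - A @ x             # distance of each constraint from binding
        proj = A @ d                  # movement along d toward each constraint
        t_max = np.min(slack[proj > 1e-12] / proj[proj > 1e-12])
        t_min = np.max(slack[proj < -1e-12] / proj[proj < -1e-12])
        x = x + rng.uniform(t_min, t_max) * d   # uniform point on the chord
        samples.append(x.copy())
    return np.array(samples)

# Unit square as a toy "flux space": 0 <= x_i <= 1.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
S = hit_and_run(A, b, x0=[0.5, 0.5], n_samples=4000)
print(np.round(S.mean(axis=0), 2))
```

On an elongated, badly scaled polytope the chords found this way are nearly all short in the thin direction, which is exactly the mixing problem that PCA/LP/LEM rounding addresses.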

  2. Artificial Neural Network for Total Laboratory Automation to Improve the Management of Sample Dilution.

    Science.gov (United States)

    Ialongo, Cristiano; Pieri, Massimo; Bernardini, Sergio

    2017-02-01

    Diluting a sample to obtain a measure within the analytical range is a common task in clinical laboratories. However, for urgent samples, it can cause delays in test reporting, which can put patients' safety at risk. The aim of this work is to show a simple artificial neural network that can be used to make it unnecessary to predilute a sample using the information available through the laboratory information system. Particularly, the Multilayer Perceptron neural network built on a data set of 16,106 cardiac troponin I test records produced a correct inference rate of 100% for samples not requiring predilution and 86.2% for those requiring predilution. With respect to the inference reliability, the most relevant inputs were the presence of a cardiac event or surgery and the result of the previous assay. Therefore, such an artificial neural network can be easily implemented into a total automation framework to sensibly reduce the turnaround time of critical orders delayed by the operation required to retrieve, dilute, and retest the sample.
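A minimal numpy sketch of the idea, not the paper's Multilayer Perceptron or its 16,106-record data set: a one-hidden-layer network trained on synthetic LIS-like features (a scaled previous result and a cardiac-event flag, both hypothetical) to flag samples needing predilution.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for LIS features (hypothetical units): a scaled
# previous troponin result and a cardiac-event/surgery flag.
n = 400
prev_result = rng.uniform(0.0, 2.0, size=n)
event_flag = rng.integers(0, 2, size=n).astype(float)
X = np.column_stack([prev_result, event_flag])
y = (prev_result > 1.2).astype(float)  # label: predilution needed

# One-hidden-layer perceptron trained by full-batch gradient descent
# (a minimal MLP sketch, not the paper's actual model or data).
W1 = rng.normal(0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(3000):
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    g_out = (p - y)[:, None] / n            # d(cross-entropy)/d(output logit)
    g_h = (g_out @ W2.T) * h * (1.0 - h)    # backprop through the hidden layer
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * X.T @ g_h; b1 -= lr * g_h.sum(axis=0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
acc = float(((p > 0.5) == (y > 0.5)).mean())
print(round(acc, 2))
```

In a real deployment the inputs would come from the laboratory information system, and the class imbalance the paper reports (far fewer dilution cases) would need explicit handling.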

  3. Distributed H∞ Sampled-Data Filtering over Sensor Networks with Markovian Switching Topologies

    Directory of Open Access Journals (Sweden)

    Bin Yang

    2014-01-01

Full Text Available This paper considers a distributed H∞ sampled-data filtering problem in sensor networks with stochastically switching topologies. It is assumed that the topology switching is triggered by a Markov chain. The output measurement at each sensor is first sampled and then transmitted to the corresponding filters via a communication network. Considering the effect of a transmission delay, a distributed filter structure for each sensor is given based on the sampled data from itself and its neighbor sensor nodes. As a consequence, the distributed H∞ sampled-data filtering in sensor networks under Markovian switching topologies is transformed into an H∞ mean-square stability problem for a Markovian jump error system with an interval time-varying delay. By using a Lyapunov-Krasovskii functional and the reciprocally convex approach, a new bounded real lemma (BRL) is derived, which guarantees the mean-square stability of the error system with a desired H∞ performance. Based on this BRL, the topology-dependent H∞ sampled-data filters are obtained. An illustrative example is given to demonstrate the effectiveness of the proposed method.

  4. Soil specific re-calibration of water content sensors for a field-scale sensor network

    Science.gov (United States)

    Gasch, Caley K.; Brown, David J.; Anderson, Todd; Brooks, Erin S.; Yourek, Matt A.

    2015-04-01

    Obtaining accurate soil moisture data from a sensor network requires sensor calibration. Soil moisture sensors are factory calibrated, but multiple site specific factors may contribute to sensor inaccuracies. Thus, sensors should be calibrated for the specific soil type and conditions in which they will be installed. Lab calibration of a large number of sensors prior to installation in a heterogeneous setting may not be feasible, and it may not reflect the actual performance of the installed sensor. We investigated a multi-step approach to retroactively re-calibrate sensor water content data from the dielectric permittivity readings obtained by sensors in the field. We used water content data collected since 2009 from a sensor network installed at 42 locations and 5 depths (210 sensors total) within the 37-ha Cook Agronomy Farm with highly variable soils located in the Palouse region of the Northwest United States. First, volumetric water content was calculated from sensor dielectric readings using three equations: (1) a factory calibration using the Topp equation; (2) a custom calibration obtained empirically from an instrumented soil in the field; and (3) a hybrid equation that combines the Topp and custom equations. Second, we used soil physical properties (particle size and bulk density) and pedotransfer functions to estimate water content at saturation, field capacity, and wilting point for each installation location and depth. We also extracted the same reference points from the sensor readings, when available. Using these reference points, we re-scaled the sensor readings, such that water content was restricted to the range of values that we would expect given the physical properties of the soil. The re-calibration accuracy was assessed with volumetric water content measurements obtained from field-sampled cores taken on multiple dates. 
In general, the re-calibration was most accurate when all three reference points (saturation, field capacity, and wilting
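The re-scaling step, mapping sensor readings onto the range implied by the pedotransfer-derived reference points, can be sketched as a piecewise-linear transformation. All numeric values below are hypothetical.

```python
import numpy as np

def rescale_to_references(theta_sensor, sensor_refs, soil_refs):
    """Map raw sensor water contents onto the physically plausible range by
    piecewise-linear rescaling between matched reference points
    (wilting point, field capacity, saturation)."""
    return np.interp(theta_sensor, sensor_refs, soil_refs)

# Sensor-derived reference readings vs. pedotransfer-function estimates
# for one hypothetical installation depth, in m^3/m^3.
sensor_refs = [0.08, 0.30, 0.48]  # wilting point, field capacity, saturation
soil_refs = [0.12, 0.33, 0.45]

raw = np.array([0.08, 0.19, 0.30, 0.48])
print(rescale_to_references(raw, sensor_refs, soil_refs))
```

Because `np.interp` pins the three reference points exactly and interpolates linearly between them, re-calibrated values stay within the range the soil's physical properties allow.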

  5. Modeling Behavior in Different Delay Match to Sample Tasks in One Simple Network

    Directory of Open Access Journals (Sweden)

    Yali Amit

    2013-07-01

Full Text Available Delay match to sample (DMS) experiments provide an important link between the theory of recurrent network models and behavior and neural recordings. We define a simple recurrent network of binary neurons with stochastic neural dynamics and Hebbian synaptic learning. Most DMS experiments involve heavily learned images, and in this setting we propose a readout mechanism for match occurrence based on a smaller increment in overall network activity when the matched pattern is already in working memory, and a reset mechanism to clear memory from stimuli of previous trials using random network activity. Simulations show that this model accounts for a wide range of variations on the original DMS tasks, including ABBA tasks with distractors, and more general repetition detection tasks with both learned and novel images. The differences in network settings required for different tasks derive from easily defined changes in the levels of noise and inhibition. The same models can also explain experiments involving repetition detection with novel images, although in this case the readout mechanism for match is based on higher overall network activity. The models give rise to interesting predictions that may be tested in neural recordings.

  6. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  7. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
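The "explaining away" effect that such sampling networks must capture can be reproduced with plain rejection sampling on a textbook three-node Bayesian network. The probabilities are illustrative and unrelated to the paper's models.

```python
import numpy as np

# Rejection-sampling demonstration of "explaining away" in a small
# Bayesian network (burglary -> alarm <- earthquake); all probabilities
# are illustrative.
rng = np.random.default_rng(7)
n = 200_000
burglary = rng.random(n) < 0.10
earthquake = rng.random(n) < 0.10
p_alarm = np.where(burglary | earthquake, 0.90, 0.05)
alarm = rng.random(n) < p_alarm

# Observing the alarm raises belief in burglary; additionally observing
# an earthquake "explains away" the alarm and lowers it again.
p_b_given_a = float(burglary[alarm].mean())
p_b_given_a_e = float(burglary[alarm & earthquake].mean())
print(round(p_b_given_a, 2), round(p_b_given_a_e, 2))
```

Exact inference gives P(B | A) ≈ 0.43 but P(B | A, E) = 0.10, the converging-arrows dependency that makes such networks hard for purely feed-forward schemes.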

  8. Design, analysis, and interpretation of field quality-control data for water-sampling projects

    Science.gov (United States)

    Mueller, David K.; Schertz, Terry L.; Martin, Jeffrey D.; Sandstrom, Mark W.

    2015-01-01

    The process of obtaining and analyzing water samples from the environment includes a number of steps that can affect the reported result. The equipment used to collect and filter samples, the bottles used for specific subsamples, any added preservatives, sample storage in the field, and shipment to the laboratory have the potential to affect how accurately samples represent the environment from which they were collected. During the early 1990s, the U.S. Geological Survey implemented policies to include the routine collection of quality-control samples in order to evaluate these effects and to ensure that water-quality data were adequately representing environmental conditions. Since that time, the U.S. Geological Survey Office of Water Quality has provided training in how to design effective field quality-control sampling programs and how to evaluate the resultant quality-control data. This report documents that training material and provides a reference for methods used to analyze quality-control data.

  9. A method for under-sampled ecological network data analysis: plant-pollination as case study

    Directory of Open Access Journals (Sweden)

    Peter B. Sorensen

    2012-01-01

Full Text Available In this paper, we develop a method, termed the Interaction Distribution (ID) method, for the analysis of quantitative ecological network data. In many cases, quantitative network data sets are under-sampled, i.e. many interactions are poorly sampled or remain unobserved. Hence, the output of statistical analyses may fail to differentiate between patterns that are statistical artefacts and those which are real characteristics of ecological networks. The ID method can support assessment and inference of under-sampled ecological network data. In the current paper, we illustrate and discuss the ID method based on the properties of plant-animal pollination data sets of flower visitation frequencies. However, the ID method may be applied to other types of ecological networks. The method can supplement existing network analyses based on two definitions of the underlying probabilities for each combination of pollinator and plant species: (1) p_ij, the probability for a visit made by the i'th pollinator species to take place on the j'th plant species; (2) q_ij, the probability for a visit received by the j'th plant species to be made by the i'th pollinator. The method applies the Dirichlet distribution to estimate these two probabilities, based on a given empirical data set. The estimated mean values for p_ij and q_ij reflect the relative differences between recorded numbers of visits for different pollinator and plant species, and the estimated uncertainty of p_ij and q_ij decreases with higher numbers of recorded visits.
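Under a symmetric Dirichlet prior, the posterior mean estimates of p_ij and q_ij reduce to smoothed ratios of the visit counts. A sketch on a hypothetical pollinator-by-plant count matrix; the concentration parameter `alpha` is an assumption, not a value from the paper.

```python
import numpy as np

# Hypothetical pollinator-by-plant visit counts (rows: pollinators,
# cols: plants); alpha is a symmetric Dirichlet concentration parameter.
counts = np.array([[12, 3, 0],
                   [1, 8, 2],
                   [0, 0, 5]], dtype=float)
alpha = 1.0

# p[i, j]: probability that a visit made by pollinator i lands on plant j
# (Dirichlet posterior mean per pollinator row).
p = (counts + alpha) / (counts.sum(axis=1, keepdims=True) + alpha * counts.shape[1])
# q[i, j]: probability that a visit received by plant j was made by
# pollinator i (Dirichlet posterior mean per plant column).
q = (counts + alpha) / (counts.sum(axis=0, keepdims=True) + alpha * counts.shape[0])

print(np.round(p, 2))
print(np.round(q.sum(axis=0), 2))  # each plant column sums to 1
```

The `alpha` smoothing keeps unobserved interactions at a small but nonzero probability, which is exactly how the method avoids over-interpreting zeros in under-sampled data.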

  10. Accuracy and Effort of Interpolation and Sampling: Can GIS Help Lower Field Costs?

    Directory of Open Access Journals (Sweden)

    Greg Simpson

    2014-12-01

Full Text Available Sedimentation is a problem for all reservoirs in the Black Hills of South Dakota. Before working on sediment removal, a survey on the extent and distribution of the sediment is needed. Two sample lakes were used to determine which of three interpolation methods gave the most accurate volume results. A secondary goal was to see if fewer samples could be taken while still providing similar results. The smaller samples would mean less field time and thus lower costs. Subsamples of 50%, 33% and 25% were taken from the total samples and evaluated for the lowest Root Mean Squared Error values. Throughout the trials, the larger sample sizes generally showed better accuracy than smaller samples. Graphing the sediment volume estimates of the full sample, 50%, 33% and 25% showed little improvement beyond a sample of approximately 40%-50% when comparing the asymptotes of the separate samples. When we used smaller subsamples, the predicted sediment volumes were normally greater than the full-sample volumes. It is suggested that when planning future sediment surveys, workers gather data approximately every 5.21 meters. These sample sizes can be cut in half and still retain relative accuracy if time savings are needed. Volume estimates may suffer slightly with these reduced sample sizes, but the field-work savings can be of benefit. Results from these surveys are used in the prioritization of available funds for reclamation efforts.
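The effect of subsample size on interpolation error can be sketched in 1-D: thinning a smooth synthetic depth profile and interpolating back onto the full grid shows the RMSE growing as the sample fraction shrinks. The profile and fractions are illustrative, not the study's bathymetry data.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical smooth lake-bottom depth profile along a 100 m transect.
x_full = np.linspace(0.0, 100.0, 201)
depth_full = 5.0 + 2.0 * np.sin(x_full / 12.0)

# Interpolate from evenly thinned subsamples back onto the full grid.
for step, label in [(2, "50%"), (3, "33%"), (4, "25%")]:
    xs, ds = x_full[::step], depth_full[::step]
    est = np.interp(x_full, xs, ds)
    print(label, round(rmse(est, depth_full), 4))
```

For a smooth profile, linear-interpolation error scales roughly with the square of the sample spacing, matching the study's finding of diminishing returns above roughly half the full sample.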

  11. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V;E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...

  12. Computational ligand-based rational design: Role of conformational sampling and force fields in model development.

    Science.gov (United States)

    Shim, Jihyun; Mackerell, Alexander D

    2011-05-01

A significant number of drug discovery efforts are based on natural products or high-throughput screens from which compounds showing potential therapeutic effects are identified without knowledge of the target molecule or its 3D structure. In such cases computational ligand-based drug design (LBDD) can accelerate the drug discovery process. LBDD is a general approach to elucidate the relationship of a compound's structure and physicochemical attributes to its biological activity. The resulting structure-activity relationship (SAR) may then act as the basis for the prediction of compounds with improved biological attributes. LBDD methods range from pharmacophore models identifying essential features of ligands responsible for their activity, to quantitative structure-activity relationships (QSAR) yielding quantitative estimates of activities based on physicochemical properties, to similarity searching, which explores compounds with similar properties, as well as various combinations of the above. A number of recent LBDD approaches involve the use of multiple conformations of the ligands being studied. One of the basic components used to generate multiple conformations in LBDD is molecular mechanics (MM), which applies an empirical energy function to relate conformation to energies and forces. The collection of conformations for ligands is then combined with functional data using methods ranging from regression analysis to neural networks, from which the SAR is determined. Accordingly, for effective application of LBDD for SAR determinations it is important that the compounds be accurately modelled such that the appropriate range of conformations accessible to the ligands is identified. Such accurate modelling is largely based on use of the appropriate empirical force field for the molecules being investigated and the approaches used to generate the conformations. The present chapter includes a brief overview of currently used SAR methods in LBDD followed by a more

  13. Vesicular exanthema of swine virus: isolation and serotyping of field samples.

    OpenAIRE

    Edwards, J F; Yedloutschnig, R J; Dardiri, A H; Callis, J. J.

    1987-01-01

    Virus isolation was attempted from 262 field samples of vesicular material collected during the outbreaks of vesicular exanthema of swine in the U.S.A. from 1952-54. Using primary swine kidney culture, viral cytopathogenic agents were isolated from 76.3% of the samples. However, an overall recovery rate of 82.1% was obtained after samples negative in tissue culture were inoculated intradermally in susceptible swine. All vesicular exanthema of swine virus isolates were identified as serotype B...

  14. Sampling and Reconstruction of the Pupil and Electric Field for Phase Retrieval

    Science.gov (United States)

    Dean, Bruce; Smith, Jeffrey; Aronstein, David

    2012-01-01

This technology is based on sampling considerations for a band-limited function, which have application to optical estimation generally, and to phase retrieval specifically. The analysis begins with the observation that the Fourier transform of an optical aperture function (pupil) can be implemented with minimal aliasing for Q values down to Q = 1. The sampling ratio, Q, is defined as the ratio of the sampling frequency to the band-limited cut-off frequency. The analytical results are given using a 1-d aperture function, with the electric field defined by the band-limited sinc(x) function. Perfect reconstruction of the Fourier transform (electric field) is derived using the Whittaker-Shannon sampling theorem for the 1-d field with no aliasing, which has been extended to 2-d optical apertures.
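The Whittaker-Shannon reconstruction itself, x(t) = Σ_n x[n]·sinc((t − nT)/T), can be checked numerically. The tone, rates, and truncation length below are illustrative, not the pupil functions of the paper.

```python
import numpy as np

# Whittaker-Shannon reconstruction from samples:
#   x(t) = sum_n x[n] * sinc((t - n*T) / T)
# Illustrative setup: a 0.2 Hz cosine sampled at 1 Hz, so the signal's
# band-limit sits well below the Nyquist frequency of 0.5 Hz.
T = 1.0
n = np.arange(-100, 101)                 # truncated set of sample indices
samples = np.cos(2 * np.pi * 0.2 * n * T)

def reconstruct(t):
    # np.sinc is the normalized sinc: sin(pi x) / (pi x).
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t_mid = 0.5                              # an inter-sample instant
print(round(reconstruct(t_mid), 3), round(float(np.cos(2 * np.pi * 0.2 * t_mid)), 3))
```

At the sample instants the cardinal series is exact, since the normalized sinc vanishes at every nonzero integer; between samples the only error comes from truncating the infinite sum.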

  15. Non-abelian Gauge Fields from Defects in Spin-Networks

    CERN Document Server

    Vaid, Deepak

    2013-01-01

Effective gauge fields arise in the description of the dynamics of defects in lattices of graphene in condensed matter. The interactions between neighboring nodes of a lattice/spin-network are described by the Hubbard model, whose effective field theory at long distances is given by the Dirac equation for an emergent gauge field. The spin-networks in question can be used to describe the geometry experienced by a non-inertial observer in flat spacetime moving at a constant acceleration in a given direction. We expect such spin-networks to describe the structure of quantum horizons of black holes in loop quantum gravity. We argue that the abelian and non-abelian gauge fields of the Standard Model can be identified with the emergent degrees of freedom required to describe the dynamics of defects in symmetry-reduced spin-networks.

  16. The challenge of social networking in the field of environment and health

    Science.gov (United States)

    2012-01-01

Background The fields of environment and health are both interdisciplinary and trans-disciplinary, and until recently had little engagement in social networking designed to cross disciplinary boundaries. The EU FP6 project HENVINET aimed to establish integrated social network and networking facilities for multiple stakeholders in environment and health. The underlying assumption is that increased social networking across disciplines and sectors will enhance the quality of both problem knowledge and problem solving, by facilitating interactions. Inter- and trans-disciplinary networks are considered useful for this purpose. This does not mean that such networks are easily organized, as openness to such cooperation and exchange is often difficult to ascertain. Methods Different methods may enhance network building. Using a mixed method approach, a diversity of actions were used in order to investigate the main research question: which kind of social networking activities and structures can best support the objective of enhanced inter- and trans-disciplinary cooperation and exchange in the fields of environment and health. HENVINET applied interviews, a role playing session, a personal response system, a stakeholder workshop and a social networking portal as part of the process of building an interdisciplinary and trans-disciplinary network. Results The interviews provided support for the specification of requirements for an interdisciplinary and trans-disciplinary network. The role playing session, the personal response system and the stakeholder workshop were assessed as useful tools in forming such a network, by increasing the awareness by different disciplines of others' positions. The social networking portal was particularly useful in delivering knowledge, but the role of the scientist in social networking is not yet clear. Conclusions The main challenge in the field of environment and health is not so much a lack of scientific problem knowledge, but rather the

  17. The challenge of social networking in the field of environment and health.

    Science.gov (United States)

    van den Hazel, Peter; Keune, Hans; Randall, Scott; Yang, Aileen; Ludlow, David; Bartonova, Alena

    2012-06-28

    The fields of environment and health are both interdisciplinary and trans-disciplinary, and until recently had little engagement in social networking designed to cross disciplinary boundaries. The EU FP6 project HENVINET aimed to establish integrated social network and networking facilities for multiple stakeholders in environment and health. The underlying assumption is that increased social networking across disciplines and sectors will enhance the quality of both problem knowledge and problem solving, by facilitating interactions. Inter- and trans-disciplinary networks are considered useful for this purpose. This does not mean that such networks are easily organized, as openness to such cooperation and exchange is often difficult to ascertain. Different methods may enhance network building. Using a mixed method approach, a diversity of actions was used in order to investigate the main research question: which kind of social networking activities and structures can best support the objective of enhanced inter- and trans-disciplinary cooperation and exchange in the fields of environment and health. HENVINET applied interviews, a role playing session, a personal response system, a stakeholder workshop and a social networking portal as part of the process of building an interdisciplinary and trans-disciplinary network. The interviews provided support for the specification of requirements for an interdisciplinary and trans-disciplinary network. The role playing session, the personal response system and the stakeholder workshop were assessed as useful tools in forming such a network, by increasing the awareness by different disciplines of each others' positions. The social networking portal was particularly useful in delivering knowledge, but the role of the scientist in social networking is not yet clear. 
The main challenge in the field of environment and health is not so much a lack of scientific problem knowledge, but rather the ability to effectively communicate, share

  18. Impacts of Sample Design for Validation Data on the Accuracy of Feedforward Neural Network Classification

    Directory of Open Access Journals (Sweden)

    Giles M. Foody

    2017-08-01

    Full Text Available Validation data are often used to evaluate the performance of a trained neural network and in the selection of a network deemed optimal for the task at hand. Optimality is commonly assessed with a measure, such as overall classification accuracy. The latter is often calculated directly from a confusion matrix showing the counts of cases in the validation set with particular labelling properties. The sample design used to form the validation set can, however, influence the estimated magnitude of the accuracy. Commonly, the validation set is formed with a stratified sample to give balanced classes, but also via random sampling, which reflects class abundance. It is suggested that if the ultimate aim is to accurately classify a dataset in which the classes do vary in abundance, a validation set formed via random, rather than stratified, sampling is preferred. This is illustrated with the classification of simulated and remotely-sensed datasets. With both datasets, statistically significant differences in the accuracy with which the data could be classified arose from the use of validation sets formed via random and stratified sampling (z = 2.7 and 1.9 for the simulated and real datasets, respectively; both p < 0.05). The accuracy of the classifications that used a stratified sample in validation was lower, a result of cases of an abundant class being commissioned into a rarer class. Simple means to address the issue are suggested.
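
The effect described above can be reproduced with a few lines of arithmetic. A minimal sketch, where the class abundances and per-class accuracies are invented for illustration, not taken from the study:

```python
# Illustrative (invented) class abundances and per-class accuracies: the
# deployment data are imbalanced, but a stratified validation set weights
# the classes equally, which shifts the accuracy estimate.
abundance = {"abundant": 0.9, "rare": 0.1}        # true class proportions
per_class_acc = {"abundant": 0.95, "rare": 0.60}  # classifier accuracy by class

# Expected accuracy from a validation set drawn by simple random sampling
# (class weights follow abundance, matching deployment conditions):
acc_random = sum(abundance[c] * per_class_acc[c] for c in abundance)

# Expected accuracy from a class-balanced (stratified) validation set:
acc_stratified = sum(0.5 * per_class_acc[c] for c in per_class_acc)

print(f"random-sample estimate:     {acc_random:.3f}")      # 0.915
print(f"stratified-sample estimate: {acc_stratified:.3f}")  # 0.775
```

With these numbers the stratified estimate understates the accuracy achievable on the abundance-weighted data, which is the direction of effect the paper reports.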

  19. Field test of wireless sensor network in the nuclear environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, L., E-mail: lil@aecl.ca [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada); Wang, Q.; Bari, A. [Univ. of Western Ontario, London, Ontario (Canada); Deng, C.; Chen, D. [Univ. of Electronic Science and Technology of China, Chengdu, Sichuan (China); Jiang, J. [Univ. of Western Ontario, London, Ontario (Canada); Alexander, Q.; Sur, B. [Atomic Energy of Canada Limited, Chalk River, Ontario (Canada)

    2014-06-15

    Wireless sensor networks (WSNs) are appealing options for the health monitoring of nuclear power plants due to their low cost and flexibility. Before they can be used in highly regulated nuclear environments, their reliability in the nuclear environment and compatibility with existing devices have to be assessed. In situ electromagnetic interference tests, wireless signal propagation tests, and nuclear radiation hardness tests conducted on candidate WSN systems at AECL Chalk River Labs are presented. The results are favourable to WSN in nuclear applications. (author)

  20. What Big Data tells: Sampling the social network by communication channels.

    Science.gov (United States)

    Török, János; Murase, Yohsuke; Jo, Hang-Hyun; Kertész, János; Kaski, Kimmo

    2016-11-01

    Big Data has become the primary source of understanding the structure and dynamics of the society at large scale. The network of social interactions can be considered as a multiplex, where each layer corresponds to one communication channel and the aggregate of all of them constitutes the entire social network. However, usually one has information only about one of the channels or even a part of it, which should be considered as a subset or sample of the whole. Here we introduce a model based on a natural bilateral communication channel selection mechanism, which for one channel leads to consistent changes in the network properties. For example, while it is expected that the degree distribution of the whole social network has a maximum at a value larger than one, we get a monotonically decreasing distribution as observed in empirical studies of single-channel data. We also find that assortativity may occur or get strengthened due to the sampling method. We analyze the far-reaching consequences of our findings.
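
As a toy illustration of single-channel sampling (not the authors' channel-selection model), one can build a two-layer multiplex and observe only one layer; every observed degree is then a lower bound on the node's true degree in the aggregate network:

```python
import random

random.seed(7)
n = 200
nodes = range(n)

def random_layer(p):
    """One communication channel modelled as a random undirected edge set."""
    return {(i, j) for i in nodes for j in nodes
            if i < j and random.random() < p}

# Two hypothetical channels; the whole social network is their aggregate.
layer_a = random_layer(0.02)
layer_b = random_layer(0.02)
aggregate = layer_a | layer_b

def degrees(edges):
    deg = {v: 0 for v in nodes}
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return deg

deg_full = degrees(aggregate)      # degrees in the full multiplex
deg_sampled = degrees(layer_a)     # degrees seen through one channel only

mean_full = sum(deg_full.values()) / n
mean_obs = sum(deg_sampled.values()) / n
print(f"observed mean degree {mean_obs:.2f} <= true mean degree {mean_full:.2f}")
```

Sampling a single channel can only remove ties, which is why single-channel degree distributions systematically understate connectivity in the whole multiplex.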

  1. Analysis of a solar collector field water flow network

    Science.gov (United States)

    Rohde, J. E.; Knoll, R. H.

    1976-01-01

    A number of methods are presented for minimizing the water flow variation in the solar collector field for the Solar Building Test Facility at the Langley Research Center. The solar collector field investigated consisted of collector panels connected in parallel between inlet and exit collector manifolds to form 12 rows. The rows were in turn connected in parallel between the main inlet and exit field manifolds to complete the field. The solutions considered included different manifold sizes, manifold area changes, different locations for the manifold inlets and exits, and orifices or flow control valves. Calculations showed that flow variations of less than 5 percent were obtainable both within a row, between solar collector panels, and between rows.

  2. Control Capacity and A Random Sampling Method in Exploring Controllability of Complex Networks

    OpenAIRE

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. Thi...

  3. Sampling Design of Soil Physical Properties in a Conilon Coffee Field

    Directory of Open Access Journals (Sweden)

    Eduardo Oliveira de Jesus Santos

    Full Text Available ABSTRACT Establishing the number of samples required to determine values of soil physical properties ultimately results in optimization of labor and allows better representation of such attributes. The objective of this study was to analyze the spatial variability of soil physical properties in a Conilon coffee field and propose a soil sampling method better attuned to conditions of the management system. The experiment was performed in a Conilon coffee field in Espírito Santo state, Brazil, under a 3.0 × 2.0 × 1.0 m (4,000 plants ha-1) double spacing design. An irregular grid, with dimensions of 107 × 95.7 m and 65 sampling points, was set up. Soil samples were collected from the 0.00-0.20 m depth at each sampling point. Data were analyzed using descriptive statistics and geostatistical methods. Using statistical parameters, the number of samples adequate for analyzing the attributes under study was established, ranging from 1 to 11 sampling points. With the exception of particle density, all soil physical properties showed a spatial dependence structure best fitted to the spherical model. Establishing the number of samples and the spatial variability of soil physical properties may be useful in developing sampling strategies that minimize costs for farmers within a tolerable and predictable level of error.
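
The number of samples needed for a given precision is conventionally obtained from the classical formula n = (t · CV / e)², with CV the coefficient of variation and e the tolerable error about the mean. A minimal sketch with assumed values (the paper's own data are not reproduced here):

```python
import math

def n_required(cv_percent, error_percent, t=1.96):
    """Classical (Cochran-style) sample-size formula n = (t * CV / e)^2,
    where CV is the coefficient of variation (%) and e the tolerable
    error about the mean (%). Input values below are assumptions."""
    return math.ceil((t * cv_percent / error_percent) ** 2)

# A property with CV = 15%, estimated to within 10% of the mean at ~95%:
print(n_required(15, 10))   # 9 sampling points
```

A property twice as variable (CV = 30%) would need roughly four times as many points, which is why the required number ranges so widely across attributes.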

  4. A simplified field protocol for genetic sampling of birds using buccal swabs

    Science.gov (United States)

    Vilstrup, Julia T.; Mullins, Thomas D.; Miller, Mark P.; McDearman, Will; Walters, Jeffrey R.; Haig, Susan M.

    2018-01-01

    DNA sampling is an essential prerequisite for conducting population genetic studies. For many years, blood sampling has been the preferred method for obtaining DNA in birds because of their nucleated red blood cells. Nonetheless, use of buccal swabs has been gaining favor because they are less invasive yet still yield adequate amounts of DNA for amplifying mitochondrial and nuclear markers; however, buccal swab protocols often include steps (e.g., extended air-drying and storage under frozen conditions) not easily adapted to field settings. Furthermore, commercial extraction kits and swabs for buccal sampling can be expensive for large population studies. We therefore developed an efficient, cost-effective, and field-friendly protocol for sampling wild birds after comparing DNA yield among 3 inexpensive buccal swab types (2 with foam tips and 1 with a cotton tip). Extraction and amplification success was high (100% and 97.2% respectively) using inexpensive generic swabs. We found foam-tipped swabs provided higher DNA yields than cotton-tipped swabs. We further determined that omitting a drying step and storing swabs in Longmire buffer increased efficiency in the field while still yielding sufficient amounts of DNA for detailed population genetic studies using mitochondrial and nuclear markers. This new field protocol allows time- and cost-effective DNA sampling of juveniles or small-bodied birds for which drawing blood may cause excessive stress to birds and technicians alike.

  5. Field sampling and selecting on-site analytical methods for explosives in soil

    Energy Technology Data Exchange (ETDEWEB)

    Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.

    1996-12-01

    A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Levels of contamination range from barely detectable to levels above 10% that need special handling because of the detonation potential. Characterization of explosives-contaminated sites is particularly difficult because of the very heterogeneous distribution of contamination in the environment and within samples. To improve site characterization, several options exist including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of the samples, and extracting larger samples. This publication is intended to provide guidance to Remedial Project Managers regarding field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils, and is not intended to include discussions of the safety issues associated with sites contaminated with explosive residues.
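
The variance-reduction argument for compositing can be sketched numerically: averaging k increments into one composite shrinks the sampling variance of a heterogeneous medium roughly by 1/k. The lognormal concentration model below is an assumption for illustration only:

```python
import random
import statistics

random.seed(42)

def increment():
    """One soil increment from a heterogeneous site: an assumed lognormal
    concentration model with occasional hot spots (illustrative only)."""
    return random.lognormvariate(0, 1.5)

def composite(k):
    """Homogenize k increments into a single analysed sample."""
    return sum(increment() for _ in range(k)) / k

singles = [composite(1) for _ in range(4000)]
composites = [composite(9) for _ in range(4000)]

# Averaging 9 increments per composite shrinks sampling variance ~9-fold,
# so fewer analyses achieve the same confidence about the site mean.
print(statistics.variance(singles), ">", statistics.variance(composites))
```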

  6. Data splitting for artificial neural networks using SOM-based stratified sampling.

    Science.gov (United States)

    May, R J; Maier, H R; Dandy, G C

    2010-03-01

    Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
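
Neyman allocation, on which the SOM-based approach builds, assigns sample sizes in proportion to N_h · σ_h for each stratum. A minimal sketch (the strata sizes and spreads are hypothetical, and the SOM clustering step itself is omitted):

```python
def neyman_allocation(n_total, strata):
    """Neyman allocation: stratum h receives n_h proportional to N_h * sigma_h.
    `strata` maps stratum name -> (size N_h, standard deviation sigma_h)."""
    weights = {h: size * sd for h, (size, sd) in strata.items()}
    total = sum(weights.values())
    return {h: round(n_total * w / total) for h, w in weights.items()}

# Hypothetical SOM clusters used as strata (sizes and spreads invented):
strata = {"cluster_1": (500, 2.0), "cluster_2": (300, 8.0), "cluster_3": (200, 1.0)}
print(neyman_allocation(100, strata))   # more variable clusters get more samples
```

The design choice is that high-variance strata contribute more to estimation error, so sampling them more heavily minimizes the variance of the resulting subsets.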

  7. Ensemble of Neural Network Conditional Random Fields for Self-Paced Brain Computer Interfaces

    Directory of Open Access Journals (Sweden)

    Hossein Bashashati

    2017-07-01

    Full Text Available Classification of EEG signals in self-paced Brain Computer Interfaces (BCI is an extremely challenging task. The main difficulty stems from the fact that start time of a control task is not defined. Therefore it is imperative to exploit the characteristics of the EEG data to the extent possible. In sensory motor self-paced BCIs, while performing the mental task, the user’s brain goes through several well-defined internal state changes. Applying appropriate classifiers that can capture these state changes and exploit the temporal correlation in EEG data can enhance the performance of the BCI. In this paper, we propose an ensemble learning approach for self-paced BCIs. We use Bayesian optimization to train several different classifiers on different parts of the BCI hyper-parameter space. We call each of these classifiers Neural Network Conditional Random Field (NNCRF. NNCRF is a combination of a neural network and conditional random field (CRF. As in the standard CRF, NNCRF is able to model the correlation between adjacent EEG samples. However, NNCRF can also model the nonlinear dependencies between the input and the output, which makes it more powerful than the standard CRF. We compare the performance of our algorithm to those of three popular sequence labeling algorithms (Hidden Markov Models, Hidden Markov Support Vector Machines and CRF, and to two classical classifiers (Logistic Regression and Support Vector Machines. The classifiers are compared for the two cases: when the ensemble learning approach is not used and when it is. The data used in our studies are those from the BCI competition IV and the SM2 dataset. We show that our algorithm is considerably superior to the other approaches in terms of the Area Under the Curve (AUC of the BCI system.

  8. Field-effect flow control for microfabricated fluidic networks

    NARCIS (Netherlands)

    Schasfoort, Richardus B.M.; Schlautmann, Stefan; Hendrikse, J.; van den Berg, Albert

    1999-01-01

    The magnitude and direction of the electro-osmotic flow (EOF) inside a microfabricated fluid channel can be controlled by a perpendicular electric field of 1.5 megavolts per centimeter generated by a voltage of only 50 volts. A microdevice called a "flowFET," with functionality comparable to that of

  9. Social networking and individual outcomes beyond the mean field case

    NARCIS (Netherlands)

    Ioannides, Y.M.; Soetevent, A.R.

    2007-01-01

    We study individually optimized continuous outcomes in a dynamic environment in the presence of social interactions, and where the interaction topology may be either exogenous and time varying, or endogenous. The model accommodates more general social effects than those of the mean-field type. We

  10. Using Social Network Analysis to Better Understand Compulsive Exercise Behavior Among a Sample of Sorority Members.

    Science.gov (United States)

    Patterson, Megan S; Goodson, Patricia

    2017-05-01

    Compulsive exercise, a form of unhealthy exercise often associated with prioritizing exercise and feeling guilty when exercise is missed, is a common precursor to and symptom of eating disorders. College-aged women are at high risk of exercising compulsively compared with other groups. Social network analysis (SNA) is a theoretical perspective and methodology allowing researchers to observe the effects of relational dynamics on the behaviors of people. SNA was used to assess the relationship between compulsive exercise and body dissatisfaction, physical activity, and network variables. Descriptive statistics were conducted using SPSS, and quadratic assignment procedure (QAP) analyses were conducted using UCINET. QAP regression analysis revealed a statistically significant model (R² = .375). In our sample, women who are connected to "important" or "powerful" people in their network are likely to have higher compulsive exercise scores. This result provides healthcare practitioners with key target points for intervention within similar groups of women. For scholars researching eating disorders and associated behaviors, this study supports looking into group dynamics and network structure in conjunction with body dissatisfaction and exercise frequency.
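
A QAP test permutes the rows and columns of one matrix with a single node relabelling, preserving its structure, and compares the observed correlation against the permutation distribution. A minimal sketch in plain Python (the tie matrix is hypothetical, and UCINET's exact procedure may differ):

```python
import random

random.seed(1)

def qap_corr(x, y, n):
    """Pearson correlation over the off-diagonal cells of two n x n matrices."""
    xs = [x[i][j] for i in range(n) for j in range(n) if i != j]
    ys = [y[i][j] for i in range(n) for j in range(n) if i != j]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

def qap_test(x, y, n, permutations=500):
    """Permute rows and columns of y with one node relabelling (preserving
    its structure) and compare each permuted correlation with the observed."""
    observed = qap_corr(x, y, n)
    extreme = 0
    for _ in range(permutations):
        perm = list(range(n))
        random.shuffle(perm)
        yp = [[y[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        if abs(qap_corr(x, yp, n)) >= abs(observed):
            extreme += 1
    return observed, extreme / permutations

# Hypothetical 5-actor ties; correlating a network with itself must give r = 1
# and a small permutation p-value:
ties = [[0, 1, 1, 0, 0],
        [1, 0, 1, 0, 0],
        [1, 1, 0, 1, 0],
        [0, 0, 1, 0, 1],
        [0, 0, 0, 1, 0]]
obs_r, p_val = qap_test(ties, ties, 5)
print(round(obs_r, 3), p_val)
```

Permuting whole rows and columns together (rather than shuffling cells independently) is what lets QAP respect the non-independence of dyadic observations.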

  11. Sampling mosquitoes with CDC light trap in rice field and plantation ...

    African Journals Online (AJOL)

    Mosquito species were sampled to determine the mosquito composition and abundance in rice field and plantation communities in Ogun State Nigeria. Mosquitoes were caught once weekly from four selected houses in each of the two communities by means of CDC light traps. A total of 47,501 mosquitoes representing ...

  12. Lensless coherent imaging by sampling of the optical field with digital micromirror device

    NARCIS (Netherlands)

    Vdovine, G.V.; Gong, H.; Soloviev, O.A.; Pozzi, P.; Verhaegen, M.H.G.

    2015-01-01

    We have experimentally demonstrated a lensless coherent microscope based on direct registration of the complex optical field by sampling the pupil with a sequence of two-point interferometers formed by a digital micromirror device. Complete registration of the complex amplitude in the pupil of the

  13. A spruce budworm sampling program for HUSKY HUNTER field data recorders.

    Science.gov (United States)

    Fred H. Schmidt

    1992-01-01

    A program for receiving sampling data for all immature stages of the western spruce budworm (Choristoneura occidentalis Freeman) is described. Versions were designed to be used on field data recorders with either CP/M or DOS operating systems, such as the HUSKY HUNTER (Models 1, 2, and 16), but they also may be used on personal computers with compatible operating...

  14. Minimal BRDF Sampling for Two-Shot Near-Field Reflectance Acquisition

    DEFF Research Database (Denmark)

    Xu, Zexiang; Nielsen, Jannik Boll; Yu, Jiyang

    2016-01-01

    We develop a method to acquire the BRDF of a homogeneous flat sample from only two images, taken by a near-field perspective camera, and lit by a directional light source. Our method uses the MERL BRDF database to determine the optimal set of lightview pairs for data-driven reflectance acquisition...

  15. Bending stress- and magnetic field-dependence of Ic in JFCA-RRT samples

    Science.gov (United States)

    Noto, K.; Fujine, Y.; Sato, T.; Shirato, S.; Nagasawa, Y.; Kikegawa, T.; Watanabe, K.; Kimura, Y.; Kaneko, T.; Kimura, A.

    2002-10-01

    Japan Fine Ceramics Association has carried out a round robin test (RRT) on the bending strain (εb) dependence of the critical current Ic at 77 K in three kinds of Bi(2223)/Ag tape samples (VAM-1, JFC-1, JFC-2; three samples each) for future standardization. We measured Ic(εb) for εb = 0-1.0% as one of the RRT participants and also measured the magnetic field dependence of Ic under the several bending strains mentioned above as optional measurements. As a result, we found a very fast decrease of Ic in low fields up to 0.5 T and then a gradual decrease up to 1.5-2.0 T. Ic maintains 0.9-0.95 of its initial value up to εb = 0.4% strain and then decreases a little faster, down to 0.60-0.65 at εb = 1.0%, for almost all samples and magnetic fields. The normalized pinning force Fp/Fp,max shows scaling according to the expression Fp/Fp,max ∝ (B/Birr)(1 − B/Birr)³ for all samples and bending strains, where Birr is the irreversibility field.
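
The reported scaling law implies a universal peak position: differentiating b(1 − b)³ with b = B/Birr gives a maximum at b = 1/4, which a quick numerical check confirms:

```python
# The scaling expression Fp/Fp,max ∝ b * (1 - b)**3, with b = B/Birr,
# peaks where its derivative vanishes: (1 - b)**3 - 3b(1 - b)**2 = 0 => b = 1/4.
def pinning_shape(b):
    return b * (1 - b) ** 3

bs = [i / 1000 for i in range(1001)]
b_peak = max(bs, key=pinning_shape)
print(b_peak)   # 0.25, i.e. the pinning-force maximum sits at B = Birr / 4
```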

  16. A recurrent neural network for classification of unevenly sampled variable stars

    Science.gov (United States)

    Naul, Brett; Bloom, Joshua S.; Pérez, Fernando; van der Walt, Stéfan

    2018-02-01

    Astronomical surveys of celestial sources produce streams of noisy time series measuring flux versus time (`light curves'). Unlike in many other physical domains, however, large (and source-specific) temporal gaps in data arise naturally due to intranight cadence choices as well as diurnal and seasonal constraints1-5. With nightly observations of millions of variable stars and transients from upcoming surveys4,6, efficient and accurate discovery and classification techniques on noisy, irregularly sampled data must be employed with minimal human-in-the-loop involvement. Machine learning for inference tasks on such data traditionally requires the laborious hand-coding of domain-specific numerical summaries of raw data (`features')7. Here, we present a novel unsupervised autoencoding recurrent neural network8 that makes explicit use of sampling times and known heteroskedastic noise properties. When trained on optical variable star catalogues, this network produces supervised classification models that rival other best-in-class approaches. We find that autoencoded features learned in one time-domain survey perform nearly as well when applied to another survey. These networks can continue to learn from new unlabelled observations and may be used in other unsupervised tasks, such as forecasting and anomaly detection.

  17. [Diagnostic yield of paediatric respiratory samples in the Balearic Islands Sentinel Influenza Surveillance Network].

    Science.gov (United States)

    Reina, J; Nicolau, A; Galmes, A; Arbona, B

    2009-05-01

    Influenza disease is subjected to surveillance by national sentinel networks (RC) that predict the epidemic behaviour by reporting clinical and virological data. The aim was to evaluate the effectiveness of the paediatric respiratory samples in the Balearic Islands RC over the last five epidemic seasons. A respiratory sample was taken from paediatric patients in the RC who had flu symptoms. The samples were inoculated in the MDCK cell line. We reviewed the epidemiological data of patients with a culture positive for influenza A and B. A total of 338 pharyngeal swabs from the RC were analysed during the study period. Of these, 65 (19.3%) belonged to patients <14 years old, and 44.6% of these samples were positive, as opposed to 39.1% of adult respiratory samples. The influenza A virus was isolated in 24 paediatric samples (82.7%) and the influenza B virus in 5 (17.3%). The mean age of the positive paediatric patients in the RC was 8.5 years. Only 3 positive patients (10.3%) were in the 0-4 year old group, versus 26 (89.7%) in the 5-14 year old group. Although paediatricians represented only 22% of the RC doctors and obtained 19.3% of all respiratory samples, the positivity rate and effectiveness of these samples were higher than those obtained in the adult population.

  18. Potential, velocity, and density fields from sparse and noisy redshift-distance samples - Method

    Science.gov (United States)

    Dekel, Avishai; Bertschinger, Edmund; Faber, Sandra M.

    1990-01-01

    A method for recovering the three-dimensional potential, velocity, and density fields from large-scale redshift-distance samples is described. Galaxies are taken as tracers of the velocity field, not of the mass. The density field and the initial conditions are calculated using an iterative procedure that applies the no-vorticity assumption at an initial time and uses the Zel'dovich approximation to relate initial and final positions of particles on a grid. The method is tested using a cosmological N-body simulation 'observed' at the positions of real galaxies in a redshift-distance sample, taking into account their distance measurement errors. Malmquist bias and other systematic and statistical errors are extensively explored using both analytical techniques and Monte Carlo simulations.

  19. Sampling the sound field in auditoria using large natural-scale array measurements.

    Science.gov (United States)

    Witew, Ingo B; Vorländer, Michael; Xiang, Ning

    2017-03-01

    Suitable data for spatial wave field analyses in concert halls need to satisfy the sampling theorem and hence requires densely spaced measurement positions over extended regions. The described measurement apparatus is capable of automatically sampling the sound field in auditoria over a surface of 5.30 m × 8.00 m to any appointed resolutions. In addition to discussing design features, a case study based on measured impulse responses is presented. The experimental data allow wave field animations demonstrating how sound propagating at grazing incidence over theater seating is scattered from rows of chairs (seat-dip effect). The visualized data of reflections and scattering from an auditorium's boundaries give insights and opportunities for advanced analyses.
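
The density requirement follows from the spatial sampling theorem: measurement positions must be spaced no farther apart than half the shortest wavelength of interest. A small sketch (the 1 kHz upper frequency is an assumed example, not a figure from the study):

```python
def max_spacing(f_max_hz, c=343.0):
    """Spatial Nyquist bound: measurement spacing (m) must not exceed half the
    shortest wavelength, d <= c / (2 * f_max). c is the speed of sound in air."""
    return c / (2.0 * f_max_hz)

# Resolving the wave field up to an assumed 1 kHz (covering the seat-dip region):
print(round(max_spacing(1000.0), 4))   # 0.1715 m between measurement positions
```

At that spacing the 5.30 m × 8.00 m measurement surface already requires on the order of a thousand positions, which is why an automated apparatus is needed.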

  20. Separation and characterization of nanoparticles in complex food and environmental samples by field-flow fractionation

    DEFF Research Database (Denmark)

    Kammer, Frank von der; Legros, Samuel; Hofmann, Thilo

    2011-01-01

    ...sample preparation, field-flow fractionation (FFF) is one of the most promising techniques to achieve relevant characterization. The objective of this review is to present the current status of FFF as an analytical separation technique for the study of NPs in complex food and environmental samples. FFF has been applied for separation of various types of NP (e.g., organic macromolecules, and carbonaceous or inorganic NPs) in different types of media (e.g., natural waters, soil extracts or food samples). FFF can be coupled to different types of detectors that offer additional information... constituents in the samples require contradictory separation conditions. The potential of FFF analysis should always be evaluated bearing in mind the impact of the necessary sample preparation, the information that can be retrieved from the chosen detection systems and the influence of the chosen separation...

  1. Working without accumulation membrane in flow field-flow fractionation. Effect of sample loading on retention.

    Science.gov (United States)

    Melucci, Dora; Zattoni, Andrea; Casolari, Sonia; Reggiani, Matteo; Sanz, Ramses; Reschiglian, Pierluigi; Torsi, Giancarlo

    2004-03-01

    Membraneless hyperlayer flow field-flow fractionation (Hyp FlFFF) has shown improved performance with respect to Hyp FlFFF with a membrane. The conditions for high recovery, and for recovery independent of sample loading, in membraneless Hyp FlFFF have been determined previously. The effect of sample loading should also be investigated in order to optimize peak shape for real samples. The effect of sample loading on peak retention parameters is of prime importance in applications such as the conversion of peaks into particle size distributions. In this paper, systematic experimental work is performed in order to study the effect of sample loading on retention parameters. A procedure to regenerate the frit operating as the accumulation wall is described. High reproducibility is obtained with low system conditioning time.

  2. Direct Contact Sorptive Extraction: A Robust Method for Sampling Plant Volatiles in the Field.

    Science.gov (United States)

    Kfoury, Nicole; Scott, Eric; Orians, Colin; Robbat, Albert

    2017-09-27

    Plants produce volatile organic compounds (VOCs) with diverse structures and functions, which change in response to environmental stimuli and have important consequences for interactions with other organisms. To understand these changes, in situ sampling is necessary. In contrast to dynamic headspace (DHS), which is the most often employed method, direct contact sampling employing a magnetic stir bar held in place by a magnet eliminates artifacts produced by enclosing plant materials in glass or plastic chambers. Direct-contact sorptive extraction (DCSE) using polydimethylsiloxane-coated stir bars (Twisters) is more sensitive than DHS, captures a wider range of compounds, minimizes VOC collection from neighboring plants, and distinguishes the effects of herbivory in controlled and field conditions. Because DCSE is relatively inexpensive and simple to employ, scalability of field trials can be expanded concomitant with increased sample replication. The sensitivity of DCSE combined with spectral deconvolution data analysis software makes the two ideal for comprehensive, in situ profiling of plant volatiles.

  3. Control capacity and a random sampling method in exploring controllability of complex networks.

    Science.gov (United States)

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the possibility of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDS's, our findings offer tools to explore control in various complex systems.
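
In structural controllability, driver nodes are the nodes whose "in-copy" is left unmatched in a maximum matching of the directed network, and control capacity can be estimated by sampling many matchings. The sketch below samples matchings by shuffling visit order; it illustrates the idea rather than reproducing the authors' exact algorithm:

```python
import random

random.seed(3)

def maximum_matching(n, edges, order):
    """Kuhn's augmenting-path maximum matching between the out-copies and
    in-copies of a directed graph's nodes (structural controllability)."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    match_in = {}                          # in-copy -> matched out-copy

    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_in or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False

    for u in order:
        augment(u, set())
    return match_in

def control_capacity(n, edges, samples=300):
    """Estimate how often each node is unmatched (a driver node) by sampling
    maximum matchings found under random visit orders."""
    counts = [0] * n
    for _ in range(samples):
        order = list(range(n))
        random.shuffle(order)
        shuffled = edges[:]
        random.shuffle(shuffled)           # vary which maximum matching is found
        matched = maximum_matching(n, shuffled, order)
        for v in range(n):
            if v not in matched:
                counts[v] += 1
    return [c / samples for c in counts]

# Directed star 0 -> {1, 2, 3}: node 0 (no incoming edge) is always a driver,
# and exactly one leaf per matching is controlled through it.
caps = control_capacity(4, [(0, 1), (0, 2), (0, 3)])
print(caps[0])   # 1.0
```

In this toy network the hub is a driver in every configuration, while each leaf is a driver in roughly two thirds of the sampled matchings.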

  4. Distribution of AC Contact Network Electric Field Strength

    Directory of Open Access Journals (Sweden)

    Antonio Andonov

    2004-01-01

    Full Text Available Ensuring rolling stock electromagnetic compatibility is a serious problem in the contemporary development of railway transport and the implementation of communication lines. The AC contact system is one of the main items of equipment of electrified railway transport, implementing the electrical connection between the traction substations and the rolling stock. It is, however, also one of the main sources of interference, owing to its strong electromagnetic field. The paper presents the distribution of electric field strength of the contact system.

  5. A Clustering Protocol for Wireless Sensor Networks Based on Energy Potential Field

    Directory of Open Access Journals (Sweden)

    Zuo Chen

    2013-01-01

    Full Text Available How to prolong the lifetime of a wireless sensor network is a core research issue. This paper presents a clustering protocol, LEACH-PF, a multihop routing algorithm based on an energy potential field over divided clusters. In LEACH-PF, the network is divided into a number of subnetworks, each with a cluster head. The cluster heads construct an intercluster routing tree according to the potential difference between equipotential fields. The other member nodes of each subnetwork communicate with their cluster head directly, so as to complete regional coverage. Simulation results show that LEACH-PF reduces the energy consumption of the network effectively and prolongs the network lifetime.
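    As a toy illustration of potential-field intercluster routing, each cluster head can forward towards the in-range head with the lowest potential; since potential strictly decreases along every hop, the resulting parent pointers always form a loop-free tree rooted at the sink. The potential function below (distance to sink divided by residual energy) is an assumption for illustration only; the abstract does not specify LEACH-PF's actual formula:

```python
import math

def build_routing_tree(heads, sink, energy, comm_range):
    """Toy potential-field routing: each cluster head forwards to the
    in-range cluster head with the lowest potential, or directly to the
    sink. Potential = distance-to-sink / residual-energy (illustrative
    formula, not the one from the paper)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def potential(h):
        return dist(heads[h], sink) / max(energy[h], 1e-9)

    parent = {}
    for h in heads:
        lower = [k for k in heads
                 if k != h
                 and dist(heads[h], heads[k]) <= comm_range
                 and potential(k) < potential(h)]
        if dist(heads[h], sink) <= comm_range or not lower:
            parent[h] = 'sink'   # close enough, or no downhill neighbor
        else:
            parent[h] = min(lower, key=potential)
    return parent
```

    Because every hop goes strictly downhill in potential, packets from any cluster head reach the sink without loops.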

  6. A field evaluation of a satellite microwave rainfall sensor network

    Science.gov (United States)

    Caridi, Andrea; Caviglia, Daniele D.; Colli, Matteo; Delucchi, Alessandro; Federici, Bianca; Lanza, Luca G.; Pastorino, Matteo; Randazzo, Andrea; Sguerso, Domenico

    2017-04-01

    An innovative environmental monitoring system, the Smart Rainfall System (SRS), which estimates rainfall in real time by analyzing the attenuation of satellite signals (DVB-S in the microwave Ku band), is presented. The system consists of a set of peripheral microwave sensors placed in the field of interest and connected to a central processing and analysis node. It has been developed jointly by the DITEN and DICCA departments of the University of Genoa and the Genoese SME Darts Engineering Srl. This work discusses the accuracy and sensitivity of the SRS rainfall intensity measurements, based on preliminary results from a field comparison experiment at the urban scale. The test-bed is composed of a set of preliminary measurement sites established from autumn 2016 in the Genoa (Italy) municipality, and the data collected by the sensors during a selection of rainfall events are studied. The availability of point-scale rainfall intensity measurements made by traditional tipping-bucket rain gauges and of radar areal observations allows a comparative analysis of the SRS performance. The calibration of the reference rain gauges has been carried out at the laboratories of DICCA using a rainfall simulator, and the measurements have been processed with advanced algorithms to reduce counting errors. The experimental set-up allows a fine tuning of the retrieval algorithm and a full characterization of the accuracy of the rainfall intensity estimates from the microwave signal attenuation as a function of different precipitation regimes.
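    The core retrieval step in such systems is the conversion of excess signal attenuation into rain rate, commonly via the power law gamma = k * R**alpha between specific attenuation (dB/km) and rain intensity (mm/h). The SRS calibration is not given in the abstract, so the k and alpha defaults below are rough Ku-band placeholders for illustration:

```python
def rain_rate_from_attenuation(excess_atten_db, path_km, k=0.0609, alpha=1.10):
    """Invert the power law gamma = k * R**alpha linking specific attenuation
    gamma (dB/km) to rain rate R (mm/h). k and alpha are frequency- and
    polarization-dependent; the values here are rough Ku-band placeholders,
    not the SRS calibration."""
    if path_km <= 0:
        raise ValueError("path length must be positive")
    gamma = excess_atten_db / path_km   # specific attenuation along the wet path
    if gamma <= 0:
        return 0.0
    return (gamma / k) ** (1.0 / alpha)
```

    In practice the effective wet path length along the satellite slant path, and the dry-weather baseline used to compute the excess attenuation, are themselves calibration problems.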

  7. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
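    The iterative selection of the most environmentally dissimilar site can be mimicked, without a full MaxEnt fit, by a greedy search in standardized environmental space. The sketch below is a simplified stand-in for the paper's MaxEnt-based procedure, not the actual NEON workflow:

```python
import numpy as np

def select_dissimilar_sites(env, n_sites, seed=0):
    """Greedy stand-in for the iterative MaxEnt step: standardize the
    environmental variables, then repeatedly pick the candidate cell that
    is farthest (in minimum distance) from all sites chosen so far.
    env: (n_cells, n_vars) array of environmental values per candidate."""
    z = (env - env.mean(axis=0)) / env.std(axis=0)
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(z)))]        # arbitrary first site
    for _ in range(n_sites - 1):
        # distance from every cell to its nearest already-chosen site
        d = np.min(np.linalg.norm(z[:, None, :] - z[chosen][None], axis=2), axis=1)
        chosen.append(int(np.argmax(d)))        # most dissimilar remaining cell
    return chosen
```

    Like the MaxEnt procedure, this spreads sites across the environmental envelope rather than across geographic space.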

  8. Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Paul H. Evangelista

    2011-05-01

    Full Text Available Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  9. A biophysical observation model for field potentials of networks of leaky integrate-and-fire neurons

    Directory of Open Access Journals (Sweden)

    Peter beim Graben

    2013-01-01

    Full Text Available We present a biophysical approach for the coupling of neural network activity as resulting from proper dipole currents of cortical pyramidal neurons to the electric field in extracellular fluid. Starting from a reduced three-compartment model of a single pyramidal neuron, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. This work substantiates the widespread dipole assumption that is motivated by the "open-field" configuration of the dendritic field potential around cortical pyramidal cells. Our reduced three-compartment scheme allows us to derive networks of leaky integrate-and-fire models, which facilitates comparison with existing neural network and observation models. In particular, by means of numerical simulations we compare our approach with an ad hoc model by Mazzoni et al. [Mazzoni, A., S. Panzeri, N. K. Logothetis, and N. Brunel (2008). Encoding of naturalistic stimuli by local field potential spectra in networks of excitatory and inhibitory neurons. PLoS Computational Biology, 4(12), e1000239], and conclude that our biophysically motivated approach yields substantial improvement.

  10. AN/VRC 118 Mid-Tier Networking Vehicular Radio (MNVR) and Joint Enterprise Network Manager (JENM) Early Fielding Report

    Science.gov (United States)

    2017-01-18

    requirements. The Army intends to conduct the MNVR Initial Operational Test and Evaluation (IOT&E) with the new radio in FY21 to support a fielding decision...retransmission vehicles requires the battalion to provide security, which reduces the unit’s available combat power. During the 2015 MNVR LUT...the 1st Battalion, 6th Infantry diverted up to 10 percent of its combat power to provide security for the mid-tier network retransmission vehicles

  11. An Evaluation of Plotless Sampling Using Vegetation Simulations and Field Data from a Mangrove Forest.

    Directory of Open Access Journals (Sweden)

    Renske Hijbeek

    Full Text Available In vegetation science and forest management, tree density is often used as a variable. To determine the value of this variable, reliable field methods are necessary. When vegetation is sparse or not easily accessible, the use of sample plots is not feasible in the field. Therefore, plotless methods, like the Point Centred Quarter Method, are often used as an alternative. In this study we investigate the accuracy of different plotless sampling methods. To this end, tree densities of a mangrove forest were determined and compared with estimates provided by several plotless methods. None of these methods proved accurate across all field sites with mean underestimations up to 97% and mean overestimations up to 53% in the field. Applying the methods to different vegetation patterns shows that when random spatial distributions were used the true density was included within the 95% confidence limits of all the plotless methods tested. It was also found that, besides aggregation and regularity, density trends often found in mangroves contribute to the unreliability. This outcome raises questions about the use of plotless sampling in forest monitoring and management, as well as for estimates of density-based carbon sequestration. We give recommendations to minimize errors in vegetation surveys and recommendations for further in-depth research.

  12. Exact mean field dynamics for epidemic-like processes on heterogeneous networks

    CERN Document Server

    Lucas, Andrew

    2012-01-01

    We show that the mean field equations for the SIR epidemic can be exactly solved for a network with arbitrary degree distribution. Our exact solution consists of reducing the dynamics to a lone first order differential equation, which has a solution in terms of an integral over functions dependent on the degree distribution of the network, and reconstructing all mean field functions of interest from this integral. Irreversibility of the SIR epidemic is crucial for the solution. We also find exact solutions to the sexually transmitted disease SI epidemic on bipartite graphs, to a simplified rumor spreading model, and to a new model for recommendation spreading, via similar techniques. Numerical simulations of these processes on scale free networks demonstrate the qualitative validity of mean field theory in most regimes.
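    A numerical companion to this record is the well-known edge-based mean-field reduction of network SIR to a single first-order ODE (Miller's formulation for a configuration-model network with degree generating function psi); the paper's closed-form integral solution solves an equivalent reduction exactly, whereas the sketch below simply integrates it:

```python
import numpy as np

def sir_mean_field(psi, dpsi, beta, gamma, t_max=40.0, dt=0.001, eps=1e-4):
    """Integrate the single-ODE, edge-based mean-field reduction of the
    network SIR model on a configuration-model graph with degree
    generating function psi:
        dtheta/dt = -beta*theta + beta*dpsi(theta)/dpsi(1) + gamma*(1 - theta)
    with S(t) = psi(theta(t)). (Numerical companion; the paper derives a
    closed-form integral solution of an equivalent reduction.)"""
    theta = 1.0 - eps                 # tiny initial infection seed
    thetas, S = [], []
    for _ in range(int(t_max / dt)):
        thetas.append(theta)
        S.append(psi(theta))
        theta += dt * (-beta * theta + beta * dpsi(theta) / dpsi(1.0)
                       + gamma * (1.0 - theta))
    return np.array(thetas), np.array(S)
```

    For a z-regular random graph, psi(x) = x**z, and above the epidemic threshold the susceptible fraction S(t) decays monotonically to the final-size fixed point.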

  13. Rock property estimates using multiple seismic attributes and neural networks; Pegasus Field, West Texas

    Energy Technology Data Exchange (ETDEWEB)

    Schuelke, J.S.; Quirein, J.A.; Sarg, J.F.

    1998-12-31

    This case study shows the benefit of using multiple seismic trace attributes and the pattern recognition capabilities of neural networks to predict reservoir architecture and porosity distribution in the Pegasus Field, West Texas. The study used the power of neural networks to integrate geologic, borehole and seismic data. The improvements of the new neural network approach over the more traditional method of seismic trace inversion for porosity estimation are illustrated. Comprehensive statistical methods and interpretational/subjective measures are used in the prediction of porosity from seismic attributes. A 3-D volume of seismically derived porosity estimates for the Devonian reservoir provides a very detailed estimate of porosity, both spatially and vertically, for the field. The additional reservoir porosity detail provided between well control points allows for optimal placement of horizontal wells and improved field development. 6 refs., 2 figs.
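    The attribute-to-porosity mapping can be schematized by a small feed-forward regressor; the study's actual network architecture, attribute set and training data are not given in this summary, so everything below is an illustrative stand-in trained on synthetic data:

```python
import numpy as np

def train_attribute_net(X, y, hidden=8, lr=0.1, epochs=3000, seed=0):
    """One-hidden-layer network mapping seismic trace attributes to porosity
    (schematic only; not the study's architecture). Plain full-batch
    gradient descent on mean-squared error."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                 # hidden activations
        err = (h @ W2 + b2).ravel() - y          # prediction error
        gpred = (2.0 / len(y)) * err[:, None]    # dLoss/dPred
        gW2 = h.T @ gpred;  gb2 = gpred.sum(0)
        gh = (gpred @ W2.T) * (1.0 - h ** 2)     # backprop through tanh
        gW1 = X.T @ gh;     gb1 = gh.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xn: (np.tanh(Xn @ W1 + b1) @ W2 + b2).ravel()
```

    Trained on attributes extracted at well locations, such a regressor can then be evaluated at every trace to produce a porosity volume between the wells.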

  14. Fractionated dynamic headspace sampling in the analysis of matrices of vegetable origin in the food field.

    Science.gov (United States)

    Liberto, Erica; Cagliero, Cecilia; Cordero, Chiara; Rubiolo, Patrizia; Bicchi, Carlo; Sgorbini, Barbara

    2017-03-17

    Recent technological advances in dynamic headspace sampling (D-HS) and the possibility to automate this sampling method have led to a marked improvement in its performance, a strong renewal of interest in it, and an extension of its fields of application. The introduction of in-parallel and in-series automatic multi-sampling and of new trapping materials, plus the possibility to design an effective sampling process by correctly applying breakthrough volume theory, have made profiling more representative and have enhanced selectivity and flexibility, also offering the possibility of fractionated enrichment, in particular for high-volatility compounds. This study deals with the ability of fractionated D-HS to produce a sample representative of the volatile fraction of solid or liquid matrices. Experiments were carried out on a model equimolar (0.5 mM) EtOH/water solution, comprising 16 compounds with different polarities and volatilities, structures ranging from C5 to C15 and vapor pressures from 4.15 kPa (2,3-pentanedione) to 0.004 kPa (t-β-caryophyllene), and on an Arabica roasted coffee powder. Three trapping materials were considered: Tenax TA™ (TX), polydimethylsiloxane foam (PDMS), and a triple-bed carbon cartridge, Carbopack B/Carbopack C/Carbosieve S-III™ (CBS). The influence of several parameters on the design of successful fractionated D-HS sampling, including the physical and chemical characteristics of analytes and matrix, trapping material, analyte breakthrough, purge gas volumes, and sampling temperature, was investigated. The results show that, by appropriately choosing sampling conditions, fractionated D-HS sampling, based on component volatility, can produce a fast and representative profile of the matrix volatile fraction, with total recoveries comparable to those obtained by full-evaporation D-HS for liquid samples, and very high concentration factors for solid samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Field Trial of 40 Gb/s Optical Transport Network using Open WDM Interfaces

    DEFF Research Database (Denmark)

    Fagertun, Anna Manolova; Ruepp, Sarah Renée; Petersen, Martin Nordal

    2013-01-01

    An experimental field-trial deployment of a 40 Gb/s open WDM interface in an operational network is presented, in a cross-carrier interconnection scenario. Practical challenges of integration and performance measures for both native and alien channels are outlined.

  16. Field portable mobile phone based fluorescence microscopy for detection of Giardia lamblia cysts in water samples

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Gorocs, Zoltan; McLeod, Euan; Tseng, Derek; Ozcan, Aydogan

    2015-03-01

    Giardia lamblia is a waterborne parasite that causes an intestinal infection, known as giardiasis, and it is found not only in countries with inadequate sanitation and unsafe water but also in streams and lakes of developed countries. Simple, sensitive, and rapid detection of this pathogen is important for the monitoring of drinking water. Here we present a cost-effective and field-portable mobile-phone based fluorescence microscopy platform designed for automated detection of Giardia lamblia cysts in large-volume water samples (i.e., 10 ml) to be used in low-resource field settings. This fluorescence microscope is integrated with a disposable water-sampling cassette, which is based on a flow-through porous polycarbonate membrane and provides a wide surface area for fluorescence imaging and enumeration of the captured Giardia cysts on the membrane. The water sample of interest, containing fluorescently labeled Giardia cysts, is introduced into the absorbent pads that are in contact with the membrane in the cassette by capillary action, which eliminates the need for electrically driven flow for sample processing. Our fluorescence microscope weighs ~170 grams in total and has all the components of a regular microscope, capable of detecting individual fluorescently labeled cysts under light-emitting-diode (LED) based excitation. Including all the sample preparation, labeling and imaging steps, the entire measurement takes less than one hour for a sample volume of 10 ml. This mobile-phone based compact and cost-effective fluorescent imaging platform, together with its machine learning based cyst counting interface, is easy to use and can work even in resource-limited field settings for spatio-temporal monitoring of water quality.

  17. Porosity, permeability and 3D fracture network characterisation of dolomite reservoir rock samples.

    Science.gov (United States)

    Voorn, Maarten; Exner, Ulrike; Barnhoorn, Auke; Baud, Patrick; Reuschlé, Thierry

    2015-03-01

    With fractured rocks making up an important part of hydrocarbon reservoirs worldwide, detailed analysis of fractures and fracture networks is essential. However, common analyses of drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative of full fracture networks. In this paper, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-Computed Tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. We process the 3D μCT data in this study by a Hessian-based fracture filtering routine and can successfully extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. Additionally, thin sections made from selected plug samples provide 2D information with a much higher detail than the μCT data. Finally, gas- and water-permeability measurements under confining pressure provide an important link (at least in order of magnitude) towards more realistic reservoir conditions. This study shows that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that although there are limitations, several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and

  18. Using X-ray imaging for monitoring the development of the macropore network in a soil sample exposed to natural boundary conditions

    Science.gov (United States)

    Koestel, John

    2015-04-01

    Soil macrostructure is not static but continuously modified by climatic and biological factors. Knowledge of how a macropore network evolves in an individual soil sample is however scarce because it is difficult to collect respective time-lapse data in the field. In this study I investigated whether it is reasonable to use X-ray imaging to monitor the macropore network development in a small topsoil column (10 cm high, 6.8 cm diameter) that is periodically removed from the field, X-rayed and subsequently installed back in the field. Apart from quantifying the structural changes of the macropore network in this soil sample, I investigated whether earthworms entered the soil column and whether roots grew beyond the lower bottom of the column into the subsoil. The soil was sampled from a freshly hand-ploughed allotment near Uppsala (Sweden) in the beginning of June 2013. Rucola (eruca vesicaria) was sown on the top of the column and in its vicinity. When the soil column was for the first time removed from the field and scanned in October 2013, it contained four new earthworm burrows. Root growth into the subsoil was largely absent. Over winter, in May 2014, no further earthworm burrows had formed. Instead, the macrostructure had started to disintegrate somewhat. No crop was sown in the 2014 vegetation period and the soil sample was left unploughed. In October 2014, the column contained again new earthworm burrows. Furthermore, a dandelion had established on the soil column together with some grasses. Several roots had now connected the soil column with the subsoil. The study shows that X-ray tomography offers a promising opportunity for investigating soil structure evolution, even though it cannot be directly installed in the field.

  19. Accurate segmentation of lung fields on chest radiographs using deep convolutional networks

    Science.gov (United States)

    Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory

    2017-02-01

    Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape and texture of lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis in which the lung field segmentation is a significant primary step. Although many methods have been proposed for this problem, lung field segmentation remains as a challenge. In recent years, deep learning has shown state of the art performance in many visual tasks such as object detection, image classification and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation. The suggested framework outperforms state of the art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
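    The reported figure of merit is intersection over union (IOU) between the predicted and manual masks; for binary masks it reduces to two logical operations:

```python
import numpy as np

def iou(pred, target):
    """Intersection over union between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```

    An IOU of 0.96, as reported for the multi-scale CNN, means the predicted and manual lung fields overlap in 96% of their combined area.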

  20. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm2 patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  1. Detection of Escherichia coli in biofilms from pipe samples and coupons in drinking water distribution networks.

    Science.gov (United States)

    Juhna, T; Birzniece, D; Larsson, S; Zulenkovs, D; Sharipo, A; Azevedo, N F; Ménard-Szczebara, F; Castagnet, S; Féliers, C; Keevil, C W

    2007-11-01

    Fluorescence in situ hybridization (FISH) was used for direct detection of Escherichia coli on pipe surfaces and coupons in drinking water distribution networks. Old cast iron main pipes were removed from water distribution networks in France, England, Portugal, and Latvia, and E. coli was analyzed in the biofilm. In addition, 44 flat coupons made of cast iron, polyvinyl chloride, or stainless steel were placed into and continuously exposed to water at 15 locations of 6 distribution networks in France and Latvia and examined after 1 to 6 months of exposure to the drinking water. In order to increase the signal intensity, a peptide nucleic acid (PNA) 15-mer probe was used in the FISH screening for the presence or absence of E. coli on the surface of pipes and coupons, thus reducing occasional problems of autofluorescence and low fluorescence of the labeled bacteria. For comparison, cells were removed from the surfaces and examined with culture-based or enzymatic (detection of beta-d-glucuronidase) methods. An additional verification was made by using PCR. The culture method indicated the presence of E. coli in one of five pipes, whereas all pipes were positive with the FISH methods. E. coli was detected in 56% of the coupons using PNA FISH, but no E. coli was detected using culture or enzymatic methods. PCR analyses confirmed the presence of E. coli in samples that were negative according to culture-based and enzymatic methods. The viability of E. coli cells in the samples was demonstrated by the cell elongation after resuscitation in low-nutrient medium supplemented with pipemidic acid, suggesting that the cells were present in an active but nonculturable state, unable to grow on agar media. E. coli contributed to ca. 0.001 to 0.1% of the total bacterial number in the samples. The presence and number of E. coli did not correlate with any of the physical and/or chemical characteristics of the drinking water (e.g., temperature, chlorine, or biodegradable organic matter concentration).

  2. Whole-field fluorescence microscope with digital micromirror device: imaging of biological samples.

    Science.gov (United States)

    Fukano, Takashi; Miyawaki, Atsushi

    2003-07-01

    We have developed a whole-field fluorescence microscope equipped with a Digital Micromirror Device to acquire optically sectioned images by using the fringe-projection technique and the phase-shift method. This system allows free control of optical sectioning strength through computer-controlled alteration of the fringe period projected onto a sample. We have employed this system to image viable cells expressing fluorescent proteins and discussed its biological applications.
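    In the phase-shift method referred to here, three frames are typically acquired with the projected fringe pattern stepped by 120 degrees, and the optically sectioned component is recovered from pairwise differences (the classic square-root-of-squared-differences formula of structured-illumination sectioning; the authors' exact processing may differ):

```python
import numpy as np

def optical_section(i1, i2, i3):
    """Sectioned image from three frames whose fringe pattern is phase-
    stepped by 0, 2*pi/3 and 4*pi/3: pairwise differences cancel the
    out-of-focus (unmodulated) background, leaving only the modulated,
    in-focus component."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
```

    For frames of the form bg + b*cos(phi + phase step), the result is proportional to the modulation depth b at every pixel, independent of the background and the local fringe phase, which is exactly the sectioning property being exploited.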

  3. Detection of PRRSV in 218 field samples using six molecular methods: What we are looking for?

    DEFF Research Database (Denmark)

    Toplak, Ivan; Štukelj, Marina; Gracieux, Patrice

    2012-01-01

    Objectives The purpose of this study was to determine the sensitivity and the specificity of six molecular methods used for the detection of porcine reproductive and respiratory syndrome virus (PRRSV). Methods 218 field samples (serum, tissues) were collected between 2009 and 2011 from 50 PRRSV p......-time) Continuously follow the genetic evolution of especially Type I PRRSV subtype viruses and regularly update their primer sequences.

  4. Cortical information flow in Parkinson's disease: a composite network/field model

    Directory of Open Access Journals (Sweden)

    Cliff C. Kerr

    2013-04-01

    Full Text Available The basal ganglia play a crucial role in the execution of movements, as demonstrated by the severe motor deficits that accompany Parkinson's disease (PD). Since motor commands originate in the cortex, an important question is how the basal ganglia influence cortical information flow, and how this influence becomes pathological in PD. To explore this, we developed a composite neuronal network/neural field model. The network model consisted of 4950 spiking neurons, divided into 15 excitatory and inhibitory cell populations in the thalamus and cortex. The field model consisted of the cortex, thalamus, striatum, subthalamic nucleus, and globus pallidus. Both models have been separately validated in previous work. Three field models were used: one with basal ganglia parameters based on data from healthy individuals, one based on data from individuals with PD, and one purely thalamocortical model. Spikes generated by these field models were then used to drive the network model. Compared to the network driven by the healthy model, the PD-driven network had lower firing rates, a shift in spectral power towards lower frequencies, and higher probability of bursting; each of these findings is consistent with empirical data on PD. In the healthy model, we found strong Granger causality in the beta and low gamma bands between cortical layers, but this was largely absent in the PD model. In particular, the reduction in Granger causality from the main "input" layer of the cortex (layer 4) to the main "output" layer (layer 5) was pronounced. This may account for symptoms of PD that seem to reflect deficits in information flow, such as bradykinesia. In general, these results demonstrate that the brain's large-scale oscillatory environment, represented here by the field model, strongly influences the information processing that occurs within its subnetworks.
Hence, it may be preferable to drive spiking network models with physiologically realistic inputs rather than

  5. Electric field computation and measurements in the electroporation of inhomogeneous samples

    Science.gov (United States)

    Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta

    2017-12-01

    In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is helped by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, which is called electroporation, exploits the conductivity of the tissues; however, the tumor tissue can be characterized by inhomogeneous areas, possibly causing a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations, considering a non-linear conductivity-field relationship, are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model in view of identifying the equivalent resistance between pairs of electrodes.

  6. A new method of geobiological sample storage by snap freezing under alternating magnetic field

    Science.gov (United States)

    Morono, Y.; Terada, T.; Yamamoto, Y.; Hirose, T.; Xiao, N.; Sugeno, M.; Ohwada, N.; Inagaki, F.

    2012-12-01

    Scientific ocean drilling provides unprecedented opportunities to study the deep subseafloor biosphere. In particular, subseafloor life and its genomes are significant components, since their activity may play a role in the global biogeochemical cycling of carbon, nitrogen, sulfur, metals, and other elements over geologic time. Given the significance of deep biological components, as well as the potential application of future analytical technologies to the core, the material (or portions thereof) should be preserved in the most appropriate manner for long-term storage. Here we report a novel technology to freeze cored samples with the least damage to scientifically important characteristics, including microbial cells. In a conventional freezer, the expanding volume of pore space caused by the formation of ice crystals may change the (micro-)structure in the core sample (e.g., cells, microfossils). The cell alive system (CAS) is a new super-quick freezing system that applies an alternating magnetic field to vibrate water molecules in the samples: the vibration leads to a stable super-cooled condition of the liquid-phase water at around -7 to -10 degrees C, keeping the liquid uniformly at low temperature. A subsequent further decrease of temperature enables snap, and hence uniform, freezing of the samples with minimal ice crystal formation, resulting in minimal damage to structurally fragile components such as microbial cells and their DNA. We tested the CAS freezing technique on sediment core samples obtained during the Chikyu training cruise 905 and others. The core samples from various depths were sub-sampled and immediately frozen in the CAS system, along with the standard freezing method at temperatures of -20, -80, and -196 (liquid nitrogen) degrees C. Microbial cell abundance showed that normal freezing decreased the number of microbial cells, whereas CAS freezing resulted in almost no loss of cells.
We also tested

  7. Watersheds for U.S Geological Survey National Stream Quality Accounting Network (NASQAN) sampling sites 1996-2000.

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — A digital representation of the watersheds of 43 sites on large river systems sampled by the National Stream Quality Accounting Network (NASQAN) of the U. S....

  8. Search for life on Mars in surface samples: Lessons from the 1999 Marsokhod rover field experiment

    Science.gov (United States)

    Newsom, Horton E.; Bishop, J.L.; Cockell, C.; Roush, T.L.; Johnson, J. R.

    2001-01-01

    The Marsokhod 1999 field experiment in the Mojave Desert included a simulation of a rover-based sample selection mission. As part of this mission, a test was made of strategies and analytical techniques for identifying past or present life in environments expected to be present on Mars. A combination of visual clues from high-resolution images and the detection of an important biomolecule (chlorophyll) with visible/near-infrared (NIR) spectroscopy led to the successful identification of a rock with evidence of cryptoendolithic organisms. The sample was identified in high-resolution images (3 times the resolution of the Imager for Mars Pathfinder camera) on the basis of a green tinge and textural information suggesting the presence of a thin, partially missing exfoliating layer revealing the organisms. The presence of chlorophyll bands in similar samples was observed in visible/NIR spectra of samples in the field and later confirmed in the laboratory using the same spectrometer. Raman spectroscopy in the laboratory, simulating a remote measurement technique, also detected evidence of carotenoids in samples from the same area. Laboratory analysis confirmed that the subsurface layer of the rock is inhabited by a community of coccoid Chroococcidiopsis cyanobacteria. The identification of minerals in the field, including carbonates and serpentine, that are associated with aqueous processes was also demonstrated using the visible/NIR spectrometer. Other lessons learned that are applicable to future rover missions include the benefits of web-based programs for target selection and for daily mission planning and the need for involvement of the science team in optimizing image compression schemes based on the retention of visual signature characteristics. Copyright 2000 by the American Geophysical Union.

  9. Fast Road Network Extraction in Satellite Images Using Mathematical Morphology and Markov Random Fields

    Directory of Open Access Journals (Sweden)

    Géraud Thierry

    2004-01-01

    Full Text Available We present a fast method for road network extraction in satellite images. It can be seen as a transposition of the segmentation scheme "watershed transform / region adjacency graph / Markov random fields" to the extraction of curvilinear objects. Many road extractors in the literature are composed of two stages. The first acts like a filter that decides from a local analysis, at every image point, whether there is a road or not. The second stage aims at obtaining the road network structure. In the method we propose, we rely on a "potential" image, that is, unstructured image data that can be derived from any road extractor filter. In such a potential image, the value assigned to a point is a measure of its likelihood to be located in the middle of a road. A filtering step applied to the potential image relies on the area closing operator followed by the watershed transform to obtain a connected line which encloses the road network. Then a graph describing adjacency relationships between watershed lines is built. Defining Markov random fields upon this graph, associated with an energetic model of road networks, leads to the expression of road network extraction as a global energy minimization problem. This method can easily be adapted to other image processing fields where the recognition of curvilinear structures is involved.
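The final stage above casts road extraction as energy minimization over a Markov random field defined on the watershed-line adjacency graph. As a hedged illustration only, the toy sketch below labels chain-connected "segments" as road or background with iterated conditional modes; the unary/pairwise energies, the optimizer, and all numbers are invented stand-ins, not the paper's exact model:

```python
import numpy as np

def icm_road_labeling(potential, edges, beta=0.3, n_iter=10):
    """Label watershed-line segments as road (1) or background (0) by
    minimizing a toy MRF energy with iterated conditional modes:
    unary cost = 1 - p for road, p for background (p = potential value);
    pairwise   = beta per disagreeing neighbor on the adjacency graph.
    Energy terms and optimizer are illustrative, not the paper's model."""
    n = len(potential)
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    labels = (potential > 0.5).astype(int)        # greedy initialization
    for _ in range(n_iter):
        for i in range(n):
            best_label, best_cost = labels[i], np.inf
            for cand in (0, 1):
                unary = 1.0 - potential[i] if cand == 1 else potential[i]
                pairwise = beta * sum(cand != labels[j] for j in nbrs[i])
                if unary + pairwise < best_cost:
                    best_label, best_cost = cand, unary + pairwise
            labels[i] = best_label
    return labels

# Five segments in a chain; the low-potential gap at index 2 is bridged
# by the smoothness term, giving a connected road line.
potential = np.array([0.9, 0.8, 0.4, 0.85, 0.1])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(icm_road_labeling(potential, edges))
```

The pairwise term is what lets the global minimization close small gaps left by the local filter, which is the point of moving from per-pixel decisions to an MRF on the graph.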

  10. Mean field approximation for biased diffusion on Japanese inter-firm trading network.

    Directory of Open Access Journals (Sweden)

    Hayafumi Watanabe

    Full Text Available By analysing the financial data of firms across Japan, a nonlinear power law with an exponent of 1.3 was observed between the number of business partners (i.e. the degree of the inter-firm trading network) and sales. In a previous study using numerical simulations, we found that this scaling can be explained by both the money-transport model, where a firm (i.e. customer) distributes money to its out-edges (suppliers) in proportion to the in-degree of destinations, and by the correlations within the Japanese inter-firm trading network. However, in that previous study, we could not specifically identify what types of structural properties (or correlations) of the network determine the 1.3 exponent. In the present study, we more clearly elucidate the relationship between this nonlinear scaling and the network structure by applying mean-field approximation of diffusion in a complex network to this money-transport model. Using theoretical analysis, we obtained the mean-field solution of the model and found that, in the case of the Japanese firms, the scaling exponent of 1.3 can be determined from the power law of the average degree of the nearest neighbours of the network, which has an exponent of -0.7.
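The money-transport rule described above (a customer splits its money among suppliers in proportion to the suppliers' in-degrees) can be sketched in a few lines. This is a toy version on a random directed network, meant only to show the update rule and that money is redistributed without loss; the network size, density, and boundary handling are invented for the sketch:

```python
import numpy as np

def transport_step(adj, money, in_deg):
    """One step of a money-transport model on a directed trading network:
    each customer firm i splits its money among its suppliers (out-edges
    adj[i, j] = 1) in proportion to the suppliers' in-degrees. A toy
    version of the model in the abstract; details are illustrative."""
    new_money = np.zeros_like(money)
    for i in range(len(money)):
        suppliers = np.nonzero(adj[i])[0]
        if suppliers.size == 0:
            new_money[i] += money[i]              # nowhere to spend: keep it
            continue
        weights = in_deg[suppliers] / in_deg[suppliers].sum()
        new_money[suppliers] += money[i] * weights
    return new_money

rng = np.random.default_rng(0)
n = 50
adj = (rng.random((n, n)) < 0.1).astype(int)      # adj[i, j] = 1: i pays j
np.fill_diagonal(adj, 0)
in_deg = adj.sum(axis=0).astype(float)            # number of customers of j
money = np.ones(n)
sales = transport_step(adj, money, in_deg)        # sales = money received
print(sales.sum())
```

Because every firm's weights sum to one, total money is conserved across a step; the heterogeneity in `sales` then reflects the in-degree-weighted routing that the mean-field analysis approximates.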

  11. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten; Yoo, Hong Il

    Longitudinal experiments allow one to evaluate the temporal stability of latent preferences, but raise concerns about sample selection and attrition that may confound inferences about temporal stability. We evaluate the hypothesis of temporal stability in risk preferences using a remarkable data set that combines socio-demographic information from the Danish Civil Registry with information on risk attitudes from a longitudinal field experiment. Our experimental design builds in explicit randomization on the incentives for participation. The results show that the use of different participation incentives can affect sample response rates and help one identify the effects of selection. Correcting for endogenous sample selection and panel attrition changes inferences about risk preferences in an economically and statistically significant manner. We draw mixed conclusions on temporal stability of risk preferences.

  12. Mean-Field Models for Heterogeneous Networks of Two-Dimensional Integrate and Fire Neurons

    Directory of Open Access Journals (Sweden)

    Wilten eNicola

    2013-12-01

    Full Text Available We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential, and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons, and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.

  13. Mean-field models for heterogeneous networks of two-dimensional integrate and fire neurons.

    Science.gov (United States)

    Nicola, Wilten; Campbell, Sue Ann

    2013-01-01

    We analytically derive mean-field models for all-to-all coupled networks of heterogeneous, adapting, two-dimensional integrate and fire neurons. The class of models we consider includes the Izhikevich, adaptive exponential and quartic integrate and fire models. The heterogeneity in the parameters leads to different moment closure assumptions that can be made in the derivation of the mean-field model from the population density equation for the large network. Three different moment closure assumptions lead to three different mean-field systems. These systems can be used for distinct purposes such as bifurcation analysis of the large networks, prediction of steady state firing rate distributions, parameter estimation for actual neurons and faster exploration of the parameter space. We use the mean-field systems to analyze adaptation induced bursting under realistic sources of heterogeneity in multiple parameters. Our analysis demonstrates that the presence of heterogeneity causes the Hopf bifurcation associated with the emergence of bursting to change from sub-critical to super-critical. This is confirmed with numerical simulations of the full network for biologically reasonable parameter values. This change decreases the plausibility of adaptation being the cause of bursting in hippocampal area CA3, an area with a sizable population of heavily coupled, strongly adapting neurons.
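The class of two-dimensional integrate and fire models treated here includes the Izhikevich model, whose single-neuron dynamics are v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with the reset v ← c, u ← u + d when v crosses the spike cutoff. A minimal Euler sketch of one such unit (regular-spiking parameter values from the standard literature; the paper studies large heterogeneous networks of these units, not single cells):

```python
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=1000.0, dt=0.5):
    """Euler simulation of a single Izhikevich neuron, one member of the
    two-dimensional integrate-and-fire class covered by the mean-field
    derivation. Regular-spiking parameters; values are illustrative."""
    v, u = c, b * c
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike cutoff: record, then reset
            spike_times.append(step * dt)
            v, u = c, u + d
    return spike_times

spikes = izhikevich()
print(len(spikes))
```

The adaptation variable u (strength set by d) is what, summed over a heterogeneous population, drives the bursting transitions the mean-field systems analyze.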

  14. High Performance Ambipolar Field-Effect Transistor of Random Network Carbon Nanotubes

    NARCIS (Netherlands)

    Bisri, Satria Zulkarnaen; Gao, Jia; Derenskyi, Vladimir; Gomulya, Widianta; Iezhokin, Igor; Gordiichuk, Pavlo; Herrmann, Andreas; Loi, Maria Antonietta

    2012-01-01

    Ambipolar field-effect transistors of random network carbon nanotubes are fabricated from an enriched dispersion utilizing a conjugated polymer as the selective purifying medium. The devices exhibit high mobility values for both holes and electrons (3 cm²/V·s) with a high on/off ratio (10⁶). The

  15. Carbon Nanotube Network Ambipolar Field-Effect Transistors with 10⁸ On/Off Ratio

    NARCIS (Netherlands)

    Derenskyi, Vladimir; Gomulya, Widianta; Salazar Rios, Jorge Mario; Fritsch, Martin; Fröhlich, Nils; Jung, Stefan; Allard, Sybille; Bisri, Satria Zulkarnaen; Gordiichuk, Pavlo; Herrmann, Andreas; Scherf, Ullrich; Loi, Maria Antonietta

    2017-01-01

    Polymer wrapping is a highly effective method of selecting semiconducting carbon nanotubes and dispersing them in solution. Semi-aligned semiconducting carbon nanotube networks are obtained by blade coating, an effective and scalable process. The field-effect transistor (FET) performance can be

  16. Quantum perceptron over a field and neural network architecture selection in a quantum computer.

    Science.gov (United States)

    da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa

    2016-04-01

    In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. How Does Sampling Methodology Influence Molecular Detection and Isolation Success in Influenza A Virus Field Studies?

    Science.gov (United States)

    Latorre-Margalef, Neus; Avril, Alexis; Tolf, Conny; Olsen, Björn; Waldenström, Jonas

    2015-12-11

    Wild waterfowl are important reservoir hosts for influenza A virus (IAV) and a potential source of spillover infections in other hosts, including poultry and swine. The emergence of highly pathogenic avian influenza (HPAI) viruses, such as H5N1 and H5N8, and subsequent spread along migratory flyways prompted the initiation of several programs in Europe, North America, and Africa to monitor circulation of HPAI and low-pathogenicity precursor viruses (low-pathogenicity avian influenza [LPAI] viruses). Given the costs of maintaining such programs, it is essential to establish best practice for field methodologies to provide robust data for epidemiological interpretation. Here, we use long-term surveillance data from a single site to evaluate the influence of a number of parameters on virus detection and isolation of LPAI viruses. A total of 26,586 samples (oropharyngeal, fecal, and cloacal) collected from wild mallards were screened by real-time PCR, and positive samples were subjected to isolation in embryonated chicken eggs. The LPAI virus detection rate was influenced by the sample type: cloacal/fecal samples showed a consistently higher detection rate and lower cycle threshold (Ct) value than oropharyngeal samples. Molecular detection was more sensitive than isolation, and virus isolation success was proportional to the number of RNA copies in the sample. Interestingly, for a given Ct value, the isolation success was lower in samples from adult birds than in those from juveniles. Comparing the results of specific real-time reverse transcriptase (RRT)-PCRs and of isolation, it was clear that coinfections were common in the investigated birds. The effects of sample type and detection methods warrant some caution in interpretation of the surveillance data. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  18. Update on ESTCP Project ER-0918: Field Sampling and Sample Processing for Metals on DoD Ranges

    Science.gov (United States)

    2011-03-30

    [Briefing slide excerpts] Recovery of antimony is low with conventional analysis; a new digestion process is needed. The approach is applicable to both metals and energetics. Experimental design (Task 1): multi-increment versus grab samples; number of increments per decision unit; digestate preparation; comparison of sample types within a single decision unit (grab, multi-increment, and berm samples).

  19. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
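The stratified strategy above uses the gene-flow model output as an auxiliary variable to define strata with contrasting presence rates, then recombines per-stratum sample means with stratum weights. A hedged numerical sketch on simulated grains (the field size, decay profile, stratum count, and sample sizes below are all invented for illustration, not the paper's data or settings):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy field of 10,000 grains. The true adventitious-presence probability
# decays with distance to the GM field; `aux` stands in for the gene-flow
# model output. All numbers are illustrative.
n = 10_000
dist = rng.uniform(0.0, 100.0, n)                 # distance to GM field
p_true = 0.05 * np.exp(-dist / 15.0)
gm = rng.random(n) < p_true                       # grain carries transgene?
aux = np.exp(-dist / 15.0)                        # auxiliary variable

def stratified_estimate(n_sample, n_strata=4):
    """Sample equally from strata defined by auxiliary-variable quantiles,
    then recombine stratum means with stratum weights (field proportions)."""
    edges = np.quantile(aux, np.linspace(0.0, 1.0, n_strata + 1))
    estimate = 0.0
    for k in range(n_strata):
        members = np.nonzero((aux >= edges[k]) & (aux <= edges[k + 1]))[0]
        sample = rng.choice(members, n_sample // n_strata, replace=False)
        estimate += (members.size / n) * gm[sample].mean()
    return estimate

true_rate = gm.mean()
random_est = gm[rng.choice(n, 200, replace=False)].mean()
strat_est = stratified_estimate(200)
print(true_rate, random_est, strat_est)
```

With a rate this close to the 0.9% threshold, the gain from stratification is in variance: concentrating sampling effort where the model predicts contamination tightens the estimate for the same total sample size.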

  20. Detection of Campylobacter in human and animal field samples in Cambodia.

    Science.gov (United States)

    Osbjer, Kristina; Tano, Eva; Chhayheng, Leang; Mac-Kwashie, Akofa Olivia; Fernström, Lise-Lotte; Ellström, Patrik; Sokerya, Seng; Sokheng, Choup; Mom, Veng; Chheng, Kannarath; San, Sorn; Davun, Holl; Boqvist, Sofia; Rautelin, Hilpi; Magnusson, Ulf

    2016-06-01

    Campylobacter are zoonotic bacteria and a leading cause of human gastroenteritis worldwide with Campylobacter jejuni and C. coli being the most commonly detected species. The aim of this study was to detect Campylobacter in humans and livestock (chickens, ducks, pigs, cattle, water buffalo, quail, pigeons and geese) in rural households by routine culturing and multiplex PCR in faecal samples frozen before analysis. Of 681 human samples, 82 (12%) tested positive by PCR (C. jejuni in 66 samples and C. coli in 16), but none by routine culture. Children were more commonly Campylobacter positive (19%) than adult males (8%) and females (7%). Of 853 livestock samples, 106 (12%) tested positive by routine culture and 352 (41%) by PCR. Campylobacter jejuni was more frequent in chickens and ducks and C. coli in pigs. In conclusion, Campylobacter proved to be highly prevalent by PCR in children (19%), ducks (24%), chickens (56%) and pigs (72%). Routine culturing was insufficiently sensitive in detecting Campylobacter in field samples frozen before analysis. These findings suggest that PCR should be the preferred diagnostic method for detection of Campylobacter in humans and livestock where timely culture is not feasible. © 2016 The Authors. APMIS published by John Wiley & Sons Ltd on behalf of Scandinavian Societies for Medical Microbiology and Pathology.

  1. Confirmatory analysis of field-presumptive GSR test sample using SEM/EDS

    Science.gov (United States)

    Toal, Sarah J.; Niemeyer, Wayne D.; Conte, Sean; Montgomery, Daniel D.; Erikson, Gregory S.

    2014-09-01

    RedXDefense has developed an automated red-light/green-light field presumptive lead test using a sampling pad which can subsequently be processed in a scanning electron microscope for GSR confirmation. The XCAT's sampling card is used to acquire a sample from a suspect's hands at the scene and give investigators an immediate presumptive result as to the presence of lead, possibly from primer residue. Positive results can be obtained after firing as little as one shot. The same sampling card can then be sent to a crime lab and processed on the SEM for GSR following ASTM E-1588-10 Standard Guide for Gunshot Residue Analysis by Scanning Electron Microscopy/Energy Dispersive X-Ray Spectrometry, in the same manner as the existing tape lifts currently used in the field. Detection of GSR-characteristic particles (fused lead, barium, and antimony) as small as 0.8 microns (0.5 micron resolution) has been achieved using a JEOL JSM-6480LV SEM equipped with an Oxford Instruments INCA EDS system with a 50 mm² SDD detector, at 350× magnification, in low-vacuum mode and in high-vacuum mode after coating with carbon in a sputter coater. GSR particles remain stable on the sampling pad for a minimum of two months after chemical exposure (long-term stability tests are in progress). The presumptive result provided by the XCAT yields immediate actionable intelligence to law enforcement to facilitate their investigation, without compromising the confirmatory test necessary to further support the investigation and legal case.

  2. Deoxynivalenol, zearalenone, and Fusarium graminearum contamination of cereal straw; field distribution; and sampling of big bales.

    Science.gov (United States)

    Häggblom, P; Nordkvist, E

    2015-05-01

    Sampling of straw bales from wheat, barley, and oats was carried out after harvest, showing large variations in deoxynivalenol (DON) and zearalenone (ZEN) levels. In the wheat field, DON was detected in all straw samples with an average DON concentration of 976 μg/kg and a median of 525 μg/kg, while in four bales the concentrations were above 3000 μg/kg. For ZEN, the concentrations were more uniform, with an average concentration of 11 μg/kg. The barley straw bales were all positive for DON, with an average concentration of 449 μg/kg and three bales above 800 μg/kg. In oat straw, the average DON concentration was 6719 μg/kg, with the lowest concentration at 2614 μg/kg and eight samples above 8000 μg/kg. ZEN contamination was detected in all bales, with an average concentration of 53 μg/kg and a highest concentration of 219 μg/kg. Oat bales from another field showed an average DON concentration of 16,382 μg/kg; ZEN concentrations in these oat bales were on average 153 μg/kg, with a maximum of 284 μg/kg. Levels of Fusarium graminearum DNA were higher in oat straw (max 6444 pg DNA/mg straw) compared to straw from wheat or barley. The significance of mycotoxin exposure from straw should not be neglected, particularly in years when high levels of DON and ZEN are also detected in the feed grain. With a limited number of samples, preferably using a sampling probe, it is possible to distinguish lots of straw that should not be used as bedding material for pigs.

  3. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Directory of Open Access Journals (Sweden)

    Alberto Mazzoni

    2015-12-01

    Full Text Available Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.

  4. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Science.gov (United States)

    Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
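The reported result is that a fixed linear combination of the LIF synaptic currents serves as an LFP proxy. The shape of such a proxy can be sketched as below; note that the coefficients, the sign convention, and the delay used here are placeholder values for illustration, not the coefficients fitted in the paper:

```python
import numpy as np

def lfp_proxy(ampa, gaba, alpha=1.0, beta=1.65, delay_steps=6):
    """Weighted-sum LFP proxy: a fixed linear combination of the
    population-summed AMPA and (delayed) GABA synaptic currents.
    alpha, beta, and the delay are placeholder values for the sketch,
    not the values fitted against the 3D ground-truth model."""
    gaba_delayed = np.roll(gaba, delay_steps)
    gaba_delayed[:delay_steps] = gaba[0]          # pad start of the trace
    return alpha * ampa - beta * gaba_delayed

t = np.arange(0.0, 1.0, 0.001)                    # 1 s at 1 ms resolution
ampa = np.sin(2 * np.pi * 10 * t)                 # toy population currents
gaba = 0.5 * np.sin(2 * np.pi * 10 * t - 0.5)
proxy = lfp_proxy(ampa, gaba)
print(proxy.shape)
```

In practice `ampa` and `gaba` would be the summed synaptic currents recorded from the LIF simulation, and the proxy trace would be compared against experimental LFP recordings.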

  5. View-interpolation of sparsely sampled sinogram using convolutional neural network

    Science.gov (United States)

    Lee, Hoyeon; Lee, Jongha; Cho, Suengryong

    2017-02-01

    Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. Sparse-view CT is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, where advanced iterative image reconstruction yields varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to find missing projection data, and compared its performance with that of other interpolation techniques.
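The "other interpolation techniques" a learned interpolator is compared against start with plain linear interpolation along the angular axis of the sinogram. A minimal numpy sketch of that baseline (array shapes and the keep-every-other-view pattern are chosen for illustration):

```python
import numpy as np

def interpolate_views(sinogram, keep_every=2):
    """Fill skipped projection views by linear interpolation along the
    angular axis -- the simple baseline against which a learned (CNN)
    view interpolator would be compared. sinogram: (n_views, n_detectors)."""
    n_views, n_det = sinogram.shape
    kept = np.arange(0, n_views, keep_every)      # indices of measured views
    full = np.empty((n_views, n_det))
    for det in range(n_det):
        full[:, det] = np.interp(np.arange(n_views), kept, sinogram[kept, det])
    return full

# A sinogram varying linearly with view angle is recovered exactly,
# provided the first and last views are among the measured ones.
sino = np.linspace(0.0, 1.0, 9)[:, None] * np.ones((1, 5))
dense = interpolate_views(sino, keep_every=2)
print(np.allclose(dense, sino))
```

Real sinograms are not linear in angle, which is exactly where this baseline breaks down and a CNN trained to inpaint the missing views can do better.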

  6. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures

    Directory of Open Access Journals (Sweden)

    Daniel Carl Miner

    2014-11-01

    Full Text Available The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.

  7. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures.

    Science.gov (United States)

    Miner, Daniel C; Triesch, Jochen

    2014-01-01

    The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity, are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.
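The core idea, that a virtual "slice" through a geometric network changes measured statistics such as reciprocity, can be probed with a minimal model. The distance profile, constants, and slice geometry below are invented for the sketch and are not the parameters used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def geometric_network(pos, scale=50.0, p_max=0.3):
    """Directed network with distance-dependent connection probability
    p(d) = p_max * exp(-d / scale). Profile and constants are
    illustrative stand-ins, not the paper's model parameters."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    adj = rng.random(d.shape) < p_max * np.exp(-d / scale)
    np.fill_diagonal(adj, False)
    return adj

def reciprocity(adj):
    """Fraction of existing connections whose reverse also exists."""
    mutual = adj & adj.T
    return mutual.sum() / max(adj.sum(), 1)

pos = rng.uniform(0.0, 300.0, size=(400, 3))      # neuron positions (um)
adj = geometric_network(pos)
in_slice = pos[:, 2] < 100.0                      # 100-um virtual slice
sub = adj[np.ix_(in_slice, in_slice)]
print(reciprocity(adj), reciprocity(sub))
```

Varying the slab thickness and the sampled area in this kind of model is how one can test whether the apparently contradictory reciprocity measurements could be artifacts of the slicing procedure.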

  8. Network Neurodegeneration in Alzheimer’s Disease via MRI based Shape Diffeomorphometry and High Field Atlasing

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2015-05-01

    Full Text Available This paper examines MRI analysis of neurodegeneration in Alzheimer's Disease (AD) in a network of structures within the medial temporal lobe using diffeomorphometry methods coupled with high-field atlasing, in which the entorhinal cortex is partitioned into nine subareas. The morphometry markers for three groups of subjects (controls, preclinical AD, and symptomatic AD) are indexed to template coordinates measured with respect to these nine subareas. The location and timing of changes are examined within the subareas as they pertain to the classic Braak and Braak staging by comparing the three groups. We demonstrate that the earliest preclinical changes in the population occur in the lateral-most sulcal extent of the entorhinal cortex (alluded to as transentorhinal cortex by Braak and Braak), and then proceed medially, which is consistent with the Braak and Braak staging. We use high-field 11T atlasing to demonstrate that the network changes are occurring at the junctures of the substructures in this medial temporal lobe network. Temporal progression of the disease through the network is also examined via changepoint analysis, demonstrating earliest changes in the entorhinal cortex. The differential expression of the rate of atrophy with progression, signaling the changepoint time across the network, is demonstrated to occur in the intermediate caudal subarea of the entorhinal cortex, which has been noted to be proximal to the hippocampus. This, coupled with the findings of nearby basolateral involvement in the amygdala, demonstrates the selectivity of neurodegeneration in early AD.

  9. Inferring gene regulatory networks by singular value decomposition and gravitation field algorithm.

    Science.gov (United States)

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, the effectiveness and efficiency of previous algorithms remain limited. In this work, we propose a novel inference algorithm for gene expression data based on a differential equation model. The algorithm combines two methods for inferring GRNs. Before reconstructing GRNs, singular value decomposition is used to decompose the gene expression data, determine the algorithm's solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, a modified gravitation field algorithm is used to infer GRNs by optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a network database. The Bayesian method and the traditional differential equation model were also used to infer GRNs, and their results were compared with those of the proposed algorithm. Genetic algorithms and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms.
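The SVD step above can be illustrated for a linear differential-equation model dX/dt = A·X with fewer time points than genes: the SVD of the expression matrix yields one minimum-norm connectivity matrix plus a null-space basis that parameterizes the full family of exact candidate solutions. This is only a sketch of that decomposition step under the linear-model assumption (the data, shapes, and tolerance are invented); the paper's pipeline then searches the candidate family with a gravitation field algorithm:

```python
import numpy as np

def grn_candidates(X, X_dot):
    """SVD step for the linear GRN model dX/dt = A X (genes x times).
    Returns the minimum-norm network A0 and a null-space basis N such
    that every exact candidate solution is A0 + C @ N.T for arbitrary C."""
    U, s, Vt = np.linalg.svd(X, full_matrices=True)
    r = int((s > 1e-10 * s.max()).sum())     # numerical rank of X
    A0 = X_dot @ np.linalg.pinv(X)           # minimum-norm solution
    N = U[:, r:]                             # left null-space basis of X
    return A0, N

# 4 genes observed at 3 time points: an underdetermined system with a
# one-dimensional family of exact solutions per network row.
rng = np.random.default_rng(3)
A_true = rng.normal(size=(4, 4))
X = rng.normal(size=(4, 3))
X_dot = A_true @ X
A0, N = grn_candidates(X, X_dot)
print(np.allclose(A0 @ X, X_dot), N.shape)
```

Any search heuristic (gravitation field, genetic algorithm, simulated annealing) then only needs to optimize the free coefficients C against sparsity or fit criteria, rather than the full n×n matrix.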

  10. A spatially distributed isotope sampling network in a snow-dominated catchment for the quantification of snow meltwater

    Science.gov (United States)

    Rücker, Andrea; Boss, Stefan; Von Freyberg, Jana; Zappa, Massimiliano; Kirchner, James

    2017-04-01

    In mountainous catchments with seasonal snowpacks, river discharge in downstream valleys is largely sustained by snowmelt in spring and summer. Future climate warming will likely reduce snow volumes and lead to earlier and faster snowmelt in such catchments. This, in turn, may increase the risk of summer low flows and hydrological droughts. Improved runoff predictions are thus required in order to adapt water management to future climatic conditions and to ensure the availability of fresh water throughout the year. However, a detailed understanding of the hydrological processes is crucial to obtain robust predictions of river streamflow. This in turn requires fingerprinting source areas of streamflow, tracing water flow pathways, and measuring timescales of catchment storage, using tracers such as stable water isotopes (18O, 2H). For this reason, we have established an isotope sampling network in the Alptal, a snowmelt-dominated catchment (46.4 km2) in central Switzerland, as part of the SREP-Drought project (Snow Resources and the Early Prediction of hydrological DROUGHT in mountainous streams). Precipitation and snow cores are analyzed for their isotopic signature at daily or weekly intervals. Three-week bulk samples of precipitation are also collected on a transect along the Alptal valley bottom, and along an elevational transect perpendicular to the Alptal valley axis. Streamwater samples are taken at the catchment outlet as well as in two small nested sub-catchments (< 2 km2). In order to capture the isotopic signature of naturally occurring snowmelt, a fully automatic snow lysimeter system was developed, which also facilitates real-time monitoring of snowmelt events, system status and environmental conditions (air and soil temperature). Three lysimeter systems were installed within the catchment, in one forested site and two open field sites at different elevations, and have been operational since November 2016. 
We will present the isotope time series from our

  11. Imaging samples larger than the field of view: the SLS experience

    Science.gov (United States)

    Vogiatzis Oikonomidis, Ioannis; Lovric, Goran; Cremona, Tiziana P.; Arcadu, Filippo; Patera, Alessandra; Schittny, Johannes C.; Stampanoni, Marco

    2017-06-01

    Volumetric datasets with micrometer spatial and sub-second temporal resolutions are nowadays routinely acquired using synchrotron X-ray tomographic microscopy (SRXTM). Although SRXTM technology allows the examination of multiple samples with short scan times, many specimens are larger than the field-of-view (FOV) provided by the detector. The extension of the FOV in the direction perpendicular to the rotation axis remains non-trivial. We present a method that can efficiently increase the FOV by merging volumetric datasets obtained from region-of-interest tomographies at different 3D positions of the sample, with a minimal amount of artefacts and the ability to handle large amounts of data. The method has been successfully applied for the three-dimensional imaging of a small number of mouse lung acini of intact animals, where pixel sizes down to the micrometer range and short exposure times are required.
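
    The abstract does not disclose the merging algorithm itself; as a generic illustration of how the offset between two overlapping region-of-interest acquisitions can be recovered before blending, here is a phase-correlation sketch on toy 2D tiles (all data, sizes and the displacement are assumed):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    # A random 2D "sample" imaged as two overlapping tiles, mimicking two
    # region-of-interest scans displaced along one axis (toy data).
    full = rng.random((64, 96))
    a = full[:, :64] - full[:, :64].mean()
    b = full[:, 32:] - full[:, 32:].mean()     # true displacement: 32 pixels

    # Phase correlation: whiten the cross-power spectrum; the peak of its
    # inverse transform sits at the relative shift between the tiles.
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    ```

    With the shift known, the overlapping voxels can be averaged or feathered to merge the tiles into one extended-FOV volume.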

  12. Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step

    Science.gov (United States)

    Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.

    1990-01-01

    The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the calculation of the velocity:data rate correlation coefficient as a measure of statistical dependence and thus the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data since the dependence was weak, even in the boundary layer and shear regions. The paper emphasizes the necessity of determining the degree of particle sampling bias for each measurement ensemble rather than correcting the data with generalized assumptions. Further, it recommends that the calculation of the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.
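
    The recommended diagnostic is a plain Pearson correlation between the sampled velocities and the instantaneous data rate; a toy illustration, with an assumed seeding-bias model for the shear-layer case, might look like:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 5000
    u = rng.normal(30.0, 5.0, n)            # measured velocities (m/s)

    # Free-stream-like ensemble: particle arrival rate unrelated to velocity.
    rate_free = rng.normal(1000.0, 50.0, n)

    # Shear-layer-like ensemble: arrival rate grows with |velocity|, a toy
    # model of seeding bias (assumed for illustration only).
    rate_shear = 30.0 * np.abs(u) + rng.normal(0.0, 20.0, n)

    r_free = np.corrcoef(u, rate_free)[0, 1]    # ~0: negligible bias
    r_shear = np.corrcoef(u, rate_shear)[0, 1]  # near 1: strong dependence
    ```

    A near-zero coefficient suggests the ensemble may be used as-is, while a large magnitude flags the need for (mild) bias correction.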

  13. Correlated Spatio-Temporal Data Collection in Wireless Sensor Networks Based on Low Rank Matrix Approximation and Optimized Node Sampling

    Directory of Open Access Journals (Sweden)

    Xinglin Piao

    2014-12-01

    Full Text Available The emerging low rank matrix approximation (LRMA) method provides an energy-efficient scheme for data collection in wireless sensor networks (WSNs) by randomly sampling a subset of sensor nodes for data sensing. However, the existing LRMA based methods generally underutilize the spatial or temporal correlation of the sensing data, resulting in uneven energy consumption and thus shortening the network lifetime. In this paper, we propose a correlated spatio-temporal data collection method for WSNs based on LRMA. In the proposed method, both the temporal consistence and the spatial correlation of the sensing data are simultaneously integrated under a new LRMA model. Moreover, the network energy consumption issue is considered in the node sampling procedure. We use the Gini index to measure both the spatial distribution of the selected nodes and the evenness of the network energy status, then formulate and resolve an optimization problem to achieve optimized node sampling. The proposed method is evaluated on both simulated and real wireless networks and compared with state-of-the-art methods. The experimental results show the proposed method efficiently reduces the energy consumption of the network and prolongs the network lifetime with high data recovery accuracy and good stability.
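
    A minimal sketch of the low-rank recovery idea, using a generic hard-impute iteration and a Gini index of the per-node sampling load (this illustrates LRMA-style completion in general, not the paper's specific model or sampling optimization):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_nodes, n_steps, rank = 40, 60, 2

    # Toy spatio-temporally correlated field: every node mixes two latent signals.
    t = np.linspace(0.0, 4.0 * np.pi, n_steps)
    M = rng.normal(size=(n_nodes, rank)) @ np.vstack([np.sin(t), np.cos(t)])

    mask = rng.random(M.shape) < 0.4      # only 40% of readings are sampled

    # Hard-impute: alternate a rank-r truncated SVD with restoring observations.
    X = np.where(mask, M, 0.0)
    for _ in range(100):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X[mask] = M[mask]

    rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)

    def gini(x):
        """Gini index of a nonnegative vector (0 = perfectly even)."""
        x = np.sort(np.asarray(x, dtype=float))
        i = np.arange(1, x.size + 1)
        return float((2 * i - x.size - 1) @ x / (x.size * x.sum()))

    sampling_evenness = gini(mask.sum(axis=1))   # per-node sampling load
    ```

    In the paper's setting, a node-sampling optimizer would drive an index like `sampling_evenness` (and an analogous one on residual node energy) toward zero while keeping `rel_err` small.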

  14. Replicability and generalizability of PTSD networks: A cross-cultural multisite study of PTSD symptoms in four trauma patient samples

    DEFF Research Database (Denmark)

    Fried, Eiko I.; Eidhof, Marloes B.; Palic, Sabina

    2018-01-01

    The growing literature conceptualizing mental disorders like Posttraumatic Stress Disorder (PTSD) as networks of interacting symptoms faces three key challenges. Prior studies predominantly used (a) small samples with low power for precise network estimation, (b) non-clinical samples, and (c...

  15. Field-amplified online sample stacking capillary electrophoresis UV detection for plasma malondialdehyde measurement.

    Science.gov (United States)

    Zinellu, Angelo; Sotgia, Salvatore; Deiana, Luca; Carru, Ciriaco

    2011-07-01

    Malondialdehyde (MDA) determination is the most widely used method for monitoring lipid peroxidation. Here, we describe an easy field-amplified sample injection (FASI) CE method with UV detection for the measurement of free plasma MDA. MDA was detected within 8 min using 200 mmol/L Tris phosphate pH 5.0 as the running buffer. Plasma samples treated with ACN for protein elimination were directly injected on the capillary without complex cleanup and/or sample derivatization procedures. Using electrokinetic injection, the detection limit in real samples was 3 nmol/L, improving the LOD of previously described CE-based methods by about 100-fold. Precision tests indicate good repeatability of our method both for migration times (CV = 1.11%) and for areas (CV = 2.05%). Moreover, good reproducibility was obtained in intra- and inter-assay tests (CV = 2.55% and CV = 5.14%, respectively). The suitability of the method was tested by measuring MDA levels in 44 healthy volunteers. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Radiofrequency Field Distribution Assessment in Indoor Areas Covered by Wireless Local Area Network

    Directory of Open Access Journals (Sweden)

    HELBET, R.

    2009-02-01

    Full Text Available The electromagnetic environment becomes more congested day by day. Short-range radio communication systems are now part of everyday life, and there is a need to also assess the pollution level due to their emissions, taking human health and protection into account. There is consistent scientific evidence that environmental electromagnetic fields may cause undesirable biological effects or even health hazards. The present paper aims to give a view of the exposure level due to wireless local area network (WLAN) emissions alone, as part of environmental radiofrequency pollution. Highly accurate measurements were made indoors using a frequency-selective measurement system and identifying the correct settings for an error-minimum assessment. We focused on the analysis of the electric flux density distribution inside a room, in the far field of the emitting antennas, in the case of a single network communication channel. We analyze the influence of the network configuration parameters on the field level. Distance from the source and traffic rate are also important parameters that affect the exposure level. Our measurements indicate that in the immediate vicinity of WLAN stations the average field may reach as much as 13% of the reference levels currently accepted in the human exposure standards.

  17. Mean field dynamics of networks of delay-coupled noisy excitable units

    Energy Technology Data Exchange (ETDEWEB)

    Franović, Igor, E-mail: franovic@ipb.ac.rs [Scientific Computing Laboratory, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118, 11080 Belgrade (Serbia); Todorović, Kristina; Burić, Nikola [Department of Physics and Mathematics, Faculty of Pharmacy, University of Belgrade, Vojvode Stepe 450, Belgrade (Serbia); Vasović, Nebojša [Department of Applied Mathematics, Faculty of Mining and Geology, University of Belgrade, PO Box 162, Belgrade (Serbia)

    2016-06-08

    We use the mean-field approach to analyze the collective dynamics in macroscopic networks of stochastic FitzHugh-Nagumo units with delayed couplings. The conditions for validity of the two main approximations behind the model, called the Gaussian approximation and the Quasi-independence approximation, are examined. It is shown that the dynamics of the mean-field model may indicate in a self-consistent fashion the parameter domains where the Quasi-independence approximation fails. Apart from a network of globally coupled units, we also consider the paradigmatic setup of two interacting assemblies to demonstrate how our framework may be extended to hierarchical and modular networks. In both cases, the mean-field model can be used to qualitatively analyze the stability of the system, as well as the scenarios for the onset and the suppression of the collective mode. In quantitative terms, the mean-field model is capable of predicting the average oscillation frequency corresponding to the global variables of the exact system.
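
    A crude numerical sketch of this setup, an Euler-Maruyama simulation of globally coupled stochastic FitzHugh-Nagumo units with a delayed mean-field coupling (all parameter values are assumed, chosen to sit in the excitable regime):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    N, dt, T = 200, 0.005, 40.0
    eps, a = 0.05, 1.05            # time-scale separation and excitability
    c, tau, D = 0.1, 1.0, 0.003    # coupling strength, delay, noise intensity
    steps, lag = int(T / dt), int(tau / dt)

    x = rng.normal(-1.0, 0.1, N)   # fast (activator) variables
    y = rng.normal(-0.6, 0.1, N)   # slow (recovery) variables
    hist = np.full(lag, x.mean())  # ring buffer holding the delayed mean field
    X_trace = np.empty(steps)
    for k in range(steps):
        X_del = hist[k % lag]                       # X(t - tau)
        x += (x - x**3 / 3 - y + c * (X_del - x)) * dt / eps
        y += (x + a) * dt + rng.normal(0.0, np.sqrt(2 * D * dt), N)
        hist[k % lag] = x.mean()                    # read again lag steps later
        X_trace[k] = x.mean()
    ```

    The mean-field theory in the abstract replaces the N-unit simulation by closed equations for the first two moments of `x` and `y`; a trace like `X_trace` is what those equations are meant to predict.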

  18. Determination of hexaconazole in field samples of an oil palm plantation.

    Science.gov (United States)

    Muhamad, Halimah; Zainol, Maznah; Sahid, Ismail; Abu Seman, Idris

    2012-08-01

    In oil palm plantations, the fungicide hexaconazole is used to control Ganoderma infection that threatens to destroy or compromise the palm. The application of hexaconazole is usually through soil drenching, trunk injection, or a combination of these two methods. It is therefore important to have a method to determine the residual amount of hexaconazole in the field, such as in samples of water, soil, and leaf, to monitor the use and fate of the fungicide in oil palm plantations. This study on the behaviour of hexaconazole in the oil palm agro-environment was carried out at the UKM-MPOB Research Station, Bangi Lama, Selangor. Three experimental plots in this estate with 7-year-old Dura x Pisifera (DxP) palms were selected for the field trial. One plot was sprayed with hexaconazole at the manufacturer's recommended dosage, one at double the recommended dosage, and the third plot was an untreated control. Hexaconazole residues in the soil, leaf, and water were determined before and after fungicide treatment. Soil samples were randomly collected from three locations at different depths (0-50 cm), and soil collected from the same depth was bulked together. Soil, water, and palm leaf samples were collected at -1 (day before treatment), 0 (day of treatment), 1, 3, 7, 14, 21, 70, 90, and 120 days after treatment. Hexaconazole was detected in soil and oil palm leaf, but was not detected in water from the nearby stream. © 2012 John Wiley & Sons, Ltd.

  19. Sign determination of dipolar couplings in field-oriented bicelles by variable angle sample spinning (VASS)

    Energy Technology Data Exchange (ETDEWEB)

    Tian, F.; Losonczi, J.A.; Fischer, M.W.F.; Prestegard, J.H. [University of Georgia, Complex Carbohydrate Research Center (United States)

    1999-10-15

    Residual dipolar couplings are being increasingly used as structural constraints for NMR studies of biomolecules. A problem arises when dipolar coupling contributions are larger than scalar contributions for a given spin pair, as is commonly observed in solid state NMR studies, in that signs of dipolar couplings cannot easily be determined. Here the sign ambiguities of dipolar couplings in field-oriented bicelles are resolved by variable angle sample spinning (VASS) techniques. The director behavior of field-oriented bicelles (DMPC/DHPC, DMPC/CHAPSO) in VASS is studied by 31P NMR. A stable configuration occurs when the spinning angle is smaller than the magic angle, 54.7 deg., and the director (or bicelle normal) of the disks is mainly distributed in a plane perpendicular to the rotation axis. Since the dipolar couplings depend on how the bicelles are oriented with respect to the magnetic field, it is shown that the dipolar interaction can be scaled to the same order as the J-coupling by moving the spinning axis from 0 deg. toward 54.7 deg. Thus the relative sign of dipolar and scalar couplings can be determined.
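
    The angle dependence exploited here is the standard second-Legendre-polynomial scaling of the dipolar interaction under sample spinning, which vanishes at the magic angle and approaches unity at 0 deg.; a small sketch:

    ```python
    import numpy as np

    def dipolar_scaling(theta_deg):
        """P2(cos theta): scaling of the residual dipolar interaction when the
        sample spins about an axis tilted by theta from the magnetic field."""
        c = np.cos(np.radians(theta_deg))
        return 0.5 * (3.0 * c**2 - 1.0)

    magic = np.degrees(np.arccos(1.0 / np.sqrt(3.0)))   # ~54.74 deg
    for th in (0.0, 30.0, 45.0, magic):
        print(f"theta = {th:5.2f} deg -> scaling = {dipolar_scaling(th):+.3f}")
    ```

    Moving the spinning axis from 0 deg. toward the magic angle sweeps this factor from 1 to 0, which is how the dipolar term is shrunk to the order of the J-coupling.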

  20. Integrating field sampling, geostatistics and remote sensing to map wetland vegetation in the Pantanal, Brazil

    Science.gov (United States)

    Arieira, J.; Karssenberg, D.; de Jong, S. M.; Addink, E. A.; Couto, E. G.; Nunes da Cunha, C.; Skøien, J. O.

    2011-03-01

    Development of efficient methodologies for mapping wetland vegetation is of key importance to wetland conservation. Here we propose the integration of a number of statistical techniques, in particular cluster analysis, universal kriging and error propagation modelling, to combine observations from remote sensing and field sampling for mapping vegetation communities and estimating uncertainty. The approach results in seven vegetation communities with a known floral composition that can be mapped over large areas using remotely sensed data. The relationships between remotely sensed data and vegetation patterns, captured in four factorial axes, were described using multiple linear regression models. These were then used in a universal kriging procedure to reduce the mapping uncertainty. Cross-validation procedures and Monte Carlo simulations were used to quantify the uncertainty in the resulting map. Cross-validation showed that classification accuracy varies according to community type, as a result of sampling density and configuration. A map of uncertainty derived from Monte Carlo simulations revealed significant spatial variation in classification, but this had little impact on the proportion and arrangement of the communities observed. These results suggest that the map could be improved by increasing the number of field observations of those communities with a scattered and small patch size distribution, or by including a larger number of digital images as explanatory variables in the model. Comparison of the resulting plant community map with a flood duration map revealed that flooding duration is an important driver of vegetation zonation. This mapping approach is able to integrate field point data and high-resolution remote-sensing images, providing a new basis to map wetland vegetation and allowing its future application in habitat management, conservation assessment and long-term ecological monitoring in wetland landscapes.
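
    An ordinary-kriging sketch (a simplification: the paper uses universal kriging with a regression-derived drift) showing the estimator, its variance, and exact interpolation at a sample point, with an assumed exponential covariance model and toy field observations:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def exp_cov(h, sill=1.0, corr_len=10.0):
        """Exponential covariance model (parameters assumed for illustration)."""
        return sill * np.exp(-h / corr_len)

    pts = rng.uniform(0.0, 50.0, size=(30, 2))             # field sample sites
    z = np.sin(pts[:, 0] / 8.0) + 0.1 * rng.normal(size=30)

    D = np.linalg.norm(pts[:, None] - pts[None], axis=2)   # pairwise distances
    n = len(pts)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(D)
    A[-1, -1] = 0.0          # Lagrange row/column enforcing unbiasedness

    def krige(x0):
        """Ordinary kriging estimate and variance at location x0."""
        b = np.append(exp_cov(np.linalg.norm(pts - x0, axis=1)), 1.0)
        w = np.linalg.solve(A, b)
        return w[:n] @ z, exp_cov(0.0) - w @ b
    ```

    The kriging variance returned here is what a Monte Carlo uncertainty analysis like the one in the abstract propagates into per-pixel classification uncertainty.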

  1. Chemical Transformations Approaching Chemical Accuracy via Correlated Sampling in Auxiliary-Field Quantum Monte Carlo.

    Science.gov (United States)

    Shee, James; Zhang, Shiwei; Reichman, David R; Friesner, Richard A

    2017-06-13

    The exact and phaseless variants of auxiliary-field quantum Monte Carlo (AFQMC) have been shown to be capable of producing accurate ground-state energies for a wide variety of systems, including those which exhibit substantial electron correlation effects. The computational cost of performing these calculations has to date been relatively high, impeding many important applications of these approaches. Here we present a correlated sampling methodology for AFQMC which relies on error cancellation to dramatically accelerate the calculation of energy differences of relevance to chemical transformations. In particular, we show that our correlated sampling-based AFQMC approach is capable of calculating redox properties, deprotonation free energies, and hydrogen abstraction energies in an efficient manner without sacrificing accuracy. We validate the computational protocol by calculating the ionization potentials and electron affinities of the atoms contained in the G2 test set and then proceed to utilize a composite method, which treats fixed-geometry processes with correlated sampling-based AFQMC and relaxation energies via MP2, to compute the ionization potential, deprotonation free energy, and the O-H bond dissociation energy of methanol, all to within chemical accuracy. We show that the efficiency of correlated sampling relative to uncorrelated calculations increases with system and basis set size and that correlated sampling greatly reduces the required number of random walkers to achieve a target statistical error. This translates to CPU-time speed-up factors of 55, 25, and 24 for the ionization potential of the K atom, the deprotonation of methanol, and hydrogen abstraction from the O-H bond of methanol, respectively. We conclude with a discussion of further efficiency improvements that may open the door to the accurate description of chemical processes in complex systems.
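
    The variance-cancellation mechanism behind correlated sampling can be demonstrated with a generic toy estimator of a small "energy" difference, using shared versus independent random samples (this illustrates the statistical principle only, not AFQMC itself; the two integrand forms are assumed):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, trials = 400, 300

    # Two toy "systems" whose mean difference is wanted; they differ by a
    # small perturbation, loosely mimicking an atom versus its ion.
    f = lambda x: x**2
    g = lambda x: x**2 + 0.1 * x + 0.05        # true mean difference: 0.05

    d_corr, d_uncorr = [], []
    for _ in range(trials):
        x = rng.normal(size=n)     # shared walkers: errors cancel in g - f
        y = rng.normal(size=n)     # independent walkers for the naive route
        d_corr.append(np.mean(g(x) - f(x)))
        d_uncorr.append(np.mean(g(x)) - np.mean(f(y)))

    v_corr, v_uncorr = np.var(d_corr), np.var(d_uncorr)
    ```

    Because the large common part of the integrand cancels sample-by-sample, the correlated estimator needs far fewer walkers for the same statistical error, which is the source of the speed-up factors quoted above.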

  2. Noisy mean field game model for malware propagation in opportunistic networks

    KAUST Repository

    Tembine, Hamidou

    2012-01-01

    In this paper we present analytical mean field techniques that can be used to better understand the behavior of malware propagation in opportunistic large networks. We develop a modeling methodology based on stochastic mean field optimal control that is able to capture many aspects of the problem, especially the impact of the control and heterogeneity of the system on the spreading characteristics of malware. The stochastic large process characterizing the evolution of the total number of infected nodes is examined with a noisy mean field limit and compared to a deterministic one. The stochastic nature of the wireless environment makes stochastic approaches more realistic for such types of networks. By introducing control strategies, we show that the fraction of infected nodes can be maintained below some threshold. In contrast to most of the existing results on mean field propagation models, which focus on deterministic equations, we show that the mean field limit is stochastic if the second moment of the number of object transitions per time slot is unbounded with the size of the system. This allows us to compare one path of the fraction of infected nodes with the stochastic trajectory of its mean field limit. In order to take into account the heterogeneity of opportunistic networks, the analysis is extended to multiple types of nodes. Our numerical results show that the heterogeneity can help to stabilize the system. We verify the results through simulation showing how to obtain useful approximations in the case of very large systems. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
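
    The deterministic skeleton of such a model is an SIS-type mean-field equation; a minimal sketch showing how a control rate lowers the endemic infected fraction (all rates are assumed, and the noisy corrections discussed in the abstract are omitted):

    ```python
    # Mean-field SIS-type malware model: di/dt = beta*i*(1-i) - (gamma + u)*i,
    # where u is the control effort (e.g. patching/quarantine rate).
    beta, gamma, u = 0.6, 0.2, 0.3   # infection, recovery, control (assumed)
    dt, T = 0.01, 100.0

    def simulate(control):
        i = 0.05                     # initial infected fraction
        for _ in range(int(T / dt)):
            i += (beta * i * (1 - i) - (gamma + control) * i) * dt
        return i

    i_free = simulate(0.0)   # uncontrolled endemic level: 1 - gamma/beta
    i_ctrl = simulate(u)     # control pushes the endemic level down
    ```

    Raising the effective removal rate `gamma + u` above `beta` would drive the infected fraction to zero; below that, the control still keeps the fraction under a chosen threshold, which is the regime the abstract analyzes.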

  3. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    CERN Document Server

    Sochi, Taha

    2014-01-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton, and Global) are investigated in conjunction with the energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all these algorithms for all these types of fluid agree very well with the analytically derived solutions obtained from the traditional methods based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of Computational Fluid Dynamics for solving the flow fields in tubes and networks for various types of Newtoni...
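
    For a Newtonian fluid the energy-minimization route reduces to a quadratic program whose KKT system can be solved directly; the sketch below checks it against the conventional Kirchhoff/nodal-pressure solution on a three-duct toy network (resistances and the injected flow are assumed):

    ```python
    import numpy as np

    # Triangle network: node 0 injects unit flow, node 2 withdraws it.
    edges = [(0, 1), (1, 2), (0, 2)]
    R = np.array([1.0, 2.0, 4.0])        # Poiseuille resistances (assumed)
    inflow = np.array([1.0, 0.0, -1.0])
    n_e = len(edges)

    # Incidence matrix: +1 where an edge leaves a node, -1 where it enters.
    A = np.zeros((3, n_e))
    for e, (i, j) in enumerate(edges):
        A[i, e], A[j, e] = 1.0, -1.0

    # Energy route: minimize total dissipation sum(R q^2) s.t. A q = inflow.
    K = np.block([[2 * np.diag(R), A.T], [A, np.zeros((3, 3))]])
    rhs = np.concatenate([np.zeros(n_e), inflow])
    q_energy = np.linalg.lstsq(K, rhs, rcond=None)[0][:n_e]

    # Conventional route: nodal pressures from Kirchhoff's laws (node 2 grounded).
    G = np.zeros((3, 3))
    for e, (i, j) in enumerate(edges):
        g = 1.0 / R[e]
        G[i, i] += g; G[j, j] += g; G[i, j] -= g; G[j, i] -= g
    p = np.zeros(3)
    p[:2] = np.linalg.solve(G[:2, :2], inflow[:2])
    q_kirchhoff = np.array([(p[i] - p[j]) / R[e]
                            for e, (i, j) in enumerate(edges)])
    ```

    The two flow vectors coincide, which is the linear-fluid special case of the equivalence the paper demonstrates for general nonlinear fluids via numerical optimizers.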

  4. Social Network Type and Subjective Well-Being in a National Sample of Older Americans

    Science.gov (United States)

    Litwin, Howard; Shiovitz-Ezra, Sharon

    2011-01-01

    Purpose: The study considers the social networks of older Americans, a population for whom there have been few studies of social network type. It also examines associations between network types and well-being indicators: loneliness, anxiety, and happiness. Design and Methods: A subsample of persons aged 65 years and older from the first wave of…

  5. Focussed ion beam thin sample microanalysis using a field emission gun electron probe microanalyser

    Science.gov (United States)

    Kubo, Y.

    2018-01-01

    Field emission gun electron probe microanalysis (FEG-EPMA) in conjunction with wavelength-dispersive X-ray spectrometry using a low acceleration voltage (V_acc) allows elemental analysis with sub-micrometre lateral spatial resolution (SR). However, this degree of SR does not necessarily meet the requirements associated with increasingly miniaturised devices. Another challenge related to performing FEG-EPMA with a low V_acc is that the accuracy of quantitative analyses is adversely affected, primarily because low energy X-ray lines such as the L- and M-lines must be employed and due to the potential of line interference. One promising means of obtaining high SR with FEG-EPMA is to use thin samples together with high V_acc values. This mini-review covers the basic principles of thin-sample FEG-EPMA and describes an application of this technique to the analysis of optical fibres. Outstanding issues related to this technique that must be addressed are also discussed, which include the potential for electron beam damage during analysis of insulating materials and the development of methods to use thin samples for quantitative analysis.

  6. Vesicular exanthema of swine virus: isolation and serotyping of field samples.

    Science.gov (United States)

    Edwards, J F; Yedloutschnig, R J; Dardiri, A H; Callis, J J

    1987-01-01

    Virus isolation was attempted from 262 field samples of vesicular material collected during the outbreaks of vesicular exanthema of swine in the U.S.A. from 1952-54. Using primary swine kidney culture, viral cytopathogenic agents were isolated from 76.3% of the samples. However, an overall recovery rate of 82.1% was obtained after samples negative in tissue culture were inoculated intradermally in susceptible swine. All vesicular exanthema of swine virus isolates were identified as serotype B51 using complement fixation and serum neutralization tests. Two isolates did not react with antisera to known vesicular agents of swine and failed to produce vesicles or clinical signs of disease upon inoculation in swine. One vesicular exanthema of swine virus isolate from tissue of equine origin was pathogenic for swine but produced limited vesiculation at the site of intradermalingual inoculation in the tongue of a pony infected experimentally. Type B51 virus was reisolated from lesions produced in the pony and the pony became seropositive for virus type B51. PMID:3651889

  7. A stochastic-field description of finite-size spiking neural networks.

    Science.gov (United States)

    Dumont, Grégory; Payeur, Alexandre; Longtin, André

    2017-08-01

    Neural network dynamics are governed by the interaction of spiking neurons. Stochastic aspects of single-neuron dynamics propagate up to the network level and shape the dynamical and informational properties of the population. Mean-field models of population activity disregard the finite-size stochastic fluctuations of network dynamics and thus offer a deterministic description of the system. Here, we derive a stochastic partial differential equation (SPDE) describing the temporal evolution of the finite-size refractory density, which represents the proportion of neurons in a given refractory state at any given time. The population activity, the density of active neurons per unit time, is easily extracted from this refractory density. The SPDE includes finite-size effects through a two-dimensional Gaussian white noise that acts both in time and along the refractory dimension. For an infinite number of neurons the standard mean-field theory is recovered. A discretization of the SPDE along its characteristic curves allows direct simulations of the activity of large but finite spiking networks; this constitutes the main advantage of our approach. Linearizing the SPDE with respect to the deterministic asynchronous state allows the theoretical investigation of finite-size activity fluctuations. In particular, analytical expressions for the power spectrum and autocorrelation of activity fluctuations are obtained. Moreover, our approach can be adapted to incorporate multiple interacting populations and quasi-renewal single-neuron dynamics.
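
    The mean-field stationary state that the SPDE perturbs can be checked against a direct finite-size simulation of refractory, constant-hazard neurons; a sketch with assumed parameters (not the paper's characteristic-curve discretization):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N, dt, T = 2000, 0.001, 20.0
    lam, t_ref = 20.0, 0.005       # escape rate (1/s) and refractory period (s)
    steps = int(T / dt)

    since = rng.exponential(1.0 / lam, N)   # time since each neuron's last spike
    act = np.empty(steps)
    for k in range(steps):
        ready = since >= t_ref                      # out of the refractory state
        fire = ready & (rng.random(N) < lam * dt)   # stochastic firing
        since += dt
        since[fire] = 0.0
        act[k] = fire.sum() / (N * dt)              # population activity (1/s)

    A_mf = 1.0 / (t_ref + 1.0 / lam)   # mean-field stationary activity
    A_sim = act.mean()
    ```

    For finite N, `act` fluctuates around `A_mf` with variance shrinking as 1/N; the SPDE in the abstract gives those fluctuations analytically (power spectrum and autocorrelation) instead of by simulation.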

  8. Near-field antenna testing using the Hewlett Packard 8510 automated network analyzer

    Science.gov (United States)

    Kunath, Richard R.; Garrett, Michael J.

    1990-01-01

    Near-field antenna measurements were made using a Hewlett-Packard 8510 automated network analyzer. This system features measurement sensitivity better than -90 dBm, at measurement speeds of one data point per millisecond in the fast data acquisition mode. The system was configured using external, even harmonic mixers and a fiber optic distributed local oscillator signal. Additionally, the time domain capability of the HP8510, made it possible to generate far-field diagnostic results immediately after data acquisition without the use of an external computer.

  9. Utilizing neural networks in magnetic media modeling and field computation: A review

    OpenAIRE

    Amr A. Adly; Abd-El-Hafiz, Salwa K.

    2013-01-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in ma...

  10. Synchronization of Hierarchical Time-Varying Neural Networks Based on Asynchronous and Intermittent Sampled-Data Control.

    Science.gov (United States)

    Xiong, Wenjun; Patel, Ragini; Cao, Jinde; Zheng, Wei Xing

    In this brief, our purpose is to apply asynchronous and intermittent sampled-data control methods to achieve the synchronization of hierarchical time-varying neural networks. The asynchronous and intermittent sampled-data controllers are proposed for two reasons: 1) the controllers may not transmit the control information simultaneously and 2) the controllers cannot always exist at any time. The synchronization is then discussed for a kind of hierarchical time-varying neural network based on the asynchronous and intermittent sampled-data controllers. Finally, simulation results are given to illustrate the usefulness of the developed criteria.
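
    A scalar toy version of intermittent sampled-data synchronization control, with a zero-order hold of the sampled error and a duty cycle during which the controller "exists" (the plant, gain, and periods are assumed; the paper's networks are far more general):

    ```python
    # Drive-response pair x' = a x, y' = a y + u with an unstable drift a > 0.
    a, k_gain = 0.5, 5.0      # drift and feedback gain (assumed values)
    dt = 0.001
    nh = 50                   # controller sampling period: 50 steps = 0.05 s
    n_on = 30                 # controller active for 30 of every 50 steps

    def run(controlled, T=10.0):
        x, y = 1.0, -1.0      # drive and response states
        e_hold = 0.0          # zero-order hold of the sampled error
        for n in range(int(T / dt)):
            if controlled and n % nh == 0:
                e_hold = y - x                 # sample the synchronization error
            on = controlled and (n % nh) < n_on
            u = -k_gain * e_hold if on else 0.0
            x += a * x * dt
            y += (a * y + u) * dt
        return abs(y - x)

    err_ctrl = run(True)      # error contracts each sampling period
    err_free = run(False)     # error diverges with the unstable drift
    ```

    The synchronization criteria in the brief essentially bound how large the sampling period and the off-fraction may be, relative to the instability, before the per-period contraction seen here is lost.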

  11. Modeling multiple time scale firing rate adaptation in a neural network of local field potentials.

    Science.gov (United States)

    Lundstrom, Brian Nils

    2015-02-01

    In response to stimulus changes, the firing rates of many neurons adapt, such that stimulus change is emphasized. Previous work has emphasized that rate adaptation can span a wide range of time scales and produce time scale invariant power law adaptation. However, neuronal rate adaptation is typically modeled using single time scale dynamics, and constructing a conductance-based model with arbitrary adaptation dynamics is nontrivial. Here, a modeling approach is developed in which firing rate adaptation, or spike frequency adaptation, can be understood as a filtering of slow stimulus statistics. Adaptation dynamics are modeled by a stimulus filter, and quantified by measuring the phase leads of the firing rate in response to varying input frequencies. Arbitrary adaptation dynamics are approximated by a set of weighted exponentials with parameters obtained by fitting to a desired filter. With this approach it is straightforward to assess the effect of multiple time scale adaptation dynamics on neural networks. To demonstrate this, single time scale and power law adaptation were added to a network model of local field potentials. Rate adaptation enhanced the slow oscillations of the network and flattened the output power spectrum, dampening intrinsic network frequencies. Thus, rate adaptation may play an important role in network dynamics.
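
    The fitting step described, approximating a power-law adaptation filter by a set of weighted exponentials, can be sketched with a least-squares fit in relative error (the time constants, their spacing, and the fitted range are all assumed):

    ```python
    import numpy as np

    # Approximate a power-law adaptation kernel k(t) ~ t^(-0.5) over three
    # decades by a weighted sum of exponentials with log-spaced time constants.
    t = np.logspace(-2, 1, 400)
    target = t ** -0.5

    taus = np.logspace(-2.5, 1.5, 10)
    basis = np.exp(-t[:, None] / taus[None, :])

    # Least squares on rows divided by the target = fit in relative error.
    A = basis / target[:, None]
    w, *_ = np.linalg.lstsq(A, np.ones_like(t), rcond=None)
    approx = basis @ w

    max_rel_err = np.max(np.abs(approx - target) / target)
    ```

    A handful of log-spaced exponentials suffices for a close fit over several decades, which is why the multi-exponential parameterization can emulate power-law (time scale invariant) adaptation in network simulations.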

  12. The interseismic velocity field of the central Apennines from a dense GPS network

    Directory of Open Access Journals (Sweden)

    Alessandro Galvani

    2013-02-01

    Full Text Available Since 1999, we have repeatedly surveyed the central Apennines through a dense survey-style geodetic network, the Central Apennines Geodetic Network (CAGeoNet). CAGeoNet consists of 123 benchmarks distributed over an area of ca. 180 km × 130 km, from the Tyrrhenian coast to the Adriatic coast, with an average inter-site distance of 3 km to 5 km. The network is positioned across the main seismogenic structures of the region that are capable of generating destructive earthquakes. Here, we show the horizontal GPS velocity field of both CAGeoNet and continuous GPS stations in this region, as estimated from the position–time series in the time span from 1999 to 2007. We analyzed the data using both the Bernese and GAMIT software, rigorously combining the two solutions to obtain a validated result. Then, we analyzed the strain-rate field, which shows a region of extension along the axis of the Apennine chain, with values from 2 × 10⁻⁹ yr⁻¹ to 66 × 10⁻⁹ yr⁻¹, and a relative minimum of ca. 20 × 10⁻⁹ yr⁻¹ located in the L'Aquila basin area. Our velocity field represents an improved estimation of the ongoing elastic interseismic deformation of the central Apennines, and in particular relating to the area of the L'Aquila earthquake of April 6, 2009.
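
    Estimating a uniform strain-rate tensor from a velocity field like this one is a least-squares fit of a velocity gradient to the station velocities; a synthetic sketch with assumed values of the same order as those quoted:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Synthetic network: 25 stations in a 180 km x 130 km box subject to a
    # uniform horizontal velocity gradient (toy values, for illustration).
    xy = rng.uniform([0.0, 0.0], [180e3, 130e3], size=(25, 2))   # metres
    L_true = np.array([[20e-9, 0.0],
                       [0.0, -5e-9]])                            # 1/yr
    v = xy @ L_true.T + rng.normal(0.0, 1e-5, size=(25, 2))      # m/yr + noise

    # Least-squares fit of v = v0 + L x over all stations.
    G = np.column_stack([np.ones(25), xy])
    coef, *_ = np.linalg.lstsq(G, v, rcond=None)   # rows: v0, d/dx, d/dy
    L_est = coef[1:].T                             # velocity gradient tensor
    strain_rate = 0.5 * (L_est + L_est.T)          # symmetric strain rate
    ```

    In practice the gradient is estimated locally (per cell or per triangle of stations) rather than assuming uniform strain over the whole network, but each local solve has this form.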

  13. The complex interplay of social networks, geography and HIV risk among Malaysian Drug Injectors: Results from respondent-driven sampling.

    Science.gov (United States)

    Zelenev, Alexei; Long, Elisa; Bazazi, Alexander R; Kamarulzaman, Adeeba; Altice, Frederick L

    2016-11-01

    HIV is primarily concentrated among people who inject drugs (PWID) in Malaysia, where HIV prevention and treatment coverage is currently inadequate. To improve the targeting of interventions, we examined HIV clustering and the role that social networks and geographical distance play in influencing HIV transmission among PWID. Data were derived from a respondent-driven sampling (RDS) survey of 460 PWID in greater Kuala Lumpur collected during 2010. Analysis focused on socio-demographic, clinical, behavioural, and network information. Spatial probit models were developed based on a distinction between the influence of peers (individuals nominated through a recruitment network) and neighbours (residing a close distance to the individual). The models were expanded to account for the potential influence of the network formation. Recruitment patterns of HIV-infected PWID clustered both spatially and across the recruitment networks. In addition, HIV-infected PWID were more likely to have peers and neighbours who were themselves HIV-infected and lived nearby, consistent with network formation and sero-sorting. The relationship between HIV status across networks and space in Kuala Lumpur underscores the importance of these factors for surveillance and prevention strategies, which need to be more closely integrated. RDS can be applied to identify injection network structures, and this provides an important mechanism for improving public health surveillance, accessing high-risk populations, and implementing risk-reduction interventions to slow HIV transmission. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Comparison of aerosol backscatter and wind field estimates from the REAL and the SAMPLE

    Science.gov (United States)

    Mayor, Shane D.; Dérian, Pierre; Mauzey, Christopher F.; Spuler, Scott M.; Ponsardin, Patrick; Pruitt, Jeff; Ramsey, Darrell; Higdon, Noah S.

    2015-09-01

    Although operating at the same near-infrared 1.5-μm wavelength, the Raman-shifted Eye-safe Aerosol Lidar (REAL) and the Scanning Aerosol Micro-Pulse Lidar-Eye-safe (SAMPLE) are very different in how they generate and detect laser radiation. We present results from an experiment where the REAL and the SAMPLE were operated side-by-side in Chico, California, in March of 2015. During the non-continuous, eleven-day test period, the SAMPLE instrument was operated at maximum pulse repetition frequency (15 kHz) and integrated over the interpulse period of the REAL (0.1 s). Operation at the high pulse repetition frequency resulted in second-trip echoes that contaminated portions of the data. The performance of the SAMPLE instrument varied with background brightness--as expected with a photon-counting receiver--yet showed equal or larger backscatter intensity signal-to-noise ratio throughout the intercomparison experiment. We show that a modest low-pass filter or smoothing applied to the REAL raw waveforms (which have 5x higher range resolution) results in significant increases in raw signal-to-noise ratio and image signal-to-noise ratio--a measure of coherent aerosol feature content in the images resulting from the scans. Examples of wind fields and time series of wind estimates from both systems are presented. We conclude by reviewing the advantages and disadvantages of each system and sketch a plan for future research and development activities to optimize the design of future systems.
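    The effect of a modest low-pass filter on raw-waveform signal-to-noise ratio can be illustrated with synthetic data; the waveform shape, noise level, and 5-point moving average below are assumptions for the example, not REAL system parameters.

```python
# Sketch: a short moving average suppresses uncorrelated shot noise while
# leaving a broad aerosol return nearly intact, raising the SNR.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = np.linspace(0.0, 1.0, n)
signal = np.exp(-((x - 0.5) ** 2) / 0.01)     # synthetic broad aerosol return
noise = rng.normal(0.0, 0.5, n)               # assumed uncorrelated noise
raw = signal + noise

def snr(y, truth):
    """Signal variance over residual-noise variance."""
    return np.var(truth) / np.var(y - truth)

k = 5                                          # 5-point moving-average window
smoothed = np.convolve(raw, np.ones(k) / k, mode="same")

# Averaging k uncorrelated samples cuts noise variance roughly k-fold,
# so snr(smoothed, signal) should exceed snr(raw, signal).
```

    The trade-off, as in the abstract, is range resolution: the filter widens sharp features, which is acceptable only while the features of interest remain much broader than the window.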

  15. A Quantitative Approach for Collocating NEON's Sensor-Based Ecological Measurements and in-situ Field Sampling and Observations

    Science.gov (United States)

    Zulueta, R. C.; Metzger, S.; Ayres, E.; Luo, H.; Meier, C. L.; Barnett, D.; Sanclements, M.; Elmendorf, S.

    2013-12-01

    The National Ecological Observatory Network (NEON) is a continental-scale research platform currently in development to assess the causes of ecological change and biological responses to change across a projected 30-year timeframe. A suite of standardized sensor-based measurements (i.e., Terrestrial Instrument System (TIS) measurements) and in-situ field sampling and observations (i.e., Terrestrial Observation System (TOS) activities) will be conducted across 20 ecoclimatic domains in the U.S. where NEON is establishing 60 terrestrial research sites. NEON's TIS measurements and TOS activities are designed to observe the temporal and spatial dynamics of key drivers and ecological processes and responses to change within each of the 60 terrestrial research sites. The TIS measurements are non-destructive and designed to provide in-situ, continuous, and areally integrated observations of the surrounding ecosystem and environment, while TOS sampling and observation activities are designed to encompass a hierarchy of measurable biological states and processes including diversity, abundance, phenology, demography, infectious disease prevalence, ecohydrology, and biogeochemistry. To establish valid relationships between these drivers and site-specific responses, two conflicting requirements must be fulfilled: (i) both types of observations shall be representative of the same ecosystem, and (ii) they shall not significantly influence one another. Here we outline the theoretical background and algorithmic process for determining areas of mutual representativeness and exclusion around NEON's TIS measurements and develop a procedure that quantitatively optimizes this trade-off through: (i) quantifying the source area distributions of TIS measurements, (ii) determining the ratio of user-defined impact threshold to effective impact area for different TOS activities, and (iii) determining the range of feasible distances between TIS locations and TOS activities.
This approach

  16. Hazard surveillance for workplace magnetic fields. 1: Walkaround sampling method for measuring ambient field magnitude; 2: Field characteristics from waveform measurements

    Energy Technology Data Exchange (ETDEWEB)

    Methner, M.M.; Bowman, J.D.

    1998-03-01

    Recent epidemiologic research has suggested that exposure to extremely low frequency (ELF) magnetic fields (MF) may be associated with leukemia, brain cancer, spontaneous abortions, and Alzheimer's disease. A walkaround sampling method for measuring ambient ELF-MF levels was developed for use in conducting occupational hazard surveillance. This survey was designed to determine the range of MF levels at different industrial facilities so they could be categorized by MF levels and identified for possible subsequent personal exposure assessments. Industries were selected based on their annual electric power consumption in accordance with the hypothesis that large power consumers would have higher ambient MFs when compared with lower power consumers. Sixty-two facilities within thirteen 2-digit Standard Industrial Classifications (SIC) were selected based on their willingness to participate. A traditional industrial hygiene walkaround survey was conducted to identify MF sources, with a special emphasis on work stations.

  17. Isotachophoretic phenomena in electric field gradient focusing: perspectives for sample preparation and bioassays.

    Science.gov (United States)

    Quist, Jos; Vulto, Paul; Hankemeier, Thomas

    2014-05-06

    Isotachophoresis (ITP) and electric field gradient focusing (EFGF) are two powerful approaches for simultaneous focusing and separation of charged compounds. Remarkably, in many EFGF methods, isotachophoretic hallmarks have been found, including observations of plateau concentrations and contiguous analyte bands. We discuss the similarities between ITP and EFGF and describe promising possibilities to transfer the functionality and applications developed on one platform to other platforms. Of particular importance is the observation that single-electrolyte isotachophoretic separations with tunable ionic mobility window can be performed, as is illustrated with the example of depletion zone isotachophoresis (dzITP). By exploiting the rapid developments in micro- and nanofluidics, many interesting combinations of ITP and EFGF features can be achieved, yielding powerful analytical platforms for sample preparation, biomarker discovery, molecular interaction assays, drug screening, and clinical diagnostics.

  18. FIELD-DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    Energy Technology Data Exchange (ETDEWEB)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-09-12

    Methodology and field-deployable tools (test kits) to analyze the chemical and microbiological condition of aqueous spent fuel storage basins and determine the oxide thickness on the spent fuel basin materials were developed to assess the corrosion potential of a basin. This assessment can then be used to determine the amount of time fuel has spent in a storage basin, to ascertain whether the operation of the reactor and storage basin is consistent with safeguard declarations or expectations, and to assist in evaluating general storage basin operations. The test kit was developed based on the identification of key physical, chemical, and microbiological parameters identified through a review of the scientific and basin operations literature. The parameters were used to design bench-scale test cells for additional corrosion analyses, and tools were then purchased to analyze the key parameters. The tools were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The sampling kit consisted of a total organic carbon analyzer, a YSI multiprobe, and a thickness probe. The tools were field tested to determine their ease of use and reliability and to assess the quality of data each tool could provide. Characterization confirmed that the L-Area basin is a well-operated facility with low corrosion potential.

  19. Sediment grain size estimation using airborne remote sensing, field sampling, and robust statistic.

    Science.gov (United States)

    Castillo, Elena; Pereda, Raúl; Luis, Julio Manuel de; Medina, Raúl; Viguri, Javier

    2011-10-01

    Remote sensing has been used since the 1980s to study parameters related to coastal zones, but it was not until the beginning of the twenty-first century that imagery with good temporal and spectral resolution could be acquired. This has encouraged the development of reliable imagery acquisition systems that consider remote sensing as a water management tool. Nevertheless, the spatial resolution that such systems provide is not adapted to coastal studies. This article introduces a new methodology for estimating the most fundamental physical property of intertidal sediment, the grain size, in coastal zones. The study combines hyperspectral information (CASI-2 flight), robust statistics, and simultaneous field work (chemical and radiometric sampling) performed over Santander Bay, Spain. Field data acquisition was used to build a spectral library in order to study different atmospheric correction algorithms for CASI-2 data and to develop algorithms to estimate grain size in an estuary. Two robust estimation techniques (the MVE and MCD multivariate M-estimators of location and scale) were applied to CASI-2 imagery, and the results showed that robust adjustments give acceptable and meaningful algorithms. These adjustments gave the following estimated R² values: 0.93 for the sandy loam contribution, 0.94 for the silty loam, and 0.67 for the clay loam. Robust statistics are a powerful tool for large datasets.
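    The value of robust location estimates on outlier-contaminated data can be illustrated with the simple coordinate-wise median standing in for the MVE/MCD M-estimators used in the study; the two-band data below are synthetic.

```python
# Sketch: classical vs. robust location estimates under contamination.
# A robust estimator ignores a minority of gross outliers; the sample
# mean does not.
import numpy as np

rng = np.random.default_rng(1)
inliers = rng.normal(loc=[10.0, 5.0], scale=0.5, size=(200, 2))
outliers = np.full((20, 2), [60.0, -40.0])    # ~9% gross contamination
data = np.vstack([inliers, outliers])

classical = data.mean(axis=0)       # pulled toward the outliers
robust = np.median(data, axis=0)    # stays near the inlier centre (10, 5)
```

    MVE and MCD go further than the median by estimating a robust covariance as well, which is what makes them usable for multivariate regression adjustments on imagery.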

  20. Efficient time-sampling method in Coulomb-corrected strong-field approximation.

    Science.gov (United States)

    Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You

    2016-11-01

    One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.

  1. Social networks and alcohol use disorders: findings from a nationally representative sample

    Science.gov (United States)

    Mowbray, Orion; Quinn, Adam; Cranford, James A.

    2014-01-01

    Background While some argue that social network ties of individuals with alcohol use disorders (AUD) are robust, there is evidence to suggest that individuals with AUDs have few social network ties, which are a known risk factor for health and wellness. Objectives Social network ties to friends, family, co-workers and communities of individuals are compared among individuals with a past-year diagnosis of alcohol dependence or alcohol abuse to individuals with no lifetime diagnosis of AUD. Method Respondents from Wave 2 of the National Epidemiologic Survey on Alcohol Related Conditions (NESARC) were assessed for the presence of past-year alcohol dependence or past-year alcohol abuse, social network ties, sociodemographics and clinical characteristics. Results Bivariate analyses showed that both social network size and social network diversity were significantly smaller among individuals with alcohol dependence, compared to individuals with alcohol abuse or no AUD. When social and clinical factors related to AUD status were controlled, multinomial logistic models showed that social network diversity remained a significant predictor of AUD status, while social network size did not differ among AUD groups. Conclusion Social networks of individuals with AUD may differ from those of individuals with no AUD, but this claim is dependent on the specific AUD diagnosis and how social networks are measured. PMID:24405256

  2. From field notes to data portal - An operational QA/QC framework for tower networks

    Science.gov (United States)

    Sturtevant, C.; Hackley, S.; Meehan, T.; Roberti, J. A.; Holling, G.; Bonarrigo, S.

    2016-12-01

    Quality assurance and control (QA/QC) is one of the most important yet challenging aspects of producing research-quality data. This is especially so for environmental sensor networks collecting numerous high-frequency measurement streams at distributed sites. Here, the quality issues are multi-faceted, including sensor malfunctions, unmet theoretical assumptions, and measurement interference from the natural environment. To complicate matters, there are often multiple personnel managing different sites or different steps in the data flow. For large, centrally managed sensor networks such as NEON, the separation of field and processing duties is in the extreme. Tower networks such as Ameriflux, ICOS, and NEON continue to grow in size and sophistication, yet tools for robust, efficient, scalable QA/QC have lagged. Quality control remains a largely manual process relying on visual inspection of the data. In addition, notes of observed measurement interference or visible problems are often recorded on paper without an explicit pathway to data flagging during processing. As such, an increase in network size requires a near-proportional increase in personnel devoted to QA/QC, quickly stressing the human resources available. There is a need for a scalable, operational QA/QC framework that combines the efficiency and standardization of automated tests with the power and flexibility of visual checks, and includes an efficient communication pathway from field personnel to data processors to end users. Here we propose such a framework and an accompanying set of tools in development, including a mobile application template for recording tower maintenance and an R/shiny application for efficiently monitoring and synthesizing data quality issues. This framework seeks to incorporate lessons learned from the Ameriflux community and provide tools to aid continued network advancements.
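    The automated tests such a framework standardizes can be as simple as range and spike checks that emit flags traveling alongside the data; the function names, thresholds, and example values below are illustrative, not NEON's or Ameriflux's actual test suite.

```python
# Sketch: two elementary automated QA/QC tests for a sensor stream,
# producing boolean flags per sample (illustrative thresholds).
import numpy as np

def range_test(values, lo, hi):
    """Flag samples outside physically plausible bounds, and missing data."""
    v = np.asarray(values, dtype=float)
    return (v < lo) | (v > hi) | np.isnan(v)

def spike_test(values, max_step):
    """Flag samples that jump more than max_step from their predecessor."""
    v = np.asarray(values, dtype=float)
    flags = np.zeros(len(v), dtype=bool)
    flags[1:] = np.abs(np.diff(v)) > max_step
    return flags

# Example air-temperature stream with one spike and one gap
temps = [21.1, 21.3, 85.0, 21.2, np.nan, 21.4]
flagged = range_test(temps, -40.0, 60.0) | spike_test(temps, 5.0)
```

    In an operational framework these automated flags would be merged with manually entered field notes (e.g., from the mobile maintenance app) so that end users see a single quality record per sample.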

  3. Quality assurance guidance for field sampling and measurement assessment plates in support of EM environmental sampling and analysis activities

    Energy Technology Data Exchange (ETDEWEB)

    1994-05-01

    This document is one of several guidance documents developed by the US Department of Energy (DOE) Office of Environmental Restoration and Waste Management (EM). These documents support the EM Analytical Services Program (ASP) and are based on applicable regulatory requirements and DOE Orders. They address requirements in DOE Orders by providing guidance that pertains specifically to environmental restoration and waste management sampling and analysis activities. DOE 5700.6C Quality Assurance (QA) defines policy and requirements to establish QA programs ensuring that risks and environmental impacts are minimized and that safety, reliability, and performance are maximized. This is accomplished through the application of effective management systems commensurate with the risks imposed by the facility and the project. Every organization supporting EM's environmental sampling and analysis activities must develop and document a QA program. Management of each organization is responsible for appropriate QA program implementation, assessment, and improvement. The collection of credible and cost-effective environmental data is critical to the long-term success of remedial and waste management actions performed at DOE facilities. Only well established and management supported assessment programs within each EM-support organization will enable DOE to demonstrate data quality. The purpose of this series of documents is to offer specific guidance for establishing an effective assessment program for EM's environmental sampling and analysis (ESA) activities.

  4. Exponentially Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians.

    Science.gov (United States)

    Mandrà, Salvatore; Zhu, Zheng; Katzgraber, Helmut G

    2017-02-17

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited in identifying all degenerate ground-state configurations associated with a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.
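    Fair sampling can be checked empirically by drawing many samples and comparing the frequencies of the known degenerate ground states. The toy biased sampler below merely mimics the exponential suppression described in the abstract; it is not a model of the D-Wave device.

```python
# Sketch: empirical fair-sampling check over a set of known degenerate
# ground states (toy biased sampler with assumed suppression weights).
import random
from collections import Counter

ground_states = ["00", "01", "10", "11"]   # assumed degenerate minima

def biased_sampler(rng):
    # exponentially suppresses some states, as a stand-in for the
    # behaviour reported for naive transverse-field annealing
    return rng.choices(ground_states, weights=[8, 4, 2, 1])[0]

rng = random.Random(0)
counts = Counter(biased_sampler(rng) for _ in range(10000))
ratio = max(counts.values()) / min(counts.values())
# A fair sampler would give ratio ~ 1; a biased one gives ratio >> 1.
```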

  5. Network as transconcept: elements for a conceptual demarcation in the field of public health.

    Science.gov (United States)

    Amaral, Carlos Eduardo Menezes; Bosi, Maria Lúcia Magalhães

    2016-08-22

    The main proposal to set up an articulated mode of operation of health services has been the concept of network, which has been appropriated in different ways in the field of public health, as it is used in other disciplinary fields or even taking it from common sense. Amid the diversity of uses and concepts, we recognize the need for rigorous conceptual demarcation about networks in the field of health. Such concern aims to preserve the strategic potential of this concept in the research and planning in the field, overcoming uncertainties and distortions still observed in its discourse-analytic circulation in public health. To this end, we will introduce the current uses of network in different disciplinary fields, emphasizing dialogues with the field of public health. With this, we intend to stimulate discussions about the development of empirical dimensions and analytical models that may allow us to understand the processes produced within and around health networks.

  6. Social Networks and Risk for Depressive Symptoms in a National Sample of Sexual Minority Youth

    Science.gov (United States)

    Hatzenbuehler, Mark L.; McLaughlin, Katie A.; Xuan, Ziming

    2012-01-01

    The aim of the study was to examine the social networks of sexual minority youths and to determine the associations between social networks and depressive symptoms. Data were obtained from the National Longitudinal Study of Adolescent Health (Add Health), a nationally representative cohort study of American adolescents (N=14,212). Wave 1 (1994–1995) collected extensive information about the social networks of participants through peer nomination inventories, as well as measures of sexual minority status and depressive symptoms. Using social network data, we examined three characteristics of adolescents’ social relationships: (1) social isolation; (2) degree of connectedness; and (3) social status. Sexual minority youths, particularly females, were more isolated, less connected, and had lower social status in peer networks than opposite-sex attracted youths. Among sexual minority male (but not female) youths, greater isolation as well as lower connectedness and status within a network were associated with greater depressive symptoms. Moreover, greater isolation in social networks partially explained the association between sexual minority status and depressive symptoms among males. Finally, a significant 3-way interaction indicated that the association between social isolation and depression was stronger for sexual minority male youths than non-minority youths and sexual minority females. These results suggest that the social networks in which sexual minority male youths are embedded may confer risk for depressive symptoms, underscoring the importance of considering peer networks in both research and interventions targeting sexual minority male adolescents. PMID:22771037

  7. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach

    Directory of Open Access Journals (Sweden)

    Gabriele Ferri

    2015-12-01

    Full Text Available This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided
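    A minimal sketch of a simulated-annealing optimizer of the kind SoDDS relies on; the 1-D objective, cooling schedule, and step size below are illustrative stand-ins for the actual Aη criterion and the glider-path constraints.

```python
# Sketch: generic simulated annealing on a multimodal 1-D objective
# (a stand-in for minimizing a sampling criterion over glider paths).
import math
import random

def anneal(objective, x0, steps=5000, t0=1.0, seed=0):
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9       # linear cooling schedule
        cand = x + rng.uniform(-0.5, 0.5)      # local perturbation
        fc = objective(cand)
        # accept downhill moves always; uphill with Boltzmann probability
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# Example: a quadratic with an oscillatory term creating local minima
best, fbest = anneal(lambda x: (x - 3.0) ** 2 + math.sin(5.0 * x), -5.0)
```

    The uphill-acceptance step is what lets the optimizer escape local minima created by path-geometry and reachability constraints, at the cost of needing a tuned cooling schedule.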

  8. Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach.

    Science.gov (United States)

    Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto

    2015-12-26

    This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download the ocean fields data from MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So

  9. Development of sampling approaches for the determination of the presence of genetically modified organisms at the field level.

    Science.gov (United States)

    Sustar-Vozlic, Jelka; Rostohar, Katja; Blejec, Andrej; Kozjak, Petra; Cergan, Zoran; Meglic, Vladimir

    2010-03-01

    In order to comply with the European Union regulatory threshold for the adventitious presence of genetically modified organisms (GMOs) in food and feed, it is important to trace GMOs from the field. Appropriate sampling methods are needed to accurately predict the presence of GMOs at the field level. A 2-year field experiment with two maize varieties differing in kernel colour was conducted in Slovenia. Based on the results of data mining analyses and modelling, it was concluded that spatial relations between the donor and receptor field were the most important factors influencing the distribution of outcrossing rate (OCR) in the field. An approach was developed for estimating fitting-function parameters from samples taken in the receptor (non-GM) field at two distances from the donor (GM) field (10 m and 25 m), allowing estimation of the OCR (GMO content) across the whole receptor field. Different sampling schemes were tested; a systematic random scheme in rows was proposed for sampling at the two distances to estimate the fitting-function parameters used to determine OCR. The sampling approach had already been validated with other OCR data and was applied in practice in the 2009 harvest in Poland. The developed approach can be used to determine GMO presence at the field level and to make appropriate labelling decisions. Its importance lies in the possibility of addressing other threshold levels besides the currently prescribed labelling threshold of 0.9% for food and feed.
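    The two-distance idea can be sketched by fitting a distance-decay function through samples taken at 10 m and 25 m; the exponential form and the OCR values below are assumptions for illustration, not the study's fitted model.

```python
# Sketch: fit an assumed exponential distance-decay OCR(d) = a * exp(-b d)
# exactly through two (distance, mean OCR) sample points, then use it to
# predict OCR anywhere in the receptor field.
import numpy as np

def fit_two_point(d1, ocr1, d2, ocr2):
    """Solve for (a, b) so the curve passes through both sample points."""
    b = np.log(ocr1 / ocr2) / (d2 - d1)
    a = ocr1 * np.exp(b * d1)
    return a, b

# illustrative mean OCR values at the two prescribed sampling distances
a, b = fit_two_point(10.0, 0.60, 25.0, 0.15)

def predict(d):
    return a * np.exp(-b * d)
```

    In practice the per-distance OCR means would come from the systematic random row samples, and field-level GMO content from integrating the fitted curve over the field geometry.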

  10. Study of brain functional network based on sample entropy of EEG under magnetic stimulation at PC6 acupoint.

    Science.gov (United States)

    Guo, Lei; Wang, Yao; Yu, Hongli; Yin, Ning; Li, Ying

    2014-01-01

    Acupuncture is based on the theory of traditional Chinese medicine, and its therapeutic effectiveness has been proved by clinical practice. However, its mechanism of action is still unclear. Magnetic stimulation at an acupuncture point provides a new means for studying the theory of acupuncture. Based on graph theory, methods for constructing and analyzing complex networks can help to investigate the topology of brain functional networks and understand the working mechanism of the brain. In this study, magnetic stimulation was applied to the Neiguan (PC6) acupoint and the EEG (electroencephalogram) signal was recorded. Using a nonlinear method (sample entropy) and complex network theory, a brain functional network based on the EEG signal under magnetic stimulation at the PC6 acupoint was constructed and analyzed. In addition, the features of the complex network were compared between the quiescent and stimulated states. Our experimental results show that stimulating the PC6 acupoint changes the network topology, strengthens network connectivity, improves the efficiency of information transmission, and enhances the small-world property.
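    Sample entropy can be computed directly from a signal; the sketch below uses a common simplified formulation with conventional parameter choices (m = 2, r = 0.2·SD), which may differ in detail from the study's implementation.

```python
# Sketch: sample entropy (SampEn) of a 1-D signal. Lower values indicate
# more regular, predictable dynamics; higher values indicate complexity.
import numpy as np

def sampen(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()               # conventional tolerance choice
    n = len(x)

    def count_matches(mm):
        # embed the series in mm dimensions and count template pairs
        # whose Chebyshev distance is within tolerance r (no self-matches)
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        total = 0
        for i in range(len(emb) - 1):
            d = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            total += int(np.sum(d <= r))
        return total

    b = count_matches(m)        # matches of length m
    a = count_matches(m + 1)    # matches of length m + 1
    return float(-np.log(a / b))
```

    Per-channel SampEn values can then serve as node features, or pairwise similarity of entropy profiles as edge weights, when building the brain functional network.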

  11. Visualization maps for the evolution of research hotspots in the field of regional health information networks.

    Science.gov (United States)

    Wang, Yanjun; Zheng, Jianzhong; Zhang, Ailian; Zhou, Wei; Dong, Haiyuan

    2017-04-11

    The aim of this study was to reveal research hotspots in the field of regional health information networks (RHINs) and use visualization techniques to explore their evolution over time and differences between countries. We conducted a literature review for a 50-year period and compared the prevalence of certain index terms during the periods 1963-1993 and 1994-2014 and in six countries. We applied keyword frequency analysis, keyword co-occurrence analysis, multidimensional scaling analysis, and network visualization technology. The total number of keywords was found to increase with time. From 1994 to 2014, the research priorities shifted from hospital planning to community health planning. The number of keywords reflecting information-based research increased. The density of the knowledge network increased significantly, and partial keywords condensed into knowledge groups. All six countries focus on keywords including Information Systems; Telemedicine; Information Service; Medical Records Systems, Computerized; Internet; etc.; however, the level of development and some research priorities are different. RHIN research has generally increased in popularity over the past 50 years. The research hotspots are evolving and are at different levels of development in different countries. Knowledge network mapping and perceptual maps provide useful information for scholars, managers, and policy-makers.
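    The keyword co-occurrence step can be sketched directly: count, for each keyword pair, the number of articles in which both terms appear; the article keyword lists below are illustrative.

```python
# Sketch: building a keyword co-occurrence network from per-article
# keyword lists (edge weight = number of articles containing both terms).
from collections import Counter
from itertools import combinations

articles = [
    ["Information Systems", "Telemedicine", "Internet"],
    ["Telemedicine", "Information Service"],
    ["Information Systems", "Telemedicine"],
]

cooc = Counter()
for kws in articles:
    # sorted() gives a canonical key order so (a, b) == (b, a)
    for a, b in combinations(sorted(set(kws)), 2):
        cooc[(a, b)] += 1
```

    The resulting weighted edge list feeds straight into multidimensional scaling or network visualization to reveal knowledge groups.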

  12. Assessing five field sampling methods to monitor Yellowstone National Park's northern ungulate winter range: the advantages and disadvantages of implementing a new sampling protocol

    Science.gov (United States)

    Pamela G. Sikkink; Roy Renkin; Geneva Chong; Art Sikkink

    2013-01-01

    The five field sampling methods tested for this study differed in richness and Simpson's Index values calculated from the raw data. How much the methods differed, and which ones were most similar to each other, depended on which diversity measure and which type of data were used for comparisons. When the number of species (richness) was used as a measure of...

  13. Training Valence, Instrumentality, and Expectancy Scale (T-VIES-it): Factor Structure and Nomological Network in an Italian Sample

    Science.gov (United States)

    Zaniboni, Sara; Fraccaroli, Franco; Truxillo, Donald M.; Bertolino, Marilena; Bauer, Talya N.

    2011-01-01

    Purpose: The purpose of this study is to validate, in an Italian sample, a multidimensional training motivation measure (T-VIES-it) based on expectancy (VIE) theory, and to examine the nomological network surrounding the construct. Design/methodology/approach: Using a cross-sectional design study, 258 public sector employees in Northeast Italy…

  14. A replica exchange transition interface sampling method with multiple interface sets for investigating networks of rare events

    NARCIS (Netherlands)

    Swenson, D.W.H.; Bolhuis, P.G.

    2014-01-01

    The multiple state transition interface sampling (TIS) framework in principle allows the simulation of a large network of complex rare event transitions, but in practice suffers from convergence problems. To improve convergence, we combine multiple state TIS [J. Rogal and P. G. Bolhuis, J. Chem.

  15. Estimating Route Choice Models from Stochastically Generated Choice Sets on Large-Scale Networks Correcting for Unequal Sampling Probability

    DEFF Research Database (Denmark)

    Vacca, Alessandro; Prato, Carlo Giacomo; Meloni, Italo

    2015-01-01

    is the dependency of the parameter estimates from the choice set generation technique. Bias introduced in model estimation has been corrected only for the random walk algorithm, which has problematic applicability to large-scale networks. This study proposes a correction term for the sampling probability of routes...

  16. A Comparison of the Social Networks of Blacks and Whites in a Sample of Elderly in a Southern Border State.

    Science.gov (United States)

    Kernodle, R. Wayne; Kernodle, Ruth L.

    The social network of elderly blacks was compared with whites in a sample of 241 ambulatory persons interviewed in congregate settings in a planning district of a border Southern state. Questions were asked about monthly patterns of social interaction, such as visiting and phone contacts with children, other kin, neighbors, friends, involvement in…

  17. Convolutional Neural Networks with Batch Normalization for Classifying Hi-hat, Snare, and Bass Percussion Sound Samples

    DEFF Research Database (Denmark)

    Gajhede, Nicolai; Beck, Oliver; Purwins, Hendrik

    2016-01-01

    After having revolutionized image and speech processing, convolutional neural networks (CNN) are now starting to become more and more successful in music information retrieval as well. We compare four CNN types for classifying a dataset of more than 3000 acoustic and synthesized samples...

  18. The Use of Social Network Sites by Prospective Physical Education and Sports Teachers (Gazi University Sample)

    Science.gov (United States)

    Yaman, Metin; Yaman, Cetin

    2014-01-01

    Social network sites are widely used by many people nowadays for various aims. Many researches have been done to analyze the usage of these sites in many different settings. In the literature the number of the studies investigating the university students' usage social network sites is limited. This research was carried out to determine the social…

  19. Evaluation of Freshness of Soft Tissue Samples with Optical Coherence Tomography Assisted by Low Frequency Electric Field

    OpenAIRE

    A. Pena; A. Sadovoy; A. Doronin; A. Bykov; I. Meglinski

    2015-01-01

    We present an optical coherence tomography based methodology to determine the freshness of soft tissue samples by evaluating their interaction with a low-frequency electric field. Various biological tissue samples at different stages of freshness were exposed to a low-frequency electric current. The influence of the low-frequency electric field on the tissues was observed and quantified by the double correlation optical coherence tomography (dcOCT) approach developed in house. The quantitative evalua...

  20. Woodbridge research facility remedial investigation/feasibility study. Sampling and analysis plan vol 1: Field sampling plan vol II: Quality assurance project plan. Addendum 1

    Energy Technology Data Exchange (ETDEWEB)

    Wisbeck, D.; Thompson, P.; Williams, T.; Ehlers, M.; Eliass, M.

    1996-09-01

    U.S. Army Woodbridge Research Facility (WRF) was used in the past as a major military communications center and a research and development laboratory where electromagnetic pulse energy was tested on military and other equipment. WRF is presently an inactive facility pursuant to the 1991 Base Realignment and Closure list. Past investigation activities indicate that polychlorinated biphenyl compounds (PCBs) are the primary chemicals of concern. This task calls for provision of the necessary staff and equipment to provide remedial investigation/feasibility study support for the USAEC BRAC Program investigation at WRF. This Sampling and Analysis Plan, Addendum 1, Field Sampling Plan presents the sample location and rationale for additional samples required to complete the RI/FS; and the Quality Assurance Project Plan presents any additional data quality objectives and proposed laboratory methods for chemical analysis of samples.

  1. Interpretation of conduit voltage measurements on the Poloidal Field Insert Sample using the CUDI-CICC numerical code

    NARCIS (Netherlands)

    Ilyin, Y.; Nijhuis, Arend; ten Kate, Herman H.J.

    2006-01-01

    The results of simulations with the CUDI–CICC code on the poloidal field insert sample (PFIS) tested in the SULTAN test facility are presented. The interpretations are based on current distribution analysis from self-field measurements with Hall sensor arrays and current sharing measurements. The

  2. Relative localization in wireless sensor networks for measurement of electric fields under HVDC transmission lines.

    Science.gov (United States)

    Cui, Yong; Wang, Qiusheng; Yuan, Haiwen; Song, Xiao; Hu, Xuemin; Zhao, Luxing

    2015-02-04

    In the wireless sensor networks (WSNs) for electric field measurement system under the High-Voltage Direct Current (HVDC) transmission lines, it is necessary to obtain the electric field distribution with multiple sensors. The location information of each sensor is essential to the correct analysis of measurement results. Compared with the existing approach which gathers the location information by manually labelling sensors during deployment, the automatic localization can reduce the workload and improve the measurement efficiency. A novel and practical range-free localization algorithm for the localization of one-dimensional linear topology wireless networks in the electric field measurement system is presented. The algorithm utilizes unknown nodes' neighbor lists based on the Received Signal Strength Indicator (RSSI) values to determine the relative locations of nodes. The algorithm is able to handle the exceptional situation of the output permutation which can effectively improve the accuracy of localization. The performance of this algorithm under real circumstances has been evaluated through several experiments with different numbers of nodes and different node deployments in the China State Grid HVDC test base. Results show that the proposed algorithm achieves an accuracy of over 96% under different conditions.
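    A toy sketch of the idea of recovering relative positions from signal strength on a one-dimensional deployment (a hypothetical simplification: the paper's algorithm works on RSSI-based neighbor lists and explicitly handles permutation exceptions, which this sketch does not):

```python
import numpy as np

def linear_order_from_rssi(rssi):
    """Recover the relative order of nodes deployed along a line from a
    symmetric RSSI matrix (larger value = stronger signal = closer node).

    Assumes RSSI decreases monotonically with distance; the published
    algorithm is robust to violations of this assumption, this sketch
    is not.
    """
    n = rssi.shape[0]
    masked = rssi.astype(float).copy()
    np.fill_diagonal(masked, np.inf)  # ignore self-links
    # The weakest link in the network joins the two endpoints of the line.
    end_a, _ = np.unravel_index(np.argmin(masked), masked.shape)
    end_a = int(end_a)
    # Every other node is placed by its signal strength to that endpoint.
    others = [j for j in range(n) if j != end_a]
    return [end_a] + sorted(others, key=lambda j: rssi[end_a][j], reverse=True)
```

    With a synthetic matrix rssi[i][j] = -|x_i - x_j| for node positions x on a line, the function returns the node indices ordered from one end of the line to the other (up to overall reversal).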

  4. Observation of soil moisture variability in agricultural and grassland field soils using a wireless sensor network

    Science.gov (United States)

    Priesack, Eckart; Schuh, Max

    2014-05-01

    Soil moisture dynamics is a key factor in the exchange of energy and matter between the land surface and the atmosphere. Long-term observation of temporal and spatial soil moisture variability is therefore important for studying the impacts of climate change on terrestrial ecosystems and their possible feedbacks to the atmosphere. Within the framework of the network of terrestrial environmental observatories TERENO, we installed the SoilNet wireless sensor network (Bogena et al. 2010) in the soils of two fields (ca. 5 ha each) at the research farm Scheyern. The SoilNet in Scheyern consists of 94 sensor units, 45 for the agricultural field site and 49 for the grassland site. Each sensor unit comprises six SPADE sensors, two at each of the depths 10, 30 and 50 cm. The SPADE sensor (sceme.de GmbH, Horn-Bad Meinberg, Germany) consists of a TDT sensor that estimates volumetric soil water content from soil electrical permittivity by sending an electromagnetic signal and measuring its propagation time, which depends on the soil dielectric properties and hence on soil water content. Additionally, the SPADE sensor contains a temperature sensor (DS18B20). First results obtained from the SoilNet measurements at both field sites will be presented and discussed. The observed high temporal and spatial variability will be analysed and related to agricultural management and basic soil properties (bulk density, soil texture, organic matter content and soil hydraulic characteristics).
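    The permittivity-to-water-content step can be sketched with two small conversions (an assumption-laden illustration: the Topp et al. (1980) polynomial is a widely used generic calibration, not necessarily the one implemented in the SPADE sensor firmware):

```python
def permittivity_from_travel_time(t_ns, probe_len_m):
    """Apparent relative permittivity from the one-way electromagnetic
    travel time (ns) along a waveguide of given length (TDT principle)."""
    c = 0.299792458  # speed of light in m/ns
    return (c * t_ns / probe_len_m) ** 2

def topp_water_content(eps):
    """Volumetric water content (m3/m3) from apparent permittivity via
    the empirical Topp et al. (1980) polynomial."""
    return -5.3e-2 + 2.92e-2 * eps - 5.5e-4 * eps ** 2 + 4.3e-6 * eps ** 3
```

    For example, an apparent permittivity of about 20 corresponds to a volumetric water content of roughly 0.35 m3/m3 under this calibration.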

  5. Bayesian Markov Random Field analysis for protein function prediction based on network data.

    Science.gov (United States)

    Kourmpetis, Yiannis A I; van Dijk, Aalt D J; Bink, Marco C A M; van Ham, Roeland C H J; ter Braak, Cajo J F

    2010-02-24

    Inference of protein functions is one of the most important aims of modern biology. To fully exploit the large volumes of genomic data typically produced in modern-day genomic experiments, automated computational methods for protein function prediction are urgently needed. Established methods use sequence or structure similarity to infer functions, but those types of data do not suffice to determine the biological context in which proteins act. Current high-throughput biological experiments produce large amounts of data on the interactions between proteins. Such data can be used to infer interaction networks and to predict the biological process that a protein is involved in. Here, we develop a probabilistic approach to protein function prediction using network data, such as protein-protein interaction measurements. We take a Bayesian approach to an existing Markov Random Field method by performing simultaneous estimation of the model parameters and prediction of protein functions. We use an adaptive Markov Chain Monte Carlo algorithm that leads to more accurate parameter estimates and consequently to improved prediction performance compared to the standard Markov Random Field method. We tested our method using a high-quality S. cerevisiae validation network with 1622 proteins against 90 Gene Ontology terms of different levels of abstraction. Compared to three other protein function prediction methods, our approach shows very good prediction performance. Our method can be directly applied to protein-protein interaction or coexpression networks, but can also be extended to use multiple data sources. We apply our method to physical protein interaction data from S. cerevisiae and provide novel predictions, using 340 Gene Ontology terms, for 1170 unannotated proteins, and we evaluate the predictions using the available literature.
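    The flavor of the MRF approach can be conveyed by a toy autologistic model on an interaction graph, where a Gibbs sampler estimates the posterior probability that each unannotated node carries a given function (a minimal sketch: the paper additionally samples the model parameters, which is omitted here, and the alpha/beta values below are purely illustrative):

```python
import math
import random

def gibbs_mrf_predict(adj, labels, alpha=-1.0, beta=1.0,
                      n_sweeps=2000, burn_in=500, seed=0):
    """Posterior P(label = 1) for unlabeled nodes in an autologistic MRF.

    adj:    dict node -> list of neighbor nodes (undirected graph)
    labels: dict node -> 1, 0, or None (None = unannotated)
    Local model: P(y_i = 1 | neighbors) = sigmoid(alpha + beta * sum of
    neighbor labels); labeled nodes are held fixed.
    """
    rng = random.Random(seed)
    state = {v: (labels[v] if labels[v] is not None else rng.randint(0, 1))
             for v in adj}
    unknown = [v for v in adj if labels[v] is None]
    counts = {v: 0 for v in unknown}
    for sweep in range(n_sweeps):
        for v in unknown:
            field = alpha + beta * sum(state[u] for u in adj[v])
            p1 = 1.0 / (1.0 + math.exp(-field))
            state[v] = 1 if rng.random() < p1 else 0
        if sweep >= burn_in:  # average only post-burn-in samples
            for v in unknown:
                counts[v] += state[v]
    return {v: counts[v] / (n_sweeps - burn_in) for v in unknown}
```

    An unannotated node whose neighbors all carry the function receives a high posterior, while one surrounded by negatives receives a low one, which is the qualitative behavior the MRF formalizes.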

  6. DEVELOPMENT OF METHODOLOGY AND FIELD DEPLOYABLE SAMPLING TOOLS FOR SPENT NUCLEAR FUEL INTERROGATION IN LIQUID STORAGE

    Energy Technology Data Exchange (ETDEWEB)

    Berry, T.; Milliken, C.; Martinez-Rodriguez, M.; Hathcock, D.; Heitkamp, M.

    2012-06-04

    This project developed methodology and field-deployable tools (test kits) to analyze the chemical and microbiological condition of the fuel storage medium and to determine the oxide thickness on the spent fuel basin materials. The overall objective of this project was to determine the amount of time fuel has spent in a storage basin, in order to determine whether the operation of the reactor and storage basin is consistent with safeguard declarations or expectations. This project developed and validated forensic tools that can be used to predict the age and condition of spent nuclear fuels stored in liquid basins based on key physical, chemical and microbiological basin characteristics. Key parameters were identified based on a literature review, the parameters were used to design test cells for corrosion analyses, tools were purchased to analyze the key parameters, and these were used to characterize an active spent fuel basin, the Savannah River Site (SRS) L-Area basin. The key parameters identified in the literature review included chloride concentration, conductivity, and total organic carbon level. Focus was also placed on aluminum-based cladding because of its application to weapons production. The literature review was helpful in identifying important parameters, but relationships between these parameters and corrosion rates were not available. Bench-scale test systems were designed, operated, harvested, and analyzed to determine corrosion relationships between water parameters and water conditions, chemistry and microbiological conditions. The data from the bench-scale system indicated that corrosion rates were dependent on total organic carbon levels and chloride concentrations. The highest corrosion rates were observed in test cells amended with sediment, a large microbial inoculum and an organic carbon source. A complete characterization test kit was field tested to characterize the SRS L-Area spent fuel basin. The sampling kit consisted of a TOC analyzer, a YSI

  7. Sampled-Data Synchronization of Markovian Coupled Neural Networks With Mode Delays Based on Mode-Dependent LKF.

    Science.gov (United States)

    Wang, Junyi; Zhang, Huaguang; Wang, Zhanshan; Liu, Zhenwei

    This paper investigates the sampled-data synchronization problem of Markovian coupled neural networks with mode-dependent interval time-varying delays and aperiodic sampling intervals, based on an enhanced input delay approach. A mode-dependent augmented Lyapunov-Krasovskii functional (LKF) is utilized, which makes the LKF matrices mode-dependent as far as possible. By applying an extended Jensen's integral inequality and Wirtinger's inequality, new delay-dependent synchronization criteria are obtained that fully utilize the upper bound on the variable sampling interval and the sawtooth structure information of the varying input delay. In addition, the desired stochastic sampled-data controllers can be obtained by solving a set of linear matrix inequalities. Finally, two examples are provided to demonstrate the feasibility of the proposed method.

  8. Wide field imaging - I. Applications of neural networks to object detection and star/galaxy classification

    Science.gov (United States)

    Andreon, S.; Gargiulo, G.; Longo, G.; Tagliaferri, R.; Capuano, N.

    2000-12-01

    Astronomical wide-field imaging performed with new large-format CCD detectors poses data reduction problems of unprecedented scale, which are difficult to deal with using traditional interactive tools. We present here NExt (Neural Extractor), a new neural network (NN) based package capable of detecting objects and performing both deblending and star/galaxy classification in an automatic way. Traditionally, in astronomical images, objects are first distinguished from the noisy background by searching for sets of connected pixels having brightnesses above a given threshold; they are then classified as stars or as galaxies through diagnostic diagrams having variables chosen according to the astronomer's taste and experience. In the extraction step, assuming that images are well sampled, NExt requires only the simplest a priori definition of `what an object is' (i.e. it keeps all structures composed of more than one pixel) and performs the detection via an unsupervised NN, approaching detection as a clustering problem that has been thoroughly studied in the artificial intelligence literature. The first part of the NExt procedure consists of an optimal compression of the redundant information contained in the pixels via a mapping from pixel intensities to a subspace individualized through principal component analysis. At magnitudes fainter than the completeness limit, stars are usually almost indistinguishable from galaxies, and therefore the parameters characterizing the two classes do not lie in disconnected subspaces, thus preventing the use of unsupervised methods. We therefore adopted a supervised NN (i.e. a NN that first finds the rules to classify objects from examples and then applies them to the whole data set). In practice, each object is classified depending on its membership of the regions mapping the input feature space in the training set. 
In order to obtain an objective and reliable classification, instead of using an arbitrarily defined set of features

  9. Theoretically informed Monte Carlo simulation of liquid crystals by sampling of alignment-tensor fields

    Energy Technology Data Exchange (ETDEWEB)

    Armas-Pérez, Julio C.; Londono-Hurtado, Alejandro [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Guzmán, Orlando [Departamento de Física, Universidad Autónoma Metropolitana, Iztapalapa, DF 09340, México (Mexico); Hernández-Ortiz, Juan P. [Departamento de Materiales y Minerales, Universidad Nacional de Colombia, Sede Medellín, Medellín (Colombia); Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Pablo, Juan J. de, E-mail: depablo@uchicago.edu [Institute for Molecular Engineering, University of Chicago, Chicago, Illinois 60637 (United States); Materials Science Division, Argonne National Laboratory, Argonne, Illinois 60439 (United States)

    2015-07-28

    A theoretically informed coarse-grained Monte Carlo method is proposed for studying liquid crystals. The free energy functional of the system is described in the framework of the Landau-de Gennes formalism. The alignment field and its gradients are approximated by finite differences, and the free energy is minimized through a stochastic sampling technique. The validity of the proposed method is established by comparing the results of the proposed approach to those of traditional free energy minimization techniques. Its usefulness is illustrated in the context of three systems, namely, a nematic liquid crystal confined in a slit channel, a nematic liquid crystal droplet, and a chiral liquid crystal in the bulk. It is found that for systems that exhibit multiple metastable morphologies, the proposed Monte Carlo method is generally able to identify lower free energy states that are often missed by traditional approaches. Importantly, the Monte Carlo method identifies such states from random initial configurations, thereby obviating the need for educated initial guesses that can be difficult to formulate.

  10. Microcystin-Bound Protein Patterns in Different Cultures of Microcystis aeruginosa and Field Samples.

    Science.gov (United States)

    Wei, Nian; Hu, Lili; Song, Lirong; Gan, Nanqin

    2016-10-12

    Microcystin (MC) exists in Microcystis cells in two different forms, free and protein-bound. We examined the dynamic change in extracellular free MCs, intracellular free MCs and protein-bound MCs in both batch cultures and semi-continuous cultures, using high performance liquid chromatography and Western blotting. The results showed that the free MC per cell remained constant, while the quantity of protein-bound MC increased with the growth of Microcystis cells in both kinds of culture. Significant changes in the dominant MC-bound proteins occurred in the late exponential growth phase of batch cultures, while the dominant MC-bound proteins in semi-continuous cultures remained the same. In field samples collected in different months in Lake Taihu, the dominant MC-bound proteins were shown to be similar, but the amount of protein-bound MC varied and correlated with the intracellular MC content. We identified MC-bound proteins by two-dimensional electrophoresis immunoblots and mass spectrometry. The 60 kDa chaperonin GroEL was a prominent MC-bound protein. Three essential glycolytic enzymes and the ATP synthase alpha subunit were also major targets of MC binding, which might contribute to sustained growth in semi-continuous culture. Our results indicate that protein-bound MC may be important for sustaining growth and adaptation of Microcystis sp.

  11. Pre-Mission Input Requirements to Enable Successful Sample Collection by A Remote Field/EVA Team

    Science.gov (United States)

    Cohen, B. A.; Lim, D. S. S.; Young, K. E.; Brunner, A.; Elphic, R. E.; Horne, A.; Kerrigan, M. C.; Osinski, G. R.; Skok, J. R.; Squyres, S. W.; hide

    2016-01-01

    The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team, part of the Solar System Exploration Virtual Institute (SSERVI), is a field-based research program aimed at generating strategic knowledge in preparation for human and robotic exploration of the Moon, near-Earth asteroids, Phobos and Deimos, and beyond. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, is sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. We examined the in situ sample characterization and real-time decision-making process of the astronauts, with the guiding hypothesis that pre-mission training including detailed background information on the analytical fate of a sample would better enable future astronauts to select samples that best meet science requirements. We conducted three tests of this hypothesis over several days in the field. Our investigation was designed to document processes, tools and procedures for crew sampling of planetary targets. This was not meant to be a blind, controlled test of crew efficacy, but rather an effort to explicitly recognize the relevant variables that enter into sampling protocol and to develop recommendations for crew and backroom training in future endeavors.

  12. Reconstruction of enhancer-target networks in 935 samples of human primary cells, tissues and cell lines.

    Science.gov (United States)

    Cao, Qin; Anyansi, Christine; Hu, Xihao; Xu, Liangliang; Xiong, Lei; Tang, Wenshu; Mok, Myth T S; Cheng, Chao; Fan, Xiaodan; Gerstein, Mark; Cheng, Alfred S L; Yip, Kevin Y

    2017-10-01

    We propose a new method for determining the target genes of transcriptional enhancers in specific cells and tissues. It combines global trends across many samples and sample-specific information, and considers the joint effect of multiple enhancers. Our method outperforms existing methods when predicting the target genes of enhancers in unseen samples, as evaluated by independent experimental data. Requiring few types of input data, we are able to apply our method to reconstruct the enhancer-target networks in 935 samples of human primary cells, tissues and cell lines, which constitute by far the largest set of enhancer-target networks. The similarity of these networks from different samples closely follows their cell and tissue lineages. We discover three major co-regulation modes of enhancers and find defense-related genes often simultaneously regulated by multiple enhancers bound by different transcription factors. We also identify differentially methylated enhancers in hepatocellular carcinoma (HCC) and experimentally confirm their altered regulation of HCC-related genes.

  13. Data collection method for mobile sensor networks based on the theory of thermal fields.

    Science.gov (United States)

    Macuha, Martin; Tariq, Muhammad; Sato, Takuro

    2011-01-01

    Many sensor applications are aimed at mobile objects, where conventional routing approaches to data delivery might fail. Such applications include habitat monitoring, human probes, and vehicular sensing systems. This paper targets such applications and proposes a lightweight, proactive, distributed data collection scheme for Mobile Sensor Networks (MSNs) based on the theory of thermal fields. By a proper mapping, we create a distribution function that takes the characteristics of a sensor node into account. We show the functionality of the proposed forwarding method when adapted to the energy of the sensor node, and we propose an enhancement to maximize the lifetime of the sensor nodes. We thoroughly evaluate the proposed solution and discuss the tradeoffs.
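    A minimal sketch of the thermal-field idea (a hypothetical mapping, not the paper's exact scheme: the blend weight and the specific temperature function are illustrative): each node is assigned a "temperature" that rises with proximity to the sink and falls with battery depletion, and packets are forwarded greedily to the hottest neighbor.

```python
def node_temperature(dist_to_sink, residual_energy, w_energy=0.3):
    """'Temperature' of a node: hotter when closer to the sink, cooled
    down when its battery is low (illustrative mapping; residual_energy
    is normalized to [0, 1])."""
    return (1.0 - w_energy) / (1.0 + dist_to_sink) + w_energy * residual_energy

def next_hop(neighbors):
    """Forward the packet to the neighbor with the highest temperature.
    neighbors: list of (node_id, dist_to_sink, residual_energy) tuples."""
    return max(neighbors, key=lambda n: node_temperature(n[1], n[2]))[0]
```

    The energy term realizes the lifetime-maximizing enhancement qualitatively: a nearly depleted node close to the sink can be "colder" than a slightly more distant node with a full battery, so traffic is steered away from it.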

  14. High-conductance states in a mean-field cortical network model

    CERN Document Server

    Lerchner, A; Hertz, J

    2004-01-01

    Measured responses from visual cortical neurons show that spike times tend to be correlated rather than exactly Poisson distributed. Fano factors vary and are usually greater than 1 due to the tendency of spikes being clustered into bursts. We show that this behavior emerges naturally in a balanced cortical network model with random connectivity and conductance-based synapses. We employ mean field theory with correctly colored noise to describe temporal correlations in the neuronal activity. Our results illuminate the connection between two independent experimental findings: high conductance states of cortical neurons in their natural environment, and variable non-Poissonian spike statistics with Fano factors greater than 1.

  15. Finite-temperature field theory and quantum noise in an electrical network

    Energy Technology Data Exchange (ETDEWEB)

    Garavaglia, T.

    1988-10-15

    Finite-temperature (0 ≤ T < ∞) field (FTF) theory with an effective spectral Lagrangian density formulation is used to study quantum noise in an electrical network. Solutions for the finite second moments that satisfy the uncertainty-principle bound are given for a dissipative quantum oscillator. A regularization method, based on the analysis of a semi-infinite low-pass filter, is employed, and it leads to results which differ from those of the Drude model. To illustrate the FTF method, an example is given using an ideal finite-temperature coherent state.

  16. Centralized Data-Sampling Approach for Global O(t^{-α}) Synchronization of Fractional-Order Neural Networks with Time Delays

    Directory of Open Access Journals (Sweden)

    Jin-E Zhang

    2017-01-01

    In this paper, the global O(t^{-α}) synchronization problem is investigated for a class of fractional-order neural networks with time delays. Taking into account both control performance and energy saving, we make a first attempt to introduce a centralized data-sampling approach to characterize the O(t^{-α}) synchronization design strategy. A sufficient criterion is given under which the drive-response coupled neural networks can achieve global O(t^{-α}) synchronization. It is worth noting that, by using the centralized data-sampling principle, a fractional-order Lyapunov-like technique, and the fractional-order Leibniz rule, the designed controller performs very well. Two numerical examples are presented to illustrate the efficiency of the proposed centralized data-sampling scheme.

  17. Experiments with central-limit properties of spatial samples from locally covariant random fields

    Science.gov (United States)

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel of the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
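    For a one-dimensional sequence of spatially ordered samples, a tau-style estimator can be sketched as the classical sample-mean variance augmented with empirical autocovariances up to lag tau (an interpretation using Bartlett-type weights; the paper's spatial-kernel construction is analogous but the exact weighting is an assumption here):

```python
import numpy as np

def classical_se(x):
    """Classical (independence-assuming) standard error of the mean."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.var(x, ddof=1) / len(x))

def tau_se(x, tau):
    """Kernel standard error of the mean for locally dependent samples:
    the variance estimate adds empirical autocovariances up to lag tau
    with Bartlett-type weights (1 - k/n)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    var = xc @ xc / n  # lag-0 term (biased sample variance)
    for k in range(1, tau + 1):
        cov_k = xc[:-k] @ xc[k:] / n
        var += 2.0 * (1.0 - k / n) * cov_k
    return np.sqrt(max(var, 0.0) / n)
```

    For positively dependent data (e.g. a locally smoothed noise series) the tau estimator yields a larger, more honest standard error than the classical one, which is exactly the inconsistency the abstract describes.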

  18. Single vessel air injection estimates of xylem resistance to cavitation are affected by vessel network characteristics and sample length.

    Science.gov (United States)

    Venturas, Martin D; Rodriguez-Zaccaro, F Daniela; Percolla, Marta I; Crous, Casparus J; Jacobsen, Anna L; Pratt, R Brandon

    2016-10-01

    Xylem resistance to cavitation is an important trait that is related to the ecology and survival of plant species. Vessel network characteristics, such as vessel length and connectivity, could affect the spread of emboli from gas-filled vessels to functional ones, triggering their cavitation. We hypothesized that the cavitation resistance of xylem vessels is randomly distributed throughout the vessel network. We predicted that single vessel air injection (SVAI) vulnerability curves (VCs) would thus be affected by sample length. Longer stem samples were predicted to appear more resistant than shorter samples due to the sampled path including greater numbers of vessels. We evaluated the vessel network characteristics of grapevine (Vitis vinifera L.), English oak (Quercus robur L.) and black cottonwood (Populus trichocarpa Torr. & A. Gray), and constructed SVAI VCs for 5- and 20-cm-long segments. We also constructed VCs with a standard centrifuge method and used computer modelling to estimate the curve shift expected for pathways composed of different numbers of vessels. For all three species, the SVAI VCs for 5 cm segments rose exponentially and were more vulnerable than the 20 cm segments. The 5 cm curve shapes were exponential and were consistent with centrifuge VCs. Modelling data supported the observed SVAI VC shifts, which were related to path length and vessel network characteristics. These results suggest that exponential VCs represent the most realistic curve shape for individual vessel resistance distributions for these species. At the network level, the presence of some vessels with a higher resistance to cavitation may help avoid emboli spread during tissue dehydration. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Circulating persistent current and induced magnetic field in a fractal network

    Energy Technology Data Exchange (ETDEWEB)

    Saha, Srilekha [Condensed Matter Physics Division, Saha Institute of Nuclear Physics, Sector-I, Block-AF, Bidhannagar, Kolkata 700 064 (India); Maiti, Santanu K., E-mail: santanu.maiti@isical.ac.in [Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 Barrackpore Trunk Road, Kolkata 700 108 (India); Karmakar, S.N. [Condensed Matter Physics Division, Saha Institute of Nuclear Physics, Sector-I, Block-AF, Bidhannagar, Kolkata 700 064 (India)

    2016-04-29

We present the overall conductance as well as the circulating currents in individual loops of a Sierpinski gasket (SPG) as a bias voltage is applied via side-attached electrodes. Since the SPG is a self-similar structure, its manifestation in loop currents and magnetic fields is examined across successive generations of the fractal, and it is observed that, for a given electrode configuration, the physical quantities exhibit a certain regularity from one generation to the next. Another notable feature is that introducing anisotropy in the hopping increases the magnitude of the overall transport current. These features are the subject of this article. - Highlights: • Voltage driven circular current is analyzed in a fractal network. • Current induced magnetic field is strong enough to flip a spin. • Anisotropy in hopping enhances overall transport current.

  20. Decomposition approach to the stability of recurrent neural networks with asynchronous time delays in quaternion field.

    Science.gov (United States)

    Zhang, Dandan; Kou, Kit Ian; Liu, Yang; Cao, Jinde

    2017-10-01

In this paper, the global exponential stability of quaternion-valued recurrent neural networks (QVNNs) with asynchronous time delays is investigated. Owing to the non-commutativity of quaternion multiplication that follows from Hamilton's rules (ij = -ji = k, jk = -kj = i, ki = -ik = j, i² = j² = k² = ijk = -1), the QVNN is decomposed into four real-valued systems, which are studied separately. The exponential convergence is proved directly, together with the existence and uniqueness of the equilibrium point of the considered systems. Combining the generalized ∞-norm with the Cauchy convergence property in the quaternion field, some sufficient conditions guaranteeing stability are established without using any Lyapunov-Krasovskii functional or linear matrix inequality. Finally, a numerical example is given to demonstrate the effectiveness of the results. Copyright © 2017 Elsevier Ltd. All rights reserved.
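The Hamilton rules above can be verified in a few lines; this is only an illustrative sketch of quaternion multiplication on (w, x, y, z) tuples, not the decomposition used in the paper:

```python
def hamilton(p, q):
    # Hamilton product of quaternions p = (w, x, y, z), q = (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
```

Here ij = k while ji = -k, which is exactly the non-commutativity that forces the four real-valued subsystems to be handled jointly rather than as independent real networks.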

  1. Single-Walled Carbon Nanotube Network Field Effect Transistor as a Humidity Sensor

    Directory of Open Access Journals (Sweden)

    Prasantha R. Mudimela

    2012-01-01

Full Text Available Single-walled carbon nanotube network field effect transistors were fabricated and studied as humidity sensors. Sensing responses were altered by changing the gate voltage. In the open-channel state (negative gate voltage), a humidity pulse resulted in a decrease of the source-drain current, and, vice versa, an increase in the source-drain current was observed at positive gate voltage. This effect was explained by the electron-donating nature of water molecules. The operation speed and signal intensity were found to depend on the gate voltage polarity. The positive or negative change in current with a humidity pulse at zero gate voltage was found to depend on the previous state of the gate electrode (positive or negative voltage, respectively). These characteristics were explained by charge traps in the gate dielectric altering the effective gate voltage, which influenced the operation of the field effect transistor.

  2. Utilizing neural networks in magnetic media modeling and field computation: A review.

    Science.gov (United States)

    Adly, Amr A; Abd-El-Hafiz, Salwa K

    2014-11-01

    Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper.

  3. Utilizing neural networks in magnetic media modeling and field computation: A review

    Directory of Open Access Journals (Sweden)

    Amr A. Adly

    2014-11-01

Full Text Available Magnetic materials are considered as crucial components for a wide range of products and devices. Usually, complexity of such materials is defined by their permeability classification and coupling extent to non-magnetic properties. Hence, development of models that could accurately simulate the complex nature of these materials becomes crucial to the multi-dimensional field-media interactions and computations. In the past few decades, artificial neural networks (ANNs) have been utilized in many applications to perform miscellaneous tasks such as identification, approximation, optimization, classification and forecasting. The purpose of this review article is to give an account of the utilization of ANNs in modeling as well as field computation involving complex magnetic materials. Mostly used ANN types in magnetics, advantages of this usage, detailed implementation methodologies as well as numerical examples are given in the paper.

  4. Field Geologic Observation and Sample Collection Strategies for Planetary Surface Exploration: Insights from the 2010 Desert RATS Geologist Crewmembers

    Science.gov (United States)

    Hurtado, Jose M., Jr.; Young, Kelsey; Bleacher, Jacob E.; Garry, W. Brent; Rice, James W., Jr.

    2012-01-01

Observation is the primary role of all field geologists, and geologic observations put into an evolving conceptual context will be the most important data stream that will be relayed to Earth during a planetary exploration mission. Sample collection is also an important planetary field activity, and its success is closely tied to the quality of contextual observations. To test protocols for doing effective planetary geologic fieldwork, the Desert RATS (Research and Technology Studies) project deployed two prototype rovers for two weeks of simulated exploratory traverses in the San Francisco volcanic field of northern Arizona. The authors of this paper represent the geologist crew members who participated in the 2010 field test. We document the procedures adopted for Desert RATS 2010 and report on our experiences regarding these protocols. Careful consideration must be made of various issues that impact the interplay between field geologic observations and sample collection, including time management; strategies related to duplication of samples and observations; logistical constraints on the volume and mass of samples and the volume/transfer of data collected; and paradigms for evaluation of mission success. We find that the 2010 field protocols brought to light important aspects of each of these issues, and we recommend best practices and modifications to training and operational protocols to address them. Underlying our recommendations is the recognition that the capacity of the crew to flexibly execute their activities is paramount. Careful design of mission parameters, especially field geologic protocols, is critical for enabling the crews to successfully meet their science objectives.

  5. Application of Artificial Neural Networks to the Analysis of NORM Samples; Aplicación de las Redes Neuronales al Análisis de Muestras NORM

    Energy Technology Data Exchange (ETDEWEB)

    Moser, H.; Peyrés, V.; Mejuto, M.; García-Toraño, E.

    2015-07-01

This work describes the application of artificial neural networks (ANNs) to analyze the raw data of gamma-ray spectra of NORM samples and decide whether the activity content of a certain nuclide is above or below the exemption limit of 1 Bq/g. The main advantage of using an ANN for this purpose is that the user needs no specialized knowledge in the field of gamma-ray spectrometry. In total, 635 spectra with varying activity concentrations, covering seven different materials at three densities each, were generated by Monte Carlo simulation to provide training material for the ANN. These spectra were created using the simulation code PENELOPE. Validation was carried out with a number of NORM samples previously characterized by conventional gamma-ray spectrometry with peak fitting.

  6. THE IMPORTANCE OF THE MAGNETIC FIELD FROM AN SMA-CSO-COMBINED SAMPLE OF STAR-FORMING REGIONS

    Energy Technology Data Exchange (ETDEWEB)

    Koch, Patrick M.; Tang, Ya-Wen; Ho, Paul T. P.; Chen, Huei-Ru Vivien; Liu, Hau-Yu Baobab; Yen, Hsi-Wei; Lai, Shih-Ping [Academia Sinica, Institute of Astronomy and Astrophysics, Taipei, Taiwan (China); Zhang, Qizhou; Chen, How-Huan; Ching, Tao-Chung [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Girart, Josep M. [Institut de Ciències de l' Espai, CSIC-IEEC, Campus UAB, Facultat de Ciències, C5p 2, 08193 Bellaterra, Catalonia (Spain); Frau, Pau [Observatorio Astronómico Nacional, Alfonso XII, 3 E-28014 Madrid (Spain); Li, Hua-Bai [Department of Physics, The Chinese University of Hong Kong (Hong Kong); Li, Zhi-Yun [Department of Astronomy, University of Virginia, P.O. Box 400325, Charlottesville, VA 22904 (United States); Padovani, Marco [Laboratoire Univers et Particules de Montpellier, UMR 5299 du CNRS, Université de Montpellier II, place E. Bataillon, cc072, F-34095 Montpellier (France); Qiu, Keping [School of Astronomy and Space Science, Nanjing University, 22 Hankou Road, Nanjiing 210093 (China); Rao, Ramprasad, E-mail: pmkoch@asiaa.sinica.edu.tw [Academia Sinica, Institute of Astronomy and Astrophysics, 645 N. Aohoku Place, Hilo, HI 96720 (United States)

    2014-12-20

Submillimeter dust polarization measurements of a sample of 50 star-forming regions, observed with the Submillimeter Array (SMA) and the Caltech Submillimeter Observatory (CSO) covering parsec-scale clouds to milliparsec-scale cores, are analyzed in order to quantify the magnetic field importance. The magnetic field misalignment δ—the local angle between magnetic field and dust emission gradient—is found to be a prime observable, revealing distinct distributions for sources where the magnetic field is preferentially aligned with or perpendicular to the source minor axis. Source-averaged misalignment angles ⟨|δ|⟩ fall into systematically different ranges, reflecting the different source-magnetic field configurations. Possible bimodal ⟨|δ|⟩ distributions are found for the separate SMA and CSO samples. Combining both samples broadens the distribution with a wide maximum peak at small ⟨|δ|⟩ values. Assuming the 50 sources to be representative, the prevailing source-magnetic field configuration is one that statistically prefers small magnetic field misalignments |δ|. When interpreting |δ| together with a magnetohydrodynamics force equation, as developed in the framework of the polarization-intensity gradient method, a sample-based log-linear scaling fits the magnetic field tension-to-gravity force ratio Σ_B versus ⟨|δ|⟩ with ⟨Σ_B⟩ = 0.116 · exp(0.047 · ⟨|δ|⟩) ± 0.20 (mean error), providing a way to estimate the relative importance of the magnetic field, only based on measurable field misalignments |δ|. The force ratio Σ_B discriminates systems that are collapsible on average (⟨Σ_B⟩ < 1) from other molecular clouds where the magnetic field still provides enough resistance against gravitational collapse (⟨Σ_B⟩ > 1). The sample-wide trend shows a transition around ⟨|δ|⟩ ≈ 45°. Defining an effective gravitational force ∼1 − ⟨Σ_B⟩, the average magnetic-field-reduced star formation efficiency is at least a
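The quoted log-linear fit lends itself to a one-line estimator; a minimal sketch (coefficients taken from the abstract, function and variable names hypothetical):

```python
import math

def force_ratio(mean_abs_delta_deg):
    # sample-based fit from the abstract:
    # <Sigma_B> = 0.116 * exp(0.047 * <|delta|>),
    # with the source-averaged misalignment <|delta|> in degrees
    return 0.116 * math.exp(0.047 * mean_abs_delta_deg)
```

Consistent with the reported transition, the ratio crosses unity just above ⟨|δ|⟩ = 45°, separating on-average collapsible systems from magnetically supported ones.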

  7. Sampling, storage, and analysis of C2-C7 non-methane hydrocarbons from the US National Oceanic and Atmospheric Administration Cooperative Air Sampling Network glass flasks.

    Science.gov (United States)

    Pollmann, Jan; Helmig, Detlev; Hueber, Jacques; Plass-Dülmer, Christian; Tans, Pieter

    2008-04-25

An analytical technique was developed to analyze light non-methane hydrocarbons (NMHC), including ethane, propane, iso-butane, n-butane, iso-pentane, n-pentane, n-hexane, isoprene, benzene and toluene, from whole air samples collected in 2.5 l glass flasks used by the National Oceanic and Atmospheric Administration, Earth System Research Laboratory, Global Monitoring Division (NOAA ESRL GMD, Boulder, CO, USA) Cooperative Air Sampling Network. This method relies on utilizing the remaining air in these flasks (which is at below-ambient pressure at this stage) after the completion of all routine greenhouse gas measurements from these samples. NMHC in sample aliquots extracted from the flasks were preconcentrated with a custom-made, cryogen-free inlet system and analyzed by gas chromatography (GC) with flame ionization detection (FID). C2-C7 NMHC, depending on their ambient air mixing ratios, could be measured with generally low accuracy and repeatability errors and with negligible drift during storage (<10 pptv yr⁻¹) of samples in these glass flasks. Results from flask NMHC analyses were compared to in-situ NMHC measurements at the Global Atmospheric Watch station in Hohenpeissenberg, Germany. This 9-month side-by-side comparison showed good agreement between the two methods. More than 94% of all data comparisons for C2-C5 alkanes, isoprene, benzene and toluene fell within the combined accuracy and precision objectives of the World Meteorological Organization Global Atmosphere Watch (WMO-GAW) for NMHC measurements.

  8. Decoding of Human Movements Based on Deep Brain Local Field Potentials Using Ensemble Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohammad S. Islam

    2017-01-01

Full Text Available Decoding neural activities related to voluntary and involuntary movements is fundamental to understanding human brain motor circuits and neuromotor disorders and can lead to the development of neuromotor prosthetic devices for neurorehabilitation. This study explores using recorded deep brain local field potentials (LFPs) for robust movement decoding of Parkinson's disease (PD) and Dystonia patients. The LFP data from voluntary movement activities such as left and right hand index finger clicking were recorded from patients who underwent surgeries for implantation of deep brain stimulation electrodes. Movement-related LFP signal features were extracted by computing instantaneous power related to motor response in different neural frequency bands. An innovative neural network ensemble classifier has been proposed and developed for accurate prediction of finger movement and its forthcoming laterality. The ensemble classifier contains three base neural network classifiers, namely, feedforward, radial basis, and probabilistic neural networks. The majority voting rule is used to fuse the decisions of the three base classifiers to generate the final decision of the ensemble classifier. The overall decoding performance reaches a level of agreement (kappa value) at about 0.729±0.16 for decoding movement from the resting state and about 0.671±0.14 for decoding left and right visually cued movements.
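The majority-voting fusion step is simple to state in code; a generic sketch (not the authors' implementation) for fusing three base-classifier labels:

```python
from collections import Counter

def majority_vote(predictions):
    # fuse base-classifier decisions into the ensemble decision;
    # with three voters, any 2-1 split has a strict winner
    return Counter(predictions).most_common(1)[0][0]
```

With an odd number of voters and binary labels (e.g. left vs. right laterality), no tie-breaking rule is needed.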

  9. Hybrid Access Femtocells in Overlaid MIMO Cellular Networks with Transmit Selection under Poisson Field Interference

    KAUST Repository

    Abdel Nabi, Amr A

    2017-09-21

This paper analyzes the performance of hybrid control-access schemes for small cells (such as femtocells) in the context of two-tier overlaid cellular networks. The proposed hybrid access schemes allow for sharing the same downlink resources between the small-cell network and the original macrocell network, and their modes of operation are characterized considering post-processed signal-to-interference-plus-noise ratios (SINRs) or pre-processed interference-aware operation. The work presents a detailed treatment of the achieved performance of a desired user that benefits from a MIMO array configuration through the use of transmit antenna selection (TAS) and maximal ratio combining (MRC) in the presence of Poisson field interference processes on spatial links. Furthermore, based on interference awareness at the desired user, two TAS approaches are treated: signal-to-noise ratio (SNR)-based selection and SINR-based selection. The analysis is generalized to address the cases of highly correlated and uncorrelated aggregated interference on different transmit channels. In addition, the effect of delayed TAS due to imperfect feedback and the impact of arbitrary TAS processing are investigated. The analytical results are validated by simulations to clarify some of the main outcomes herein.

  10. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    Science.gov (United States)

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
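Of the approximations named above, the point-source model is the simplest: the extracellular potential of a current point source in an infinite homogeneous medium of conductivity σ is V = I / (4πσr). A minimal sketch (the σ value is illustrative, not from the paper):

```python
import math

def point_source_potential(i_amp, r_m, sigma=0.3):
    # extracellular potential (V) at distance r_m (m) from a point current
    # source i_amp (A) in a homogeneous medium of conductivity sigma (S/m)
    return i_amp / (4.0 * math.pi * sigma * r_m)
```

The line-source and filtered variants refine this by integrating along compartments and by frequency-dependent attenuation, but the 1/r fall-off is the common core.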

  11. DeepCotton: in-field cotton segmentation using deep fully convolutional network

    Science.gov (United States)

    Li, Yanan; Cao, Zhiguo; Xiao, Yang; Cremers, Armin B.

    2017-09-01

Automatic ground-based in-field cotton (IFC) segmentation is a challenging task in precision agriculture that has not been well addressed. Nearly all existing methods rely on hand-crafted features, whose limited discriminative power results in unsatisfactory performance. To address this, a coarse-to-fine cotton segmentation method termed "DeepCotton" is proposed. It contains two modules: a fully convolutional network (FCN) stream and an interference-region removal stream. First, the FCN is employed to predict an initial coarse map in an end-to-end manner. The convolutional layers involved in the FCN guarantee powerful feature description capability; at the same time, the regression ability of the neural network assures segmentation accuracy. To our knowledge, we are the first to introduce deep learning to IFC segmentation. Second, our proposed "UP" algorithm, composed of unary brightness transformation and pairwise region comparison, is used to obtain an interference map, which is applied to refine the coarse map. Experiments on the constructed IFC dataset demonstrate that our method outperforms other state-of-the-art approaches, whether in different common scenarios or with single/multiple plants. More remarkably, the "UP" algorithm greatly improves the quality of the coarse result, with average gains of 2.6% and 2.4% in accuracy and 8.1% and 5.5% in intersection over union for common scenarios and multiple plants, respectively.

  12. Porosity Estimation By Artificial Neural Networks Inversion . Application to Algerian South Field

    Science.gov (United States)

    Eladj, Said; Aliouane, Leila; Ouadfeul, Sid-Ali

    2017-04-01

One of the main current challenges for geophysicists is the discovery and study of stratigraphic traps, which is a difficult task and requires a very fine analysis of seismic data. Seismic data inversion allows obtaining lithological and stratigraphic information for reservoir characterization. However, when solving the inverse problem we encounter difficult issues such as non-existence and non-uniqueness of the solution, as well as instability of the processing algorithm. Therefore, uncertainties in the data and the non-linearity of the relationship between data and parameters must be taken seriously. In this case, artificial intelligence techniques such as Artificial Neural Networks (ANNs) are used to resolve this ambiguity; this can be done by integrating different physical property data, which requires supervised learning methods. In this work, we invert a 3D acoustic impedance seismic cube using the colored inversion method; then, introducing the acoustic impedance volume resulting from the first step as an input to a model-based inversion allows us to calculate the porosity volume using a Multilayer Perceptron Artificial Neural Network. Application to an Algerian South hydrocarbon field clearly demonstrates the power of the proposed processing technique to predict porosity from seismic data; the results obtained can be used for reserve estimation, permeability prediction, recovery factor and reservoir monitoring. Keywords: Artificial Neural Networks, inversion, non-uniqueness, nonlinear, 3D porosity volume, reservoir characterization.

  13. 3-D components of a biological neural network visualized in computer generated imagery. I - Macular receptive field organization

    Science.gov (United States)

    Ross, Muriel D.; Cutler, Lynn; Meyer, Glenn; Lam, Tony; Vaziri, Parshaw

    1990-01-01

    Computer-assisted, 3-dimensional reconstructions of macular receptive fields and of their linkages into a neural network have revealed new information about macular functional organization. Both type I and type II hair cells are included in the receptive fields. The fields are rounded, oblong, or elongated, but gradations between categories are common. Cell polarizations are divergent. Morphologically, each calyx of oblong and elongated fields appears to be an information processing site. Intrinsic modulation of information processing is extensive and varies with the kind of field. Each reconstructed field differs in detail from every other, suggesting that an element of randomness is introduced developmentally and contributes to endorgan adaptability.

  14. PERSONAL NETWORK SAMPLING, OUTDEGREE ANALYSIS AND MULTILEVEL ANALYSIS - INTRODUCING THE NETWORK CONCEPT IN STUDIES OF HIDDEN POPULATIONS

    NARCIS (Netherlands)

    SPREEN, M; ZWAAGSTRA, R

    1994-01-01

    Populations, such as heroin and cocaine users, the homeless and the like (hidden populations), are among the most difficult populations to which to apply classic random sampling procedures. A frequently used data collection method for these hidden populations is the snowball procedure. The

  15. Partitioning of alcohol ethoxylates and polyethylene glycols in the marine environment: Field samplings vs laboratory experiments

    Energy Technology Data Exchange (ETDEWEB)

    Traverso-Soto, Juan M. [Departamento de Química Física, Facultad de Ciencias del Mar y Ambientales, Campus de Excelencia Internacional del Mar (CEI-MAR), Universidad de Cádiz, Campus Río San Pedro s/n, Puerto Real, Cádiz 11510 (Spain); Brownawell, Bruce J. [School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook, NY 11794-5000 (United States); González-Mazo, Eduardo [Departamento de Química Física, Facultad de Ciencias del Mar y Ambientales, Campus de Excelencia Internacional del Mar (CEI-MAR), Universidad de Cádiz, Campus Río San Pedro s/n, Puerto Real, Cádiz 11510 (Spain); Lara-Martín, Pablo A., E-mail: pablo.lara@uca.es [Departamento de Química Física, Facultad de Ciencias del Mar y Ambientales, Campus de Excelencia Internacional del Mar (CEI-MAR), Universidad de Cádiz, Campus Río San Pedro s/n, Puerto Real, Cádiz 11510 (Spain)

    2014-08-15

Nowadays, alcohol ethoxylates (AEOs) constitute the most important group of non-ionic surfactants, used in a wide range of applications such as household cleaners and detergents. Significant amounts of these compounds and their degradation products (polyethylene glycols, PEGs, which are also used for many other applications) reach aquatic environments, and are eliminated from the water column by degradation and sorption processes. This work deals with the environmental distribution of AEOs and PEGs in the Long Island Sound Estuary, a setting impacted by sewage discharges from New York City (NYC). The distribution of target compounds in seawater was influenced by tides, consistent with salinity differences, and concentrations in suspended solid samples ranged from 1.5 to 20.5 μg/g. The more hydrophobic AEOs were mostly attached to the particulate matter whereas the more polar PEGs were predominant in the dissolved form. Later, the sorption of these chemicals was characterized in the laboratory. Experimental and environmental sorption coefficients for AEOs and PEGs showed average values from 3607 to 164,994 L/kg and from 74 to 32,862 L/kg, respectively. The sorption data were fitted to a Freundlich isotherm model with parameters n and log K_F between 0.8–1.2 and 1.46–4.39 L/kg, respectively. AEO and PEG sorptions on marine sediment were also found to be mostly not affected by changes in salinity. - Highlights: • AEO and PEG levels in estuaries are influenced by tides and suspended solids. • Sediment–water partition coefficients in the lab and in the field are comparable. • Sorption is depending on both hydrophilic and hydrophobic interactions. • Sorption data fits Freundlich isotherms, showing K_F values from 29 to 24,892 L/kg. • Sorption is very weakly influenced by salinity changes.
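The Freundlich fit reported above is a straight line in log-log space, log q = log K_F + n log C, so the parameters can be recovered by ordinary least squares on the transformed data; a minimal sketch with made-up inputs:

```python
import math

def fit_freundlich(conc, sorbed):
    # least-squares line through (log10 C, log10 q); the slope is the
    # Freundlich exponent n and the intercept is log10 K_F
    x = [math.log10(c) for c in conc]
    y = [math.log10(q) for q in sorbed]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    n = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    k_f = 10 ** (my - n * mx)
    return k_f, n
```

Feeding in synthetic data generated with known K_F and n recovers both parameters, which is a useful sanity check before fitting real sorption measurements.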

  16. Does respondent driven sampling alter the social network composition and health-seeking behaviors of illicit drug users followed prospectively?

    Directory of Open Access Journals (Sweden)

    Abby E Rudolph

    2011-05-01

Full Text Available Respondent driven sampling (RDS) was originally developed to sample and provide peer education to injection drug users at risk for HIV. Based on the premise that drug users' social networks were maintained through sharing rituals, this peer-driven approach to disseminate educational information and reduce risk behaviors capitalizes and expands upon the norms that sustain these relationships. Compared with traditional outreach interventions, peer-driven interventions produce greater reductions in HIV risk behaviors and adoption of safer behaviors over time; however, control and intervention groups are not similarly recruited. As peer-recruitment may alter risk networks and individual risk behaviors over time, such comparison studies are unable to isolate the effect of a peer-delivered intervention. This analysis examines whether RDS recruitment (without an intervention) is associated with changes in health-seeking behaviors and network composition over 6 months. New York City drug users (N = 618) were recruited using targeted street outreach (TSO) and RDS (2006-2009). 329 non-injectors (RDS = 237; TSO = 92) completed baseline and 6-month surveys ascertaining demographic, drug use, and network characteristics. Chi-square and t-tests compared RDS- and TSO-recruited participants on changes in HIV testing and drug treatment utilization and in the proportion of drug using, sex, incarcerated and social support networks over the follow-up period. The sample was 66% male, 24% Hispanic, 69% black, 62% homeless, and the median age was 35. At baseline, the median network size was 3, 86% used crack, 70% used cocaine, 40% used heroin, and in the past 6 months 72% were tested for HIV and 46% were enrolled in drug treatment. There were no significant differences by recruitment strategy with respect to changes in health-seeking behaviors or network composition over 6 months. These findings suggest no association between RDS recruitment and changes in

  17. Electric field mill network products to improve detection of the lightning hazard

    Science.gov (United States)

    Maier, Launa M.

    1987-01-01

    An electric field mill network has been used at Kennedy Space Center for over 10 years as part of the thunderstorm detection system. Several algorithms are currently available to improve the informational output of the electric field mill data. The charge distributions of roughly 50 percent of all lightning can be modeled as if they reduced the charged cloud by a point charge or a point dipole. Using these models, the spatial differences in the lightning induced electric field changes, and a least squares algorithm to obtain an optimum solution, the three-dimensional locations of the lightning charge centers can be located. During the lifetime of a thunderstorm, dynamically induced charging, modeled as a current source, can be located spatially with measurements of Maxwell current density. The electric field mills can be used to calculate the Maxwell current density at times when it is equal to the displacement current density. These improvements will produce more accurate assessments of the potential electrical activity, identify active cells, and forecast thunderstorm termination.
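The point-charge retrieval described above reduces to minimizing a least-squares misfit between modeled and observed field changes at the mills; a minimal sketch of the forward model and objective (geometry and values hypothetical, grid or gradient search omitted):

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dE(q, x0, y0, z0, mx, my):
    # vertical field change at a ground-level mill (mx, my) caused by
    # neutralizing a point charge q at (x0, y0, z0); the factor 2 accounts
    # for the image charge in the conducting ground plane
    r2 = (mx - x0) ** 2 + (my - y0) ** 2 + z0 ** 2
    return 2.0 * q * z0 / (4.0 * math.pi * EPS0 * r2 ** 1.5)

def misfit(params, mills, observed):
    # least-squares objective minimized to locate the lightning charge center
    q, x0, y0, z0 = params
    return sum((dE(q, x0, y0, z0, mx, my) - e) ** 2
               for (mx, my), e in zip(mills, observed))
```

Minimizing `misfit` over (q, x0, y0, z0) with any optimizer yields the three-dimensional charge-center location; the dipole case simply adds a second charge to the forward model.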

  19. Network-based support vector machine for classification of microarray samples.

    Science.gov (United States)

    Zhu, Yanni; Shen, Xiaotong; Pan, Wei

    2009-01-30

The importance of network-based approach to identifying biological markers for diagnostic classification and prognostic assessment in the context of microarray data has been increasingly recognized. To our knowledge, there have been few, if any, statistical tools that explicitly incorporate the prior information of gene networks into classifier building. The main idea of this paper is to take full advantage of the biological observation that neighboring genes in a network tend to function together in biological processes and to embed this information into a formal statistical framework. We propose a network-based support vector machine for binary classification problems by constructing a penalty term from the F∞-norm being applied to pairwise gene neighbors with the hope to improve predictive performance and gene selection. Simulation studies in both low- and high-dimensional data settings as well as two real microarray applications indicate that the proposed method is able to identify more clinically relevant genes while maintaining a sparse model with either similar or higher prediction accuracy compared with the standard and the L1 penalized support vector machines. The proposed network-based support vector machine has the potential to be a practically useful classification tool for microarrays and other high-dimensional data.
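The pairwise penalty can be sketched as follows (an illustrative reduction, not the authors' implementation): for each edge (u, v) of the gene network, the F∞-norm term penalizes the larger of the two coefficient magnitudes, encouraging neighboring genes to enter or leave the model together.

```python
def finf_network_penalty(w, edges, lam=1.0):
    # w: dict mapping gene -> SVM coefficient; edges: gene-network edges.
    # Sum over edges of max(|w_u|, |w_v|), scaled by penalty weight lam.
    return lam * sum(max(abs(w[u]), abs(w[v])) for u, v in edges)
```

Because shrinking the smaller coefficient of an edge costs nothing, the penalty drives neighboring coefficients toward shared selection, which is the network effect the paper exploits.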

  20. Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images.

    Science.gov (United States)

    van Grinsven, Mark J J P; van Ginneken, Bram; Hoyng, Carel B; Theelen, Thomas; Sanchez, Clara I

    2016-05-01

    Convolutional neural networks (CNNs) are deep learning architectures that have pushed forward the state of the art in a range of computer vision applications and are increasingly popular in medical image analysis. However, training of CNNs is time-consuming and challenging. In medical image analysis tasks, the majority of training examples are easy to classify and therefore contribute little to the CNN learning process. In this paper, we propose a method to improve and speed up CNN training for medical image analysis tasks by dynamically selecting misclassified negative samples during training. Training samples are heuristically sampled based on classification by the current state of the CNN. Weights are assigned to the training samples, and informative samples are more likely to be included in the next CNN training iteration. We evaluated and compared our proposed method by training a CNN with (SeS) and without (NSeS) the selective sampling method. We focus on the detection of hemorrhages in color fundus images. A decrease in training time from 170 epochs to 60 epochs with an increase in performance, on par with two human experts, was achieved, with areas under the receiver operating characteristic curve of 0.894 and 0.972 on two data sets. The SeS CNN statistically outperformed the NSeS CNN on an independent test set.
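    The selective-sampling idea, weighting negatives by how wrong the current network is about them, can be sketched as follows (a simplified stand-in for the paper's scheme; the scores, labels, and weighting rule are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_sample(scores, labels, n_pick):
    """Pick the next batch of negatives with probability proportional to the
    current CNN output score: a high score on a true negative means the
    sample is still misclassified, hence informative."""
    neg = np.flatnonzero(labels == 0)
    p = scores[neg] / scores[neg].sum()
    return rng.choice(neg, size=n_pick, replace=False, p=p)

scores = np.array([0.90, 0.10, 0.80, 0.05, 0.70, 0.95])  # current CNN outputs
labels = np.array([1,    0,    0,    0,    0,    1])     # ground truth
batch = selective_sample(scores, labels, n_pick=2)       # biased toward 2 and 4
```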

  1. Fracture network, fluid pathways and paleostress at the Tolhuaca geothermal field

    Science.gov (United States)

    Pérez-Flores, Pamela; Veloso, Eugenio; Cembrano, José; Sánchez-Alfaro, Pablo; Lizama, Martín; Arancibia, Gloria

    2017-03-01

    In this study, we examine the fracture network of the Tolhuaca geothermal system, located in the Southern Andean volcanic zone, which may have acted as a pathway for the migration and ascent of deep-seated fluids under the far-field/local stress conditions of the area. We collected the orientation, slip data and mineralogical content of faults and veins recovered from a ca. 1000 m deep borehole (Tol-1) located on the NW flank of the Tolhuaca volcano. Tol-1 is a non-oriented, vertical borehole that recovered relatively young (50°) dips. The EW-striking veins are compatible with the calculated local stress field, whereas the NE-striking veins are compatible with the regional stress field, the morphological elongation of volcanic centers, alignments of flank vents and dike orientations. Our results demonstrate that the paleomagnetic methodology is reliable and useful for re-orienting vertical boreholes such as Tol-1. Furthermore, our data show that the bulk transpressional regional stress field has local variations to a tensional stress field within the NE-striking fault zone belonging to the Liquiñe-Ofqui Fault System, favoring the activation of both NW- and NE-striking pre-existing discontinuities, especially the latter, which are favorably oriented to open under the prevailing stress field. The vertical σ1 and NS-trending subhorizontal σ3 calculated in the TGS promote the activation of EW-striking extensional veins and both NE- and NW-striking hybrid faults, constituting a complex fluid pathway geometry extending to at least one kilometer depth.

  2. Exposure to radio frequency electromagnetic fields from wireless computer networks: duty factors of Wi-Fi devices operating in schools.

    Science.gov (United States)

    Khalid, M; Mee, T; Peyman, A; Addison, D; Calderon, C; Maslanyj, M; Mann, S

    2011-12-01

    The growing use of wireless local area networks (WLAN) in schools has prompted a study to investigate exposure to the radio frequency (RF) electromagnetic fields from Wi-Fi devices. International guidelines on limiting the adverse health effects of RF, such as those of ICNIRP, allow for time-averaging of exposure. Thus, as Wi-Fi signals consist of intermittent bursts of RF energy, it is important to consider the duty factors of devices in assessing the extent of exposure and compliance with guidelines. Using radio packet capture methods, the duty factor of Wi-Fi devices has been assessed in a sample of 6 primary and secondary schools during classroom lessons. For the 146 individual laptops investigated, the range of duty factors was from 0.02 to 0.91%, with a mean of 0.08% (SD 0.10%). The duty factors of access points from 7 networks ranged from 1.0% to 11.7% with a mean of 4.79% (SD 3.76%). Data gathered with transmit time measuring devices attached to laptops also showed similar results. Within the present limited sample, the ranges of duty factors from laptops and access points were found to be broadly similar for primary and secondary schools. Applying these duty factors to previously published results from this project, the maximum time-averaged power density from a laptop would be 220 μW m⁻² at a distance of 0.5 m, and the peak localised SAR predicted in the torso region of a 10-year-old child model, at 34 cm from the antenna, would be 80 μW kg⁻¹. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
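    Time-averaging under a duty factor is simple arithmetic: the mean exposure is the peak level scaled by the fraction of time the device transmits. A small sketch (the burst pattern and peak value are hypothetical numbers, chosen so the result matches the 220 μW m⁻² quoted above):

```python
def duty_factor(burst_durations_s, observation_s):
    """Fraction of the observation window during which the device transmits."""
    return sum(burst_durations_s) / observation_s

def time_averaged_power_density(peak_w_m2, duty):
    """Time-averaged power density = peak power density x duty factor."""
    return peak_w_m2 * duty

# Two 0.4 ms bursts per second give the paper's mean laptop duty factor, 0.08 %
duty = duty_factor([0.0004, 0.0004], 1.0)       # 0.0008
peak = 0.275                                    # W/m^2, hypothetical peak at 0.5 m
avg = time_averaged_power_density(peak, duty)   # 2.2e-4 W/m^2 = 220 uW/m^2
```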

  3. Energy-efficient data acquisition by adaptive sampling for wireless sensor networks

    NARCIS (Netherlands)

    Law, Y.W.; Chatterjea, Supriyo; Jin meifang, J.; Hanselmann, Thomas; Palaniswami, Marimuthu; Guizani, M.

    2009-01-01

    Wireless sensor networks (WSNs) are well suited for environment monitoring. However, some highly specialized sensors (e.g. hydrological sensors) have high power demands, and without due care they can exhaust the battery supply quickly. Taking measurements with this kind of sensor can also overwhelm

  4. Self-Organizing Maps Neural Networks Applied to the Classification of Ethanol Samples According to the Region of Commercialization

    Directory of Open Access Journals (Sweden)

    Aline Regina Walkoff

    2017-10-01

    Physicochemical analysis data were collected from 998 samples of automotive ethanol commercialized in the northern, midwestern and eastern regions of the state of Paraná. The data were presented to self-organizing map (SOM) neural networks, which classified them according to those regions. The best self-organizing map configuration had a 45 × 45 topology and 5000 training epochs, with a final learning rate of 6.7×10⁻⁴, a final neighborhood relationship of 3×10⁻² and a mean quantization error of 2×10⁻². This neural network provided a topological map depicting three separate groups, each corresponding to samples from the same region of commercialization. Four maps of weights, one for each parameter, were presented. The network established that pH was the most important variable for classification and electrical conductivity the least important. The self-organizing map application allowed the segmentation of ethanol samples, thereby identifying them according to the region of commercialization. DOI: http://dx.doi.org/10.17807/orbital.v9i4.982
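    A minimal self-organizing map with a decaying learning rate and Gaussian neighborhood, in the spirit of the classification above (the grid size, decay schedules and toy two-cluster data are illustrative, not the paper's 45 × 45 setup):

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=200, lr0=0.5, sigma0=3.0, seed=1):
    """Minimal SOM: competitive learning with a Gaussian neighbourhood and
    exponentially decaying learning rate / neighbourhood radius."""
    srng = np.random.default_rng(seed)
    h, w = grid
    weights = srng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)
        sigma = sigma0 * np.exp(-t / epochs)
        for x in data[srng.permutation(len(data))]:
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)    # best-matching unit
            dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            nb = np.exp(-dist2 / (2 * sigma**2))[..., None]  # neighbourhood kernel
            weights += lr * nb * (x - weights)
    return weights

def quantization_error(data, weights):
    """Mean distance from each sample to its best-matching unit."""
    d = np.linalg.norm(weights[None] - data[:, None, None, :], axis=3)
    return d.min(axis=(1, 2)).mean()

# Toy use: two tight clusters of "ethanol samples" in 3 features
rng = np.random.default_rng(2)
data = np.vstack([0.1 + 0.01 * rng.standard_normal((10, 3)),
                  0.9 + 0.01 * rng.standard_normal((10, 3))])
som = train_som(data, grid=(8, 8), epochs=100)
qe = quantization_error(data, som)
```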

  5. A Flexible Terminal Approach to Sampled-Data Exponentially Synchronization of Markovian Neural Networks With Time-Varying Delayed Signals.

    Science.gov (United States)

    Cheng, Jun; Park, Ju H; Karimi, Hamid Reza; Shen, Hao

    2017-08-02

    This paper investigates the problem of sampled-data (SD) exponential synchronization for a class of Markovian neural networks with time-varying delayed signals. Based on a tunable parameter and a convex combination computational method, a new approach, named the flexible terminal approach, is proposed to reduce the conservatism of delay-dependent synchronization criteria. SD subject to a stochastic sampling period is introduced to capture phenomena encountered in practice. Novel exponential synchronization criteria are derived by utilizing a uniform Lyapunov-Krasovskii functional and a suitable integral inequality. Finally, numerical examples are provided to show the usefulness and advantages of the proposed design procedure.

  6. A multi-site recycled tire crumb rubber characterization study: recruitment strategy and field sampling approach

    Science.gov (United States)

    Recently, concerns have been raised by the public about the safety of tire crumb rubber infill used in synthetic turf fields. In response, the 2016 Federal Research Action Plan on Recycled Tire Crumb Used on Playing Fields and Playgrounds (FRAP) was developed to examine key envir...

  7. Development of a Wireless Sensor Network for Distributed Measurement of Total Electric Field under HVDC Transmission Lines

    OpenAIRE

    Yong Cui; Jianxun Lv; Haiwen Yuan; Luxing Zhao; Yingyi Liu; Hao Yang

    2014-01-01

    A wireless sensor network-based distributed measurement system is designed for collecting and monitoring the electric field under the high voltage direct current (HVDC) transmission lines. The proposed system architecture is composed of a group of wireless nodes connected with electric field sensors and a base station. The electric field sensor based on Gauss’s law is elaborated and developed. For the design of wireless node, the ARM microprocessor and Zigbee radio frequency module are employ...

  8. Atmospheric carbon dioxide mixing ratios from the NOAA Climate Monitoring and Diagnostics Laboratory cooperative flask sampling network, 1967-1993

    Energy Technology Data Exchange (ETDEWEB)

    Conway, T.J.; Tans, P.P. [National Oceanic and Atmospheric Administration, Boulder, CO (United States); Boden, T.A. [Oak Ridge National Lab., TN (United States)

    1996-02-01

    This data report documents monthly atmospheric CO₂ mixing ratios and measurements obtained by analyzing individual flask air samples for the NOAA/CMDL global cooperative flask sampling network. Measurements include land-based sampling sites and shipboard measurements covering 14 latitude bands in the Pacific Ocean and South China Sea. Analysis of the NOAA/CMDL flask CO₂ database shows a long-term increase in atmospheric CO₂ mixing ratios since the late 1960s. This report describes how the samples are collected and analyzed and how the data are processed, defines the limitations and restrictions of the data, describes the contents and format of the data files, and provides tabular listings of the monthly carbon dioxide records.
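    The long-term increase in a flask record like this is typically summarized as a linear growth rate. A sketch with made-up illustrative numbers (not the actual NOAA/CMDL values):

```python
import numpy as np

def annual_growth_rate(years, co2_ppm):
    """Least-squares linear trend (ppm per year) of a flask CO2 record."""
    slope, _intercept = np.polyfit(years, co2_ppm, 1)
    return slope

# Hypothetical annual-mean-like values spanning the record's period
years = np.array([1968.0, 1975.0, 1982.0, 1989.0, 1993.0])
ppm = np.array([322.5, 331.0, 341.1, 352.8, 357.0])
rate = annual_growth_rate(years, ppm)   # roughly 1.4 ppm per year
```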

  9. Characterisation of radiation field for irradiation of biological samples at nuclear reactor-comparison of twin detector and recombination methods.

    Science.gov (United States)

    Golnik, N; Gryziński, M A; Kowalska, M; Meronka, K; Tulik, P

    2014-10-01

    The Central Laboratory for Radiological Protection is involved in a scientific project on biological dosimetry. The project includes irradiation of blood samples in the radiation fields of a nuclear reactor. A simple facility for the irradiation of biological samples has been prepared at a horizontal channel of the MARIA nuclear reactor at NCBJ in Poland. The radiation field, composed mainly of gamma radiation and thermal neutrons, has been characterised in terms of tissue kerma using the twin-detector technique and recombination chambers. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. The Data Transport Network: A Usenet-Based Approach For Data Retrieval From Remote Field Sites

    Science.gov (United States)

    Valentic, T. A.

    2005-12-01

    The Data Transport Network coordinates the collection of scientific data, instrument telemetry and post-processing for the delivery of real-time results over the Internet from instruments located at remote field sites with limited or unreliable network connections. The system was originally developed in 1999 for the distribution of large data sets collected by the radar, lidars and imagers at the NSF upper atmosphere research facility in Sondrestrom, Greenland. The system helped to mitigate disruptions in network connectivity and optimized transfers over the site's low-bandwidth satellite link. The core idea behind the system is to transfer data files as attachments in Usenet messages. The messages collected by a local news server are periodically transmitted to other servers on the Internet when link conditions permit. If the network goes down, data files continue to be stored locally and the server will periodically attempt to deliver the files for upwards of two weeks. Using this simple approach, the Data Transport Network is able to handle a large number of independent data streams from multiple instruments. Each data stream is posted into a separate news group. There are no limitations to the types of data files that can be sent and the system uses standard Internet protocols for encoding, accessing and transmitting files. A common framework allows for new data collection or processing programs to be easily integrated. The two-way nature of the communications also allows for data to be delivered to the site as well, a feature used for the remote control of instruments. In recent years, the Data Transport Network has been applied to small, low-power embedded systems. Coupled with satellite-based communications systems such as Iridium, these miniature Data Transport servers have found application in a number of remote instrument deployments in the Arctic. SRI's involvement as a team member in Veco Polar Resources, the NSF Office of Polar Programs Arctic

  11. Method Evaluation And Field Sample Measurements For The Rate Of Movement Of The Oxidation Front In Saltstone

    Energy Technology Data Exchange (ETDEWEB)

    Almond, P. M. [Savannah River Site (SRS), Aiken, SC (United States); Kaplan, D. I. [Savannah River Site (SRS), Aiken, SC (United States); Langton, C. A. [Savannah River Site (SRS), Aiken, SC (United States); Stefanko, D. B. [Savannah River Site (SRS), Aiken, SC (United States); Spencer, W. A. [Savannah River Site (SRS), Aiken, SC (United States); Hatfield, A. [Clemson University, Clemson, SC (United States); Arai, Y. [Clemson University, Clemson, SC (United States)

    2012-08-23

    The objective of this work was to develop and evaluate a series of methods and validate their capability to measure differences in oxidized versus reduced saltstone. Validated methods were then applied to samples cured under field conditions to simulate Performance Assessment (PA) needs for the Saltstone Disposal Facility (SDF). Four analytical approaches were evaluated using laboratory-cured saltstone samples. These methods were X-ray absorption spectroscopy (XAS), diffuse reflectance spectroscopy (DRS), chemical redox indicators, and thin-section leaching methods. XAS and thin-section leaching methods were validated as viable methods for studying oxidation movement in saltstone. Each method used samples that were spiked with chromium (Cr) as a tracer for oxidation of the saltstone. The two methods were subsequently applied to field-cured samples containing chromium to characterize the oxidation state of chromium as a function of distance from the exposed air/cementitious material surface.

  12. Flower development as an interplay between dynamical physical fields and genetic networks.

    Directory of Open Access Journals (Sweden)

    Rafael Ángel Barrio

    In this paper we propose a model to describe the mechanisms by which undifferentiated cells attain the gene configurations underlying cell fate determination during morphogenesis. Despite the complicated mechanisms that surely intervene in this process, it is clear that the fundamental fact is that cells obtain spatial and temporal information that biases their destiny. Our main hypothesis assumes that there is at least one macroscopic field that breaks the symmetry of space at a given time. This field provides the information required for the process of cell differentiation to occur by being dynamically coupled to a signal transduction mechanism that, in turn, acts directly upon the gene regulatory network (GRN) underlying cell-fate decisions within cells. We illustrate and test our proposal with a GRN model grounded on experimental data for cell fate specification during organ formation in early Arabidopsis thaliana flower development. We show that our model is able to recover the multigene configurations characteristic of sepal, petal, stamen and carpel primordial cells arranged in concentric rings, in a pattern similar to that observed during actual floral organ determination. Such a pattern is robust to alterations of the model parameters, and simulated failures predict altered spatio-temporal patterns that mimic those described for several mutants. Furthermore, simulated alterations in the physical fields predict a pattern equivalent to that found in Lacandonia schismatica, the only flowering species with central stamens surrounded by carpels.

  13. Near-Field Coupling Communication Technology For Human-Area Networking

    Directory of Open Access Journals (Sweden)

    Ryoji Nagai

    2012-12-01

    We propose a human-area networking technology that uses the surface of the human body as a data transmission path and uses near-field coupling TRXs. This technology aims to achieve a "touch and connect" form of communication and a new concept of "touch the world" by using a quasi-electrostatic field signal that propagates along the surface of the human body. This paper explains the principles underlying near-field coupling communication. Special attention has been paid to common-mode noise, since our communication system is strongly susceptible to it. We designed and made a common-mode choke coil and a transformer to act as common-mode noise filters to suppress common-mode noise. Moreover, we describe how we evaluated the quality of communication using a phantom model with the same electrical properties as the human body, and present the experimental results for the packet error rate (PER) as a function of the signal-to-noise ratio (SNR), both with the common-mode choke coil or the transformer and without them. Finally, we found that our system achieved a PER of less than 10⁻² in general office rooms using raised floors, which corresponds to the quality of communication demanded by communication services in ordinary office spaces.

  14. Single Nucleotide Polymorphism Genotyping and Distribution of Coxiella burnetii Strains from Field Samples in Belgium

    Science.gov (United States)

    Dal Pozzo, Fabiana; Renaville, Bénédicte; Martinelle, Ludovic; Renaville, Robert; Thys, Christine; Smeets, François; Kirschvink, Nathalie; Grégoire, Fabien; Delooz, Laurent; Czaplicki, Guy

    2015-01-01

    The genotypic characterization of Coxiella burnetii provides useful information about the strains circulating at the farm, region, or country level and may be used to identify the source of infection for animals and humans. The aim of the present study was to investigate the strains of C. burnetii circulating in caprine and bovine Belgian farms using a single nucleotide polymorphism (SNP) technique. Direct genotyping was applied to different sample types (bulk tank milk, individual milk, vaginal swab, fetal product, and air sample). Besides the well-known SNP genotypes, previously unreported ones were found in bovine and caprine samples, increasing the variability of the strains found in the two species in Belgium. Moreover, multiple genotypes were detected concurrently in caprine farms in different sampling years and in different sample types. Interestingly, certain SNP genotypes were detected in both bovine and caprine samples, raising the question of interspecies transmission of the pathogen. PMID:26475104

  15. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to state-of-the-art systems.
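    The tailored 0-1 knapsack step, selecting base classifiers under a budget, can be sketched with the standard dynamic program (the "value" and "cost" numbers are hypothetical; the paper's specific tailoring to the diversity/accuracy dilemma is not reproduced here):

```python
def knapsack_select(values, costs, budget):
    """0-1 knapsack via dynamic programming: choose base classifiers that
    maximize total 'accuracy value' subject to an integer 'cost' budget."""
    n = len(values)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]
            if costs[i - 1] <= b:
                dp[i][b] = max(dp[i][b], dp[i - 1][b - costs[i - 1]] + values[i - 1])
    chosen, b = [], budget               # back-track the chosen classifiers
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return dp[n][budget], chosen[::-1]

best, picked = knapsack_select(values=[6, 10, 12], costs=[1, 2, 3], budget=5)
# optimum: classifiers 1 and 2 (cost 2 + 3 = 5, value 10 + 12 = 22)
```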

  16. Pre-Mission Input Requirements to Enable Successful Sample Collection by a Remote Field/EVA Team

    Science.gov (United States)

    Cohen, B. A.; Young, K. E.; Lim, D. S.

    2015-01-01

    This paper is intended to evaluate the sample collection process with respect to sample characterization and decision making. In some cases, it may be sufficient to know whether a given outcrop or hand sample is the same as or different from previous sampling localities or samples. In other cases, it may be important to have more in-depth characterization of the sample, such as basic composition, mineralogy, and petrology, in order to effectively identify the best sample. Contextual field observations, in situ/handheld analysis, and backroom evaluation may all play a role in understanding field lithologies and their importance for return. For example, whether a rock is a breccia or a clast-laden impact melt may be difficult to determine from a single sample, but becomes clear as exploration of a field site puts it into context. The FINESSE (Field Investigations to Enable Solar System Science and Exploration) team is a new activity focused on a science and exploration field-based research program aimed at generating strategic knowledge in preparation for the human and robotic exploration of the Moon, near-Earth asteroids (NEAs) and Phobos and Deimos. We used the FINESSE field excursion to the West Clearwater Lake Impact structure (WCIS) as an opportunity to test factors related to sampling decisions. In contrast to other technology-driven NASA analog studies, the FINESSE WCIS activity is science-focused and, moreover, sampling-focused, with the explicit intent to return the best samples for geochronology studies in the laboratory. This specific objective effectively reduces the number of variables in the goals of the field test and enables a more controlled investigation of the role of the crewmember in selecting samples. We formulated one hypothesis to test: that providing details regarding the analytical fate of the samples (e.g. geochronology, XRF/XRD, etc.) to the crew prior to their traverse will result in samples that are more likely to meet specific analytical

  17. Wireless sensor network deployment for monitoring soil moisture dynamics at the field scale

    Science.gov (United States)

    Majone, B.; Bellin, A.; Filippi, E.; Ioriatti, L.; Martinelli, M.; Massa, A.; Toller, G.

    2009-12-01

    We describe a recent deployment of soil moisture and temperature sensors in an apple tree orchard aimed at exploring the interaction between soil moisture dynamics and plant physiology. The field is divided into three parcels with different constant irrigation rates. The deployment includes dendrometers, which monitor variations of the trunk diameter. The idea is to monitor continuously, and at small time steps, soil moisture dynamics, soil temperature and a parameter reflecting plant stress at the parcel scale, in order to better investigate the interaction between plant physiology and soil moisture dynamics. Other sensors monitoring plant physiology can be easily accommodated within the Wireless Sensor Network (WSN). The experimental site is an apple orchard of 5000 m² located at Cles, province of Trento, Italy, at an elevation of 640 m a.s.l. At this site about 1200 apple trees (cultivar Golden Delicious) are cultivated. The trees were planted in 2004 in north-south rows 3.5 m apart. The deployment consists of 27 locations connected by a multi-hop WSN, each equipped with 5 soil moisture sensors (EC-5 capacitance sensors, Decagon) at depths of 10, 20, 30, 50 and 80 cm, and a temperature sensor at a depth of 20 cm, for a total of 135 soil moisture and 27 temperature sensors. The proposed monitoring system is based on fully autonomous sensor nodes which allow both real-time and historic data management. The gathered data are organized in a database on a public web site. The sensor nodes are connected through an input/output interface to a WSN platform. The power supply consists of a solar panel able to provide 250 mA at 7 V and a 3 V DC/DC converter based on a dual-frequency high-efficiency switching regulator. Typical meteorological data are monitored with a weather station located approximately 100 m from the experimental site. Great care has been taken in calibrating the capacitance sensors both in the

  18. Discrete Network Modeling for Field-Scale Flow and Transport Through Porous Media

    National Research Council Canada - National Science Library

    Howington, Stacy

    1997-01-01

    .... Specifically, a stochastic, high-resolution, discrete network model is developed and explored for simulating macroscopic flow and conservative transport through macroscopic porous media. Networks...

  19. Boltzmann sampling for an XY model using a non-degenerate optical parametric oscillator network

    Science.gov (United States)

    Takeda, Y.; Tamate, S.; Yamamoto, Y.; Takesue, H.; Inagaki, T.; Utsunomiya, S.

    2018-01-01

    We present an experimental scheme for implementing multiple spins of a classical XY model using a non-degenerate optical parametric oscillator (NOPO) network. We built an NOPO network to simulate a one-dimensional XY Hamiltonian with 5000 spins and externally controllable effective temperatures. The XY spin variables in our scheme are mapped onto the phases of multiple NOPO pulses in a single ring cavity, and interactions between XY spins are implemented by mutual injections between NOPOs. We show that the steady-state distribution of the optical phases of such NOPO pulses is equivalent to the Boltzmann distribution of the corresponding XY model. Estimated effective temperatures converged to the set values, and the estimated temperatures and the mean energy exhibited good agreement with numerical simulations of the Langevin dynamics of the NOPO phases.
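    The claimed equivalence, phase dynamics whose stationary law is the XY Boltzmann distribution, can be sketched numerically with overdamped Langevin dynamics (the chain length, temperature and step size below are illustrative, not the experiment's parameters):

```python
import numpy as np

def xy_energy(theta, J=1.0):
    """1-D XY Hamiltonian with nearest-neighbour coupling (open chain)."""
    return -J * np.cos(np.diff(theta)).sum()

def langevin_xy(n_spins=200, J=1.0, T=0.5, dt=0.05, steps=20000, seed=3):
    """Overdamped Langevin dynamics of the spin phases; its stationary
    distribution is the Boltzmann distribution exp(-H/T) of the XY chain,
    mimicking the phase dynamics of the NOPO pulses."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_spins)
    for _ in range(steps):
        s = np.sin(np.diff(theta))   # sin(theta_{i+1} - theta_i) per bond
        torque = np.zeros(n_spins)
        torque[:-1] += J * s         # -dH/dtheta_i, bond to the right of i
        torque[1:] -= J * s          # -dH/dtheta_i, bond to the left of i
        theta = theta + dt * torque + np.sqrt(2.0 * T * dt) * rng.standard_normal(n_spins)
    return theta

theta = langevin_xy()
energy_per_bond = xy_energy(theta) / (len(theta) - 1)  # negative; -> -1 as T -> 0
```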

  20. Measuring complex behaviors of local oscillatory networks in deep brain local field potentials.

    Science.gov (United States)

    Huang, Yongzhi; Geng, Xinyi; Li, Luming; Stein, John F; Aziz, Tipu Z; Green, Alexander L; Wang, Shouyan

    2016-05-01

    Multiple oscillations emerging from the same neuronal substrate serve to construct a local oscillatory network. The network usually exhibits complex behaviors of rhythmic, balancing and coupling between the oscillations, and the quantification of these behaviors would provide valuable insight into organization of the local network related to brain states. An integrated approach to quantify rhythmic, balancing and coupling neural behaviors based upon power spectral analysis, power ratio analysis and cross-frequency power coupling analysis was presented. Deep brain local field potentials (LFPs) were recorded from the thalamus of patients with neuropathic pain and dystonic tremor. t-Test was applied to assess the difference between the two patient groups. The rhythmic behavior measured by power spectral analysis showed significant power spectrum difference in the high beta band between the two patient groups. The balancing behavior measured by power ratio analysis showed significant power ratio differences at high beta band to 8-20 Hz, and 30-40 Hz to high beta band between the patient groups. The coupling behavior measured by cross-frequency power coupling analysis showed power coupling differences at (theta band, high beta band) and (45-55 Hz, 70-80 Hz) between the patient groups. The study provides a strategy for studying the brain states in a multi-dimensional behavior space and a framework to screen quantitative characteristics for biomarkers related to diseases or nuclei. The work provides a comprehensive approach for understanding the complex behaviors of deep brain LFPs and identifying quantitative biomarkers for brain states related to diseases or nuclei. Copyright © 2016 Elsevier B.V. All rights reserved.
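    The "balancing behaviour" measure, a band-power ratio computed from the PSD, can be sketched with a Welch estimate (the synthetic LFP and band edges echo the bands mentioned above but are otherwise illustrative):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean Welch PSD in the band [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=1024)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].mean()

def power_ratio(x, fs, band_a, band_b):
    """'Balancing' measure: power in band_a relative to band_b."""
    return band_power(x, fs, *band_a) / band_power(x, fs, *band_b)

# Synthetic LFP: strong 25 Hz (high-beta) rhythm, weaker 10 Hz rhythm, noise
fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(4)
lfp = (2.0 * np.sin(2 * np.pi * 25 * t)
       + 0.5 * np.sin(2 * np.pi * 10 * t)
       + 0.3 * rng.standard_normal(t.size))
ratio = power_ratio(lfp, fs, (20, 30), (8, 20))   # high-beta vs 8-20 Hz
```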

  1. Nitrate leaching from a potato field using adaptive network-based fuzzy inference system

    DEFF Research Database (Denmark)

    Shekofteh, Hosein; Afyuni, Majid M; Hajabbasi, Mohammad-Ali

    2013-01-01

    The conventional methods of application of nitrogen fertilizers might be responsible for the increased nitrate concentration in groundwater of areas dominated by irrigated agriculture. Appropriate water and nutrient management strategies are required to minimize groundwater pollution...... of nitrate (NO3) leaching from a potato field under a drip fertigation system. In the first part of the study, a two-dimensional solute transport model was used to simulate nitrate leaching from a sandy soil with varying emitter discharge rates and fertilizer doses. The results from the modeling were used...... to train and validate an adaptive network-based fuzzy inference system (ANFIS) in order to estimate nitrate leaching. Two performance functions, namely mean absolute percentage error (MAPE) and correlation coefficient (R), were used to evaluate the adequacy of the ANFIS. Results showed that ANFIS can...
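    The two performance functions used to evaluate the ANFIS are standard and easy to state in code (the observed/simulated nitrate values below are made-up illustrative numbers):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def corrcoef(y_true, y_pred):
    """Pearson correlation coefficient R."""
    return np.corrcoef(y_true, y_pred)[0, 1]

obs = np.array([10.0, 20.0, 30.0, 40.0])   # hypothetical observed leaching
sim = np.array([11.0, 19.0, 33.0, 38.0])   # hypothetical ANFIS estimates
err = mape(obs, sim)       # 100 * mean(0.1, 0.05, 0.1, 0.05) = 7.5
r = corrcoef(obs, sim)
```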

  2. High-conductance states in a mean-field cortical network model

    DEFF Research Database (Denmark)

    Lerchner, Alexander; Ahmadi, Mandana; Hertz, John

    2004-01-01

    Measured responses from visual cortical neurons show that spike times tend to be correlated rather than exactly Poisson distributed. Fano factors vary and are usually greater than 1, indicating a tendency toward spikes being clustered. We show that this behavior emerges naturally in a balanced...... cortical network model with random connectivity and conductance-based synapses. We employ mean-field theory with correctly colored noise to describe temporal correlations in the neuronal activity. Our results illuminate the connection between two independent experimental findings: high-conductance states...... of cortical neurons in their natural environment, and variable non-Poissonian spike statistics with Fano factors greater than 1. (C) 2004 Elsevier B.V. All rights reserved....

  3. Multiobjective Optimization of Evacuation Routes in Stadium Using Superposed Potential Field Network Based ACO

    Directory of Open Access Journals (Sweden)

    Jialiang Kou

    2013-01-01

    The multiobjective evacuation route optimization problem is to find optimal evacuation routes for a group of evacuees under multiple evacuation objectives. To improve evacuation efficiency, we abstracted the evacuation zone as a superposed potential field network (SPFN), and we presented an SPFN-based ACO algorithm (SPFN-ACO) to solve this problem based on the proposed model. In the Wuhan Sports Center case, we compared the SPFN-ACO algorithm with the HMERP-ACO algorithm and a traditional ACO algorithm under three evacuation objectives, namely total evacuation time, total evacuation route length, and cumulative congestion degree. The experimental results show that the SPFN-ACO algorithm performs better than the HMERP-ACO and traditional ACO algorithms for solving the multiobjective evacuation route optimization problem.

  4. Field assessment of bacterial communities and total trihalomethanes: Implications for drinking water networks.

    Science.gov (United States)

    Montoya-Pachongo, Carolina; Douterelo, Isabel; Noakes, Catherine; Camargo-Valero, Miller Alonso; Sleigh, Andrew; Escobar-Rivera, Juan-Carlos; Torres-Lozada, Patricia

    2017-11-07

    Operation and maintenance (O&M) of drinking water distribution networks (DWDNs) in tropical countries simultaneously face the control of acute and chronic risks due to the presence of microorganisms and disinfection by-products, respectively. In this study, results from a detailed field characterization of microbiological, chemical and infrastructural parameters of a tropical-climate DWDN are presented. Water physicochemical parameters and the characteristics of the network were assessed to evaluate the relationship between abiotic and microbiological factors and their association with the presence of total trihalomethanes (TTHMs). Illumina sequencing of the bacterial 16S rRNA gene revealed significant differences in the composition of biofilm and planktonic communities. The highly diverse biofilm communities showed the presence of methylotrophic bacteria, which suggests the presence of methyl radicals such as THMs within this habitat. Microbiological parameters correlated with water age, pH, temperature and free residual chlorine. The results from this study are necessary to increase awareness of the O&M practices in DWDNs required to reduce biofilm formation and maintain appropriate microbiological and chemical water quality, in relation to biofilm detachment and DBP formation. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. EARLY DETECTION OF NEAR-FIELD TSUNAMIS USING UNDERWATER SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    L. E. Freitag

    2012-01-01

    Full Text Available We propose a novel approach for near-field tsunami detection, specifically for the area near the city of Padang, Indonesia. Padang is located on the western shore of Sumatra, directly across from the Mentawai segment of the Sunda Trench, where accumulated strain has not been released since the great earthquake of 1797. Consequently, the risk of a major tsunamigenic earthquake on this segment is high. Currently, no ocean-bottom pressure sensors are deployed in the Mentawai basin to provide a definitive tsunami warning for Padang. Timely warnings are essential to initiate evacuation procedures and minimize loss of human life. Our approach augments existing technology with a network of underwater sensors to detect tsunamis generated by an earthquake or landslide fast enough to provide at least 15 minutes of warning. Data from the underwater sensor network would feed into existing decision support systems that accept input from land and sea-based sensors and provide warning information to city and regional authorities.

  6. Modulation of Cortical-subcortical Networks in Parkinson’s Disease by Applied Field Effects

    Directory of Open Access Journals (Sweden)

    Christopher William Hess

    2013-09-01

    Full Text Available Studies suggest that endogenous field effects may play a role in neuronal oscillations and communication. Non-invasive transcranial electrical stimulation with low-intensity currents can also have direct effects on the underlying cortex as well as distant network effects. While Parkinson's disease (PD) is amenable to invasive neuromodulation in the basal ganglia by deep brain stimulation, techniques of non-invasive neuromodulation like transcranial direct current stimulation (tDCS) and transcranial alternating current stimulation (tACS) are being investigated as possible therapies. tDCS and tACS have the potential to influence the abnormal cortical-subcortical network activity that occurs in PD through sub-threshold changes in cortical excitability or through entrainment or disruption of ongoing rhythmic cortical activity. This may allow for the targeting of specific features of the disease involving abnormal oscillatory activity, as well as the enhancement of potential cortical compensation for basal ganglia dysfunction and modulation of cortical plasticity in neurorehabilitation. However, little is currently known about how cortical stimulation will affect subcortical structures, the size of any effect, and the factors of stimulation that will influence these effects.

  7. Continuous assessment of land mapping accuracy at High Resolution from global networks of atmospheric and field observatories -concept and demonstration

    Science.gov (United States)

    Sicard, Pierre; Martin-lauzer, François-regis

    2017-04-01

    In the context of global climate change and the design and implementation of adjustment/resilience policies, there is a need not only (i) for environmental monitoring, e.g. through a range of Earth Observation (EO) land "products", but also (ii) for a precise assessment of the uncertainties of the aforesaid information that feeds environmental decision-making (to be introduced in the EO metadata), and (iii) for a proper handling of the thresholds which help translate "environment tolerance limits" to match detected EO changes through ecosystem modelling. Insight into uncertainties means knowledge of precision and accuracy, and the subsequent ability to set thresholds for change detection systems. Traditionally, the validation of satellite-derived products has taken the form of intensive field campaigns to sanction the introduction of data processors in Payload Data Ground Segment chains. It is marred by logistical challenges and cost issues, which is why it is complemented by specific surveys at ground-based monitoring sites that can provide near-continuous observations at a high temporal resolution (e.g. RadCalNet). Unfortunately, most of the ground-level monitoring sites, numbering in the hundreds or thousands, which are part of wider observation networks (e.g. FLUXNET, NEON, IMAGINES), mainly monitor the state of the atmosphere and the radiation exchange at the surface, which are different from the products derived from EO data. In addition, they are "point-based" compared to the EO cover to be obtained from Sentinel-2 or Sentinel-3. Yet, data from these networks, processed by spatial extrapolation models, are well-suited to the bottom-up approach and relevant to the validation of the consistency of vegetation parameters (e.g. leaf area index, fraction of absorbed photosynthetically active radiation). Consistency means minimal errors on spatial and temporal gradients of EO products. 
Test of the procedure for land-cover products' consistency assessment with field measurements delivered by worldwide

  8. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

    We evaluate the hypothesis that risk preferences are stable over time using a remarkable data set combining administrative information from the Danish registry with longitudinal experimental data we designed to allow better identification of joint selection and attrition effects with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads...

  9. Graphene-based field effect transistor in two-dimensional paper networks

    Energy Technology Data Exchange (ETDEWEB)

    Cagang, Aldrine Abenoja; Abidi, Irfan Haider; Tyagi, Abhishek [Department of Chemical and Biomolecular Engineering, Hong Kong University of Science and Technology, Clear Water Bay (Hong Kong); Hu, Jie; Xu, Feng [Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an 710049 (China); The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049 (China); Lu, Tian Jian [Bioinspired Engineering and Biomechanics Center (BEBC), Xi'an Jiaotong University, Xi'an 710049 (China); Luo, Zhengtang, E-mail: keztluo@ust.hk [Department of Chemical and Biomolecular Engineering, Hong Kong University of Science and Technology, Clear Water Bay (Hong Kong)

    2016-04-21

    We demonstrate the fabrication of a graphene-based field effect transistor (GFET) incorporated in a two-dimensional paper network (2DPN) format. Paper serves as both a gate dielectric and an easy-to-fabricate vessel for holding the solution with the target molecules in question. The choice of paper enables a simpler alternative approach to the construction of a GFET device. The fabricated device is shown to behave similarly to a solution-gated GFET device, with electron and hole mobilities of ∼1256 cm² V⁻¹ s⁻¹ and ∼2298 cm² V⁻¹ s⁻¹, respectively, and a Dirac point around ∼1 V. When using solutions of ssDNA and glucose, it was found that the added molecules induce negative electrolytic gating effects, shifting the conductance minimum to the right, concurrent with increasing carrier concentrations, which results in an observed increase in current response correlated with the concentration of the solution used. - Highlights: • A graphene-based field effect transistor sensor was fabricated for two-dimensional paper network formats. • The constructed GFET on 2DPN was shown to behave similarly to solution-gated GFETs. • Electrolyte gating effects have a more prominent effect than adsorption effects on the behavior of the device. • The GFET incorporated on 2DPN was shown to yield a linear response to the presence of glucose and ssDNA soaked inside the paper.

  10. Sampling Migrants from their Social Networks: The Demography and Social Organization of Chinese Migrants in Dar es Salaam, Tanzania.

    Science.gov (United States)

    Merli, M Giovanna; Verdery, Ashton; Mouw, Ted; Li, Jing

    2016-07-01

    The streams of Chinese migration to Africa are growing in tandem with rising Chinese investments and trade flows in and to the African continent. In spite of the high profile of this phenomenon in the media, there are few rich and broad descriptions of Chinese communities in Africa. Reasons for this include the rarity of official statistics on foreign-born populations in African censuses, the absence of predefined sampling frames required to draw representative samples with conventional survey methods, and the difficulty of reaching certain segments of this population. Here, we use a novel network-based approach, Network Sampling with Memory, which overcomes the challenges of sampling 'hidden' populations in the absence of a sampling frame, to recruit a sample of recent Chinese immigrants in Dar es Salaam, Tanzania, and to collect information on the demographic characteristics, migration histories and social ties of members of this sample. These data reveal a heterogeneous Chinese community composed of "state-led" migrants who come to Africa to work on projects undertaken by large Chinese state-owned enterprises and "independent" migrants who come of their own accord to engage in various types of business ventures. They offer a rich description of the demographic profile and social organization of this community, highlight key differences between the two categories of migrants, and map the structure of the social ties linking them. We highlight the need for future research on inter-group differences in individual motivations for migration, economic activities, migration outcomes, expectations about future residence in Africa, social integration and relations with local communities.

  11. Representativeness-based sampling network design for the State of Alaska

    Science.gov (United States)

    Forrest M. Hoffman; Jitendra Kumar; Richard T. Mills; William W. Hargrove

    2013-01-01

    Resource and logistical constraints limit the frequency and extent of environmental observations, particularly in the Arctic, necessitating the development of a systematic sampling strategy to maximize coverage and objectively represent environmental variability at desired scales. A quantitative methodology for stratifying sampling domains, informing site selection,...

  12. Simulation of a Jackson tandem network using state-dependent importance sampling

    NARCIS (Netherlands)

    Miretskiy, D.I.; Scheinhardt, Willem R.W.; Mandjes, M.R.H.

    2008-01-01

    This paper considers importance sampling as a tool for rare-event simulation. The focus is on estimating the probability of overflow in the downstream queue of a Jackson two-node tandem queue. It is known that in this setting 'traditional' state-independent importance-sampling distributions perform
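
    The change of measure studied in this paper is state-dependent and specific to the two-node tandem; as a simpler illustration of why importance sampling is needed for rare overflow events at all, the sketch below estimates the overflow probability of a single birth-death queue using the classic state-independent twist (swapping arrival and service probabilities). All parameters are illustrative, not from the paper.

```python
import random

def overflow_prob_exact(p, B):
    """P(birth-death chain started at 1 hits B before 0), up-prob p, down-prob 1-p.
    Gambler's-ruin formula with ratio r = (1-p)/p."""
    r = (1 - p) / p
    return (r - 1) / (r**B - 1)

def overflow_prob_is(p, B, n_paths, rng):
    """Importance-sampling estimate: simulate with up/down probabilities swapped,
    so overflow becomes the typical event. Any path from 1 to B has exactly B-1
    more up-steps than down-steps, so its likelihood ratio is the constant
    (p/q)**(B-1); paths absorbed at 0 contribute nothing."""
    q = 1 - p
    lr = (p / q) ** (B - 1)
    hits = 0
    for _ in range(n_paths):
        x = 1
        while 0 < x < B:
            x += 1 if rng.random() < q else -1   # swapped dynamics: up with prob q
        if x == B:
            hits += 1
    return lr * hits / n_paths

p, B = 0.3, 10
print(overflow_prob_exact(p, B))                        # ~2.8e-4: too rare for naive simulation
print(overflow_prob_is(p, B, 20000, random.Random(1)))  # close to the exact value
```

    With the constant likelihood ratio, the estimator's variance comes only from the hit indicator, which is why this simple twist is efficient here; the paper's point is that in tandem networks the analogous state-independent twist can fail, motivating state-dependent schemes.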

  14. Field sampling and data analysis methods for development of ecological land classifications: an application on the Manistee National Forest.

    Science.gov (United States)

    George E. Host; Carl W. Ramm; Eunice A. Padley; Kurt S. Pregitzer; James B. Hart; David T. Cleland

    1992-01-01

    Presents technical documentation for development of an Ecological Classification System for the Manistee National Forest in northwest Lower Michigan, and suggests procedures applicable to other ecological land classification projects. Includes discussion of sampling design, field data collection, data summarization and analyses, development of classification units,...

  15. Extreme robustness of scaling in sample space reducing processes explains Zipf’s law in diffusion on directed networks

    Science.gov (United States)

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2016-09-01

    It has been shown recently that a specific class of path-dependent stochastic processes, which reduce their sample space as they unfold, lead to exact scaling laws in frequency and rank distributions. Such sample space reducing processes (SSRPs) offer an alternative new mechanism to understand the emergence of scaling in countless processes. The corresponding power law exponents were shown to be related to noise levels in the process. Here we show that the emergence of scaling is not limited to the simplest SSRPs, but holds for a huge domain of stochastic processes that are characterised by non-uniform prior distributions. We demonstrate mathematically that in the absence of noise the scaling exponents converge to -1 (Zipf’s law) for almost all prior distributions. As a consequence it becomes possible to fully understand targeted diffusion on weighted directed networks and its associated scaling laws in node visit distributions. The presence of cycles can be properly interpreted as playing the same role as noise in SSRPs and, accordingly, determine the scaling exponents. The result that Zipf’s law emerges as a generic feature of diffusion on networks, regardless of its details, and that the exponent of visiting times is related to the amount of cycles in a network could be relevant for a series of applications in traffic-, transport- and supply chain management.
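
    The mechanism described above can be checked in a few lines: in the simplest noise-free SSRP on states 1..N with uniform priors, a cascade starting at N visits state i with probability 1/i, so visit frequencies follow Zipf's law with exponent -1. A minimal simulation (parameters are illustrative):

```python
import random

def ssrp_visit_counts(n_states, n_cascades, rng):
    """Run a sample space reducing process: start at the top state and jump to a
    uniformly chosen strictly smaller state until state 1 is reached, then
    restart. Returns visit counts per state (index 0 unused)."""
    visits = [0] * (n_states + 1)
    for _ in range(n_cascades):
        x = n_states
        while x > 1:
            visits[x] += 1
            x = rng.randint(1, x - 1)    # the sample space shrinks at every step
        visits[1] += 1
    return visits

counts = ssrp_visit_counts(10, 200_000, random.Random(42))
# A cascade visits state i (for i below the top state) with probability 1/i,
# so the visit distribution is Zipfian with exponent -1:
print(counts[2] / counts[1])  # close to 1/2
print(counts[4] / counts[1])  # close to 1/4
```

    State 1 is visited once per cascade by construction, which makes it a convenient normaliser for the empirical visit probabilities.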

  16. Classification of wrought aluminum alloys by Artificial Neural Networks evaluation of Laser Induced Breakdown Spectroscopy spectra from aluminum scrap samples

    Science.gov (United States)

    Campanella, B.; Grifoni, E.; Legnaioli, S.; Lorenzetti, G.; Pagnotta, S.; Sorrentino, F.; Palleschi, V.

    2017-08-01

    Every year throughout the world > 50 million vehicles reach the end of their life, producing millions of tons of automotive waste. The current strategies for the separation of the non-ferrous waste fraction, containing mainly aluminum, magnesium, zinc and copper alloys, involve high investment and operational costs, and pose environmental concerns. The European project SHREDDERSORT, in which our research group was actively involved, aimed to overcome this issue by developing a new dry sorting technology for the shredding of non-ferrous automotive wastes. This work represents one step of the complex SHREDDERSORT project, dedicated to the development of a strategy based on Laser Induced Breakdown Spectroscopy (LIBS) for the sorting of light alloys. LIBS was applied here in the laboratory for the analysis of stationary aluminum shredder samples. To process the LIBS spectra, a methodological approach based on artificial neural networks was used. Although separation could in principle be based on simple emission line ratios, the neural network approach enables more reproducible results, which can accommodate the unavoidable signal variations due to the low intrinsic reproducibility of LIBS systems. The neural network separated samples into different clusters and estimated their elemental concentrations.

  17. Selecting Strategies to Reduce High-Risk Unsafe Work Behaviors Using the Safety Behavior Sampling Technique and Bayesian Network Analysis.

    Science.gov (United States)

    Ghasemi, Fakhradin; Kalatpour, Omid; Moghimbeigi, Abbas; Mohammadfam, Iraj

    2017-03-04

    High-risk unsafe behaviors (HRUBs) are known as the main cause of occupational accidents. Considering the financial and societal costs of accidents and the limitations of available resources, there is an urgent need for managing unsafe behaviors at workplaces. The aim of the present study was to find strategies for decreasing the rate of HRUBs using an integrated approach combining the safety behavior sampling technique and Bayesian network analysis. This was a cross-sectional study. The Bayesian network was constructed using a focus group approach. The required data were collected using safety behavior sampling, and the parameters of the network were estimated using the Expectation-Maximization algorithm. Using sensitivity analysis and belief updating, it was determined which factors had the highest influence on unsafe behavior. Based on the BN analyses, safety training was the most important factor influencing employees' behavior at the workplace. High-quality safety training courses can reduce the rate of HRUBs by about 10%. Moreover, the rate of HRUBs increased with decreasing employee age. The rate of HRUBs was higher in the afternoon and on the last days of the week. Among the investigated variables, training was the most important factor affecting the safety behavior of employees. By holding high-quality safety training courses, companies would be able to reduce the rate of HRUBs significantly.
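
    Safety behavior sampling, the observational technique used in this study, estimates the proportion of unsafe acts from repeated momentary observations of workers. The standard textbook sample-size calculation for such a survey (a general formula, not one stated in this record) can be sketched as:

```python
import math

def behavior_sample_size(p_hat, rel_precision, z=1.96):
    """Number of momentary observations needed so that the estimated proportion
    of unsafe behaviors p_hat is within +/- rel_precision * p_hat of the true
    value at ~95% confidence (z = 1.96). Classic N = z^2 p(1-p) / d^2 formula."""
    d = rel_precision * p_hat                 # absolute half-width of the interval
    return math.ceil(z**2 * p_hat * (1 - p_hat) / d**2)

# A pilot study suggesting ~20% of observed acts are unsafe, targeted at
# +/-10% relative precision (both numbers are illustrative):
print(behavior_sample_size(0.20, 0.10))   # 1537 observations
```

    The required number of observations grows quadratically as the precision target tightens, which is why behavior-sampling studies typically fix a relative precision (often 5-10%) rather than an absolute one.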

  18. Kinematic and Thermodynamic Study of a Shallow Hailstorm Sampled by the McGill Bistatic Multiple-Doppler Radar Network.

    Science.gov (United States)

    Protat, Alain; Zawadzki, Isztar; Caya, Alain

    2001-05-01

    In this paper, the authors examine the kinematic and thermodynamic characteristics of a shallow hailstorm sampled by the McGill bistatic multiple-Doppler radar network on 26 May 1997. This storm consists of two main shallow convective cells (depth less than 5 km) aligned along a SW-NE convective line propagating to the southeast. The authors also analyze the interactions between the two cells during the life cycle of the convective line. In particular it is shown that dynamic interactions play a major role in the intensification of the second cell. This storm is found to evolve in a manner that shares some characteristics with both multicell and supercell storms. A rotating updraft associated with a mesocyclone develops in the mature stage of the storm, which is characteristic of a supercell. However, the lack of a `vault' structure on the precipitation field, the relatively fast evolution of the cells, and other characteristics detailed henceforth seem to indicate that this storm only shares a few of the typical characteristics of supercells. Some morphological and thermodynamic similarities are found between this storm and recent numerical simulations of shallow supercell storms. While the first cell starts dissipating, a cold downward rear inflow is developing, which resembles the `rear-flank' downdraft documented in several numerical and observational studies of tornadic storms. This downdraft acts to intensify the updraft associated with the second cell and produces a precipitation overhang within which hail eventually forms. When this pocket of hail falls to the ground a bit later, it accelerates the low-level rear inflow that progressively cuts off the inflow ahead of the storm, leading to the progressive dissipation of the second cell. The physical processes involved in the evolution of rotation at low levels to midlevels within this storm are evaluated using the vorticity equation. It is shown that the time tendency of the positive and negative vertical

  19. Networked web-cameras monitor congruent seasonal development of birches with phenological field observations

    Science.gov (United States)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Kubin, Eero; Linkosalmi, Maiju; Melih Tanis, Cemal; Nadir Arslan, Ali

    2017-04-01

    Ecosystems' potential to provide services, e.g. to sequester carbon, is largely driven by the phenological cycle of vegetation. The timing of phenological events is required for understanding and predicting the influence of climate change on ecosystems and to support various analyses of ecosystem functioning. We established a network of cameras for automated monitoring of the phenological activity of vegetation in boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. In this study, we used cameras at 11 of these sites to investigate how well networked cameras detect the phenological development of birches (Betula spp.) along a latitudinal gradient. Birches are interesting focal species for the analyses as they are common throughout Finland. In our images, they often appear in small quantities among the dominant species. Here, we tested whether small scattered birch image elements allow reliable extraction of color indices and changes therein. We compared automatically derived phenological dates from these birch image elements to visually determined dates from the same image time series, and to independent observations recorded in the phenological monitoring network from the same region. Automatically extracted season start dates based on the change of the green color fraction in spring corresponded well with the visually interpreted start of season and field-observed budburst dates. During the declining season, the red color fraction turned out to be superior to green-color-based indices in predicting leaf yellowing and fall. The latitudinal gradients derived using automated phenological date extraction corresponded well with gradients based on phenological field observations from the same region. We conclude that even small and scattered birch image elements allow reliable extraction of key phenological dates for birch species. Devising cameras for species-specific analyses of phenological timing will be useful for
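
    The "green color fraction" used for spring detection is commonly computed as the green chromatic coordinate, G/(R+G+B), averaged over a region of interest, with the start of season taken as the first day the index rises clearly above an early-season baseline. A sketch under that assumption (the threshold, baseline window and synthetic series are illustrative, not the study's values):

```python
def gcc(pixels):
    """Mean green chromatic coordinate G/(R+G+B) over a region of interest,
    given an iterable of (R, G, B) tuples."""
    total = 0.0
    count = 0
    for r, g, b in pixels:
        total += g / (r + g + b)
        count += 1
    return total / count

def start_of_season(daily_gcc, baseline_days=10, rise=0.02):
    """First day whose GCC exceeds the early-season baseline by a fixed rise.
    Both the baseline window and the rise are illustrative constants."""
    baseline = sum(daily_gcc[:baseline_days]) / baseline_days
    for day, value in enumerate(daily_gcc):
        if value > baseline + rise:
            return day
    return None

# Synthetic daily series: a flat leafless baseline, then green-up from day 20.
series = [0.33] * 20 + [0.33 + 0.006 * k for k in range(1, 21)]
print(start_of_season(series))  # day 23: first value clearly above baseline
```

    Averaging the chromatic coordinate rather than raw green values normalises away overall brightness changes between days, which is what makes even small scattered birch regions usable.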

  20. An Energy Efficient Adaptive Sampling Algorithm in a Sensor Network for Automated Water Quality Monitoring.

    Science.gov (United States)

    Shu, Tongxin; Xia, Min; Chen, Jiahong; Silva, Clarence de

    2017-11-05

    Power management is crucial in the monitoring of a remote environment, especially when long-term monitoring is needed. Renewable energy sources such as solar and wind may be harvested to sustain a monitoring system. However, without proper power management, equipment within the monitoring system may become nonfunctional and, as a consequence, the data or events captured during the monitoring process will become inaccurate as well. This paper develops and applies a novel adaptive sampling algorithm for power management in the automated monitoring of the quality of water in an extensive and remote aquatic environment. Based on the data collected online using sensor nodes, a data-driven adaptive sampling algorithm (DDASA) is developed for improving the power efficiency while ensuring the accuracy of sampled data. The developed algorithm is evaluated using two distinct key parameters, namely dissolved oxygen (DO) and turbidity. It is found that by dynamically changing the sampling frequency, the battery lifetime can be effectively prolonged while maintaining a required level of sampling accuracy. According to the simulation results, compared to a fixed sampling rate, approximately 30.66% of the battery energy can be saved for three months of continuous water quality monitoring. Using the same dataset to compare with a traditional adaptive sampling algorithm (ASA), while achieving around the same Normalized Mean Error (NME), DDASA is superior in saving 5.31% more battery energy.
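
    The published DDASA details are not reproduced in this record; the core data-driven idea, however, is to lengthen the sampling interval while successive readings are stable and shorten it when the monitored parameter changes quickly. A minimal sketch with illustrative constants (not the authors' settings):

```python
def next_interval(current, prev_value, new_value,
                  threshold=0.5, min_s=60, max_s=900):
    """Adapt the sampling interval: halve it when the change between successive
    readings exceeds `threshold`, otherwise stretch it by 25% to save battery.
    All constants (threshold, bounds, growth factor) are illustrative."""
    if abs(new_value - prev_value) > threshold:
        return max(min_s, current // 2)      # dynamics detected: sample faster
    return min(max_s, int(current * 1.25))   # stable: back off and save power

interval = 300                               # seconds between samples
readings = [7.0, 7.05, 7.04, 5.9, 5.2, 5.15]  # e.g. dissolved oxygen, mg/L
for prev, new in zip(readings, readings[1:]):
    interval = next_interval(interval, prev, new)
    print(interval)
```

    On this synthetic trace the interval stretches while DO is flat, collapses during the drop, and begins recovering once the signal stabilises, which is the qualitative behaviour that yields the reported battery savings.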

  1. Adaptive Sampling-Based Information Collection for Wireless Body Area Networks

    Directory of Open Access Journals (Sweden)

    Xiaobin Xu

    2016-08-01

    Full Text Available To collect important health information, WBAN applications typically sense data at a high frequency. However, limited by the quality of the wireless link, the uploading of sensed data has an upper frequency limit. To reduce the upload frequency, most existing WBAN data collection approaches collect data with a tolerable error. These approaches can guarantee the precision of the collected data, but they are not able to ensure that the upload frequency stays within the limit. Some traditional sampling-based approaches can control the upload frequency directly; however, they usually incur a high loss of information. Since the core task of WBAN applications is to collect health information, this paper aims to collect optimized information under the limitation of upload frequency. The importance of sensed data is defined according to information theory for the first time. Information-aware adaptive sampling is proposed to collect uniformly distributed data. Then we propose Adaptive Sampling-based Information Collection (ASIC), which consists of two algorithms. An adaptive sampling probability algorithm is proposed to compute sampling probabilities of different sensed values. A multiple uniform sampling algorithm provides uniform samplings for values in different intervals. Experiments based on a real dataset show that the proposed approach has higher performance in terms of data coverage and information quantity. The parameter analysis shows the optimized parameter settings, and the discussion shows the underlying reason for the high performance of the proposed approach.
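
    One way to read the information-aware sampling idea above: a sensed value that occurs with frequency f carries self-information -log2(f), so rare readings should be sampled with higher probability than common ones. The sketch below is an interpretation of that principle, not the paper's exact algorithm; the floor value and normalisation are illustrative choices.

```python
import math
from collections import Counter

def sampling_probabilities(history, floor=0.05):
    """Assign each observed value a sampling probability proportional to its
    self-information -log2(f): frequent, low-information values are sampled
    sparsely, rare values almost always. `floor` is an illustrative minimum."""
    counts = Counter(history)
    n = len(history)
    info = {v: -math.log2(c / n) for v, c in counts.items()}
    top = max(info.values())
    if top == 0:                              # one distinct value: nothing is rare
        return {v: floor for v in info}
    return {v: max(floor, i / top) for v, i in info.items()}

# Synthetic readings binned into qualitative intervals:
history = ["normal"] * 90 + ["elevated"] * 9 + ["critical"] * 1
probs = sampling_probabilities(history)
print(probs["normal"])    # at the floor: common readings carry little information
print(probs["critical"])  # 1.0: the rarest reading is always kept
```

    Under an upload-frequency budget, such probabilities let the node drop mostly redundant "normal" samples while reliably forwarding the informative outliers.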

  2. A novel low-E field coil to minimize heating of biological samples in solid-state multinuclear NMR experiments

    Science.gov (United States)

    Dillmann, Baudouin; Elbayed, Karim; Zeiger, Heinz; Weingertner, Marie-Catherine; Piotto, Martial; Engelke, Frank

    2007-07-01

    A novel coil, called Z coil, is presented. Its function is to reduce the strong thermal effects produced by rf heating at high frequencies. The results obtained at 500 MHz in a 50 μl sample prove that the Z coil can cope with salt concentrations that are one order of magnitude higher than in traditional solenoidal coils. The evaluation of the rf field is performed by numerical analysis based on first principles and by carrying out rf field measurements. Reduction of rf heating is probed with a DMPC/DHPC membrane prepared in buffers of increasing salt concentrations. The intricate correlation that exists between the magnetic and electric field is presented. It is demonstrated that, in a multiply tuned traditional MAS coil, the rf electric field E1 cannot be reduced without altering the rf magnetic field. Since the detailed distribution differs when changing the coil geometry, a comparison involving the following three distinct designs is discussed: (1) a regular coil of 5.5 turns, (2) a variable pitch coil with the same number of turns, (3) the new Z coil structure. For each of these coils loaded with samples of different salt concentrations, the nutation fields obtained at a certain power level provide a basis to discuss the impact of the dielectric and conductive losses on the rf efficiency.

  4. Field Methods and Sample Collection Techniques for the Surveillance of West Nile Virus in Avian Hosts.

    Science.gov (United States)

    Wheeler, Sarah S; Boyce, Walter M; Reisen, William K

    2016-01-01

    Avian hosts play an important role in the spread, maintenance, and amplification of West Nile virus (WNV). Avian susceptibility to WNV varies from species to species thus surveillance efforts can focus both on birds that survive infection and those that succumb. Here we describe methods for the collection and sampling of live birds for WNV antibodies or viremia, and methods for the sampling of dead birds. Target species and study design considerations are discussed.

  5. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    ... subjects in high foot traffic environments, such as mass transit systems, stadiums, and large public events. In order to handle a potentially constant ... in [4]; however, a key difference in this work is the sampling scheme. As will be discussed, the presented design samples the scene on a uniform phase ... elementary unit of the design is the Boundary Array (BA) [6], a sparse array topology first used in ultrasonic sensing. This design employs four linear ...

  6. First field trial of Virtual Network Operator oriented network on demand (NoD) service provisioning over software defined multi-vendor OTN networks

    Science.gov (United States)

    Li, Yajie; Zhao, Yongli; Zhang, Jie; Yu, Xiaosong; Chen, Haoran; Zhu, Ruijie; Zhou, Quanwei; Yu, Chenbei; Cui, Rui

    2017-01-01

    A Virtual Network Operator (VNO) is a provider and reseller of network services from other telecommunications suppliers. These network providers are categorized as virtual because they do not own the underlying telecommunication infrastructure. In terms of business operation, VNOs can provide customers with personalized services by leasing network infrastructure from traditional network providers. The unique business modes of VNOs lead to the emergence of network on demand (NoD) services. Conventional network provisioning involves a series of manual operations and configurations, which makes provisioning slow and costly. Considering the advantages of Software Defined Networking (SDN), this paper proposes a novel NoD service provisioning solution to satisfy the private network needs of VNOs. The solution is first verified in real software defined multi-domain optical networks with multi-vendor OTN equipment. With the proposed solution, NoD services can be deployed via online web portals in near-real time. This reinvents the customer experience and redefines how network services are delivered, via an online self-service portal. Ultimately, a customer will be able to simply go online, click a few buttons and have new services almost instantaneously.

  7. Localization and Classification of Paddy Field Pests using a Saliency Map and Deep Convolutional Neural Network

    Science.gov (United States)

    Liu, Ziyi; Gao, Junfeng; Yang, Guoguo; Zhang, Huan; He, Yong

    2016-02-01

    We present a pipeline for the visual localization and classification of agricultural pest insects by computing a saliency map and applying deep convolutional neural network (DCNN) learning. First, we used a global contrast region-based approach to compute a saliency map for localizing pest insect objects. Bounding squares containing targets were then extracted, resized to a fixed size, and used to construct a large standard database called Pest ID. This database was then utilized for self-learning of local image features which were, in turn, used for classification by DCNN. DCNN learning optimized the critical parameters, including size, number and convolutional stride of local receptive fields, dropout ratio and the final loss function. To demonstrate the practical utility of using DCNN, we explored different architectures by shrinking depth and width, and found effective sizes that can act as alternatives for practical applications. On the test set of paddy field images, our architectures achieved a mean Average Precision (mAP) of 0.951, a significant improvement over previous methods.
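    The localization step above rests on a global-contrast saliency map. As a minimal sketch (a generic histogram-based global-contrast formulation, not the authors' exact region-based method), each pixel's saliency can be taken as its mean intensity distance to all other pixels:

```python
import numpy as np

def global_contrast_saliency(gray):
    """Histogram-based global-contrast saliency: a pixel is salient if its
    intensity differs, on average, from every other pixel in the image.
    A 256-bin histogram reduces the cost from O(N^2) to O(256^2)."""
    gray = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    hist /= hist.sum()
    levels = np.arange(256, dtype=float)
    dist = np.abs(levels[:, None] - levels[None, :])   # |i - j| for all level pairs
    level_saliency = dist @ hist                       # sum_j p(j) * |i - j|
    sal = level_saliency[gray]
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)   # normalise to [0, 1]

# A bright "pest" blob on a dark background is the most salient region.
img = np.zeros((64, 64), dtype=np.uint8)
img[20:40, 20:40] = 200
sal = global_contrast_saliency(img)
```

    Thresholding such a map gives candidate bounding squares, which the pipeline then resizes for the DCNN.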

  8. Optimization of potential field method parameters through networks for swarm cooperative manipulation tasks

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-10-01

    Full Text Available An interesting current research field related to autonomous robots is mobile manipulation performed by cooperating robots (in terrestrial, aerial and underwater environments). Focusing on the underwater scenario, cooperative manipulation of Intervention-Autonomous Underwater Vehicles (I-AUVs) is a complex and difficult application compared with terrestrial or aerial ones because of many technical issues, such as underwater localization and limited communication. A decentralized approach for cooperative mobile manipulation of I-AUVs based on Artificial Neural Networks (ANNs) is proposed in this article. This strategy exploits the potential field method; a multi-layer control structure is developed to manage the coordination of the swarm, the guidance and navigation of the I-AUVs, and the manipulation task. In the article, this new strategy has been implemented in a simulation environment, simulating the transportation of an object. The object is moved along a desired trajectory in an unknown environment, transported by four underwater mobile robots, each provided with a seven-degrees-of-freedom robotic arm. The simulation results are optimized thanks to the ANNs used for tuning the potentials.
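    The potential field method underlying the swarm controller can be sketched with the classic attractive/repulsive formulation. Here the gains `k_att` and `k_rep` and the influence radius `rho0` are fixed illustrative constants, whereas in the article such parameters are tuned by the ANNs:

```python
import numpy as np

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=1.0, rho0=2.0, step=0.05):
    """One gradient-descent step on an artificial potential field:
    quadratic attraction U_att = 0.5*k_att*||pos - goal||^2 plus a
    repulsive term active only within distance rho0 of each obstacle."""
    grad = k_att * (pos - goal)
    for obs in obstacles:
        diff = pos - obs
        rho = np.linalg.norm(diff)
        if 0.0 < rho < rho0:
            # gradient of 0.5*k_rep*(1/rho - 1/rho0)^2 with respect to pos
            grad += -k_rep * (1.0 / rho - 1.0 / rho0) / rho**2 * (diff / rho)
    return pos - step * grad

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
obstacles = [np.array([2.5, 0.3])]   # slightly off the straight-line path
for _ in range(400):
    pos = potential_step(pos, goal, obstacles)
```

    The robot is deflected below the obstacle and then converges to the goal; local-minimum traps are the known weakness that adaptive gain tuning aims to mitigate.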

  10. Selectivity and limitations of carbon sorption tubes for capturing siloxanes in biogas during field sampling.

    Science.gov (United States)

    Tansel, Berrin; Surita, Sharon C

    2016-06-01

    Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms, alternating in position, with hydrocarbon groups attached to the silicon side chain. Siloxanes can be either in cyclic (D) or linear (L) configuration and are referred to by a letter corresponding to their structure followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by gas chromatography-mass spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). Even when four tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). Affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining levels of low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed to adequately quantify the levels of the higher molecular weight siloxanes in biogas with sorbent tubes.
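    The effect of adding tubes in series can be illustrated with a toy breakthrough model; the per-tube capture fractions below are illustrative assumptions, not the paper's measured efficiencies:

```python
import math

def tubes_needed(single_tube_efficiency, target_capture=0.99):
    """Toy series-capture model: if each tube retains a fixed fraction f of
    the siloxane reaching it, n tubes in series retain 1 - (1 - f)**n.
    Return the smallest n that meets the capture target."""
    f = single_tube_efficiency
    breakthrough = 1.0 - target_capture
    return math.ceil(math.log(breakthrough) / math.log(1.0 - f))

n_light = tubes_needed(0.995)  # high-affinity light siloxane (e.g. TMS or L2)
n_heavy = tubes_needed(0.30)   # low-affinity heavy siloxane (e.g. D5)
```

    Under this model a light siloxane needs a single tube, while a low-affinity heavy siloxane needs an impractically long tube train, consistent with the study's finding that D5 escaped even four tubes in series.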

  11. Minimal BRDF Sampling for Two-Shot Near-Field Reflectance Acquisition

    DEFF Research Database (Denmark)

    Xu, Zexiang; Nielsen, Jannik Boll; Yu, Jiyang

    2016-01-01

    the condition-number alone performs poorly. We demonstrate practical near-field acquisition of BRDFs from only one or two input images. Our framework generalizes to configurations like a fixed camera setup, where we also develop a simple extension to spatially-varying BRDFs by clustering the materials....

  12. Representing the light field in finite three-dimensional spaces from sparse discrete samples

    NARCIS (Netherlands)

    Mury, A.A.; Pont, S.C.; Koenderink, J.J.

    2009-01-01

    We present a method for measurement and reconstruction of light fields in finite spaces. Using a custom-made device called a plenopter, we can measure spatially and directionally varying radiance distribution functions from a real-world scene up to their second-order spherical harmonics

  13. Impact of implementing ISO 9001:2008 standard on the Spanish Renal Research Network biobank sample transfer process.

    Science.gov (United States)

    Cortés, M Alicia; Irrazábal, Emanuel; García-Jerez, Andrea; Bohórquez-Magro, Lourdes; Luengo, Alicia; Ortiz-Arduán, Alberto; Calleros, Laura; Rodríguez-Puyol, Manuel

    2014-01-01

    ISO 9001:2008 certification of biobanks aims to improve the management of the processes performed, with two objectives: customer satisfaction and continuous improvement. This paper presents the impact of ISO 9001:2008 certification on the sample transfer process in a Spanish biobank specialising in kidney patient samples. The biobank experienced a large increase in the number of samples between 2009 (12,582 vials) and 2010 (37,042 vials). The biobank of the Spanish Renal Research Network (REDinREN), located at the University of Alcalá, has implemented ISO standard 9001:2008 for the effective management of human material given to research centres. Using surveys, we analysed two periods in the “sample transfer” process. During the first period, between 1-10-12 and 26-11-12 (8 weeks), minimal changes were made to correct isolated errors. In the second period, between 7-01-13 and 18-02-13 (6 weeks), we carried out general corrective actions. The identification of problems and implementation of corrective actions for certification allowed a 70% reduction in process execution time, a significant increase (200%) in the number of samples processed and a 25% improvement in the process. The increase in the number of samples processed was directly related to process improvement. The ISO 9001:2008 certification, obtained in July 2013, enabled an improvement of the REDinREN biobank processes, which increased quality and customer satisfaction.

  14. Sediment and radionuclide transport in rivers. Phase I: field sampling program during mean flow Cattaraugus and Buttermilk Creeks, New York

    Energy Technology Data Exchange (ETDEWEB)

    Ecker, R.M.; Onishi, Y.

    1979-08-01

    A field sampling program was conducted on Cattaraugus and Buttermilk Creeks, New York during November and December 1977 to investigate the transport of radionuclides in surface waters as part of a continuing program to provide data for application and verification of Pacific Northwest Laboratory's (PNL) sediment and radionuclide transport model, SERATRA. Suspended sediment, bed sediment, and water samples were collected during mean flow conditions over a 45 mile reach of stream channel. Radiological analysis of these samples included primarily gamma ray emitters; however, some plutonium, strontium, curium, and tritium analyses were also included. The principal gamma emitter found during the sampling program was ¹³⁷Cs where, in some cases, levels associated with the sand and clay size fractions of bed sediment exceeded 100 pCi/g. Elevated levels of ¹³⁷Cs and ⁹⁰Sr were found downstream of the Nuclear Fuel Services Center, an inactive plutonium reprocessing plant and low level nuclear waste disposal site. Based on radionuclide levels in upstream control stations, ¹³⁷Cs was the only radionuclide whose levels in the creeks downstream of the site could confidently be attributed to the site during this sampling program. This field sampling effort is the first of a three phase program to collect data during low, medium and high flow conditions.

  15. Field Sampling Plan for Closure of the Central Facilities Area Sewage Treatment Plant Lagoon 3 and Land Application Area

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Michael George [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-10-01

    This field sampling plan describes sampling of the soil/liner of Lagoon 3 at the Central Facilities Area Sewage Treatment Plant. The lagoon is to be closed, and samples obtained from the soil/liner will provide information to determine if Lagoon 3 and the land application area can be closed in a manner that is protective of human health and the environment. Samples collected under this field sampling plan will be compared to Idaho National Laboratory background soil concentrations. If the concentrations of constituents of concern exceed the background level, they will be compared to Comprehensive Environmental Response, Compensation, and Liability Act preliminary remediation goals and Resource Conservation and Recovery Act levels. If the concentrations of constituents of concern are lower than the background levels, Resource Conservation and Recovery Act levels, or the preliminary remediation goals, then Lagoon 3 and the land application area will be closed. If the Resource Conservation and Recovery Act levels and/or the Comprehensive Environmental Response, Compensation, and Liability Act preliminary remediation goals are exceeded, additional sampling and action may be required.

  16. Node-to-node field calibration of wireless distributed air pollution sensor network.

    Science.gov (United States)

    Kizel, Fadi; Etzion, Yael; Shafran-Nathan, Rakefet; Levy, Ilan; Fishbain, Barak; Bartonova, Alena; Broday, David M

    2018-02-01

    Low-cost air quality sensors offer high-resolution spatiotemporal measurements that can be used for air resources management and exposure estimation. Yet, such sensors require frequent calibration to provide reliable data, since even after a laboratory calibration they might not report correct values when they are deployed in the field, due to interference with other pollutants, as a result of sensitivity to environmental conditions and due to sensor aging and drift. Field calibration has been suggested as a means for overcoming these limitations, with the common strategy involving periodical collocations of the sensors at an air quality monitoring station. However, the cost and complexity involved in relocating numerous sensor nodes back and forth, and the loss of data during the repeated calibration periods make this strategy inefficient. This work examines an alternative approach, a node-to-node (N2N) calibration, where only one sensor in each chain is directly calibrated against the reference measurements and the rest of the sensors are calibrated sequentially one against the other while they are deployed and collocated in pairs. The calibration can be performed multiple times as a routine procedure. This procedure minimizes the total number of sensor relocations, and enables calibration while simultaneously collecting data at the deployment sites. We studied N2N chain calibration and the propagation of the calibration error analytically, computationally and experimentally. The in-situ N2N calibration is shown to be generic and applicable for different pollutants, sensing technologies, sensor platforms, chain lengths, and sensor order within the chain. In particular, we show that chain calibration of three nodes, each calibrated for a week, propagate calibration errors that are similar to those found in direct field calibration. Hence, N2N calibration is shown to be suitable for calibration of distributed sensor networks.
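    The N2N chain idea can be sketched in a small simulation: sensor 0 is regressed against a reference during collocation, and each later sensor is regressed against the corrected output of its predecessor. The gains, offsets and noise levels below are arbitrary stand-ins, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def raw_reading(true_x, gain, offset, noise=0.05):
    """Uncalibrated sensor response: linear distortion plus noise."""
    return gain * true_x + offset + rng.normal(0.0, noise, true_x.shape)

gains   = [1.3, 0.8, 1.1]     # hypothetical per-sensor gains
offsets = [4.0, -2.0, 1.5]    # hypothetical per-sensor offsets
cals = []                     # per-sensor (a, b): corrected = a * raw + b

# Sensor 0: collocation with the reference monitor.
x = rng.uniform(10, 100, 500)
cals.append(tuple(np.polyfit(raw_reading(x, gains[0], offsets[0]), x, 1)))

# Sensors 1, 2: collocated in pairs; the previous sensor's *corrected*
# output serves as the pseudo-reference.
for i in (1, 2):
    x = rng.uniform(10, 100, 500)
    a_p, b_p = cals[i - 1]
    pseudo_ref = a_p * raw_reading(x, gains[i - 1], offsets[i - 1]) + b_p
    cals.append(tuple(np.polyfit(raw_reading(x, gains[i], offsets[i]), pseudo_ref, 1)))

# Error propagated to the end of the chain stays close to the noise floor.
x = rng.uniform(10, 100, 1000)
a2, b2 = cals[2]
rmse = np.sqrt(np.mean((a2 * raw_reading(x, gains[2], offsets[2]) + b2 - x) ** 2))
```

    In this toy setting the end-of-chain error remains close to a single sensor's noise floor, mirroring the paper's finding that three-node chains behave like directly field-calibrated sensors.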

  17. Get the most out of blow hormones: validation of sampling materials, field storage and extraction techniques for whale respiratory vapour samples.

    Science.gov (United States)

    Burgess, Elizabeth A; Hunt, Kathleen E; Kraus, Scott D; Rolland, Rosalind M

    2016-01-01

    Studies are progressively showing that vital physiological data may be contained in the respiratory vapour (blow) of cetaceans. Nonetheless, fundamental methodological issues need to be addressed before hormone analysis of blow can become a reliable technique. In this study, we performed controlled experiments in a laboratory setting, using known doses of pure parent hormones, to validate several technical factors that may play a crucial role in hormone analyses. We evaluated the following factors: (i) practical field storage of samples on small boats during daylong trips; (ii) efficiency of hormone extraction methods; and (iii) assay interference of different sampler types (i.e. veil nylon, nitex nylon mesh and polystyrene dish). Sampling materials were dosed with mock blow samples of known mixed hormone concentrations (progesterone, 17β-estradiol, testosterone, cortisol, aldosterone and triiodothyronine), designed to mimic endocrine profiles characteristic of pregnant females, adult males, an adrenal glucocorticoid response or a zero-hormone control (distilled H₂O). Results showed that storage of samples in a cooler on ice preserved hormone integrity for at least 6 h (P = 0.18). All sampling materials and extraction methods yielded the correct relative patterns for all six hormones. However, veil and nitex mesh produced detectable assay interference (mean 0.22 ± 0.04 and 0.18 ± 0.03 ng/ml, respectively), possibly caused by some nylon-based component affecting antibody binding. Polystyrene dishes were the most efficacious sampler for accuracy and precision.

  18. Methodologies for measurement of transuranic elements in environmental samples and migration behavior of transuranic elements in paddy fields

    Energy Technology Data Exchange (ETDEWEB)

    Yamamoto, Masayoshi; Ueno, Kaori [Kanazawa Univ., Tatsunokuchi, Ishikawa (Japan). Low Level Radioactivity Lab.; Amano, Hikaru

    1996-02-01

    Methodologies for the measurement of transuranic elements in environmental samples and the migration behavior of transuranic elements in paddy fields are reviewed in this report. Long-lived transuranic elements in our environment are quite important because their effects on humans are prolonged. Migration analysis of long-lived transuranic elements in paddy fields is also quite important because rice is a staple food. For the measurement of transuranic elements in environmental samples, traditional chemical separation and purification plus alpha-ray spectrometric methods are reviewed along with mass spectrometric methods. For the estimation of the migration behavior of transuranic elements in paddy fields, experimental results from 1957 to 1989 in Japan are reviewed. Important findings are as follows. (1) The detection limit of transuranic elements for traditional chemical separation and purification plus alpha-ray spectrometric methods is about 0.2 mBq/sample (10,000 min counting). In contrast, the detection limit for mass spectrometric methods using high-resolution ICP-MS is 0.02 mBq/sample for ²³⁷Np. (2) Integrated deposits of ²³⁹,²⁴⁰Pu and ¹³⁷Cs in paddy field soils are 2-3 times higher on the Pacific Ocean side than on the Japan Sea side of Japan. (3) The apparent residence time of ²³⁷Np in paddy field soils was estimated to be in the range of 50-70 years, which is shorter than those of ²³⁹,²⁴⁰Pu and ¹³⁷Cs (100-140 years). (author) 54 refs.

  19. Experimental investigation of the acoustic anisotropy field in the sample with a stress concentrator

    Directory of Open Access Journals (Sweden)

    Aleksey I. Grishchenko

    2017-03-01

    Full Text Available The behavior of acoustic anisotropy and the longitudinal wave velocity under a multiaxial stress-strain state of a plate undergoing inelastic deformation has been studied experimentally. The plate had a stress concentrator in the form of a central hole. Results for several deformation levels, together with finite element analysis of the active stresses, are presented. Qualitative agreement between the calculated stress fields and the measured distributions of acoustic anisotropy was revealed. It was found that the maximum absolute magnitude of acoustic anisotropy occurred in the areas with the largest stresses near the concentrator. It is suggested that the non-uniform distribution of acoustic anisotropy in the material indicates possible stress concentration at the corresponding points.

  20. Sub-microanalysis of solid samples with near-field enhanced atomic emission spectroscopy

    Science.gov (United States)

    Wang, Xiaohua; Liang, Zhisen; Meng, Yifan; Wang, Tongtong; Hang, Wei; Huang, Benli

    2018-03-01

    A novel approach, which we have named near-field enhanced atomic emission spectroscopy (NFE-AES), is proposed by introducing a scanning tunnelling microscope (STM) system into a laser-induced breakdown spectroscopy (LIBS) setup. The near-field enhancement of a laser-illuminated tip is utilized to improve the lateral resolution tremendously. Using the hybrid arrangement, pure metal tablets were analyzed to verify the performance of NFE-AES both in atmosphere and in vacuum. Due to localized surface plasmon resonance (LSPR), the incident electromagnetic field is enhanced and confined at the apex of the tip, resulting in sub-micron scale ablation and elemental emission signal. We found that the signal-to-noise ratio (SNR) and the spectral resolution obtained under vacuum are better than those acquired under atmospheric conditions. The quantitative capability of NFE-AES was demonstrated by analyzing Al and Pb in a Cu matrix. Submicron-sized ablation craters were achieved by performing NFE-AES on a Si wafer with an Al film, and spectroscopic information from a crater of 650 nm diameter was successfully obtained. Owing to its high lateral resolution, NFE-AES imaging of micro-patterned Al lines on an integrated circuit of a SIM card was demonstrated at sub-micron lateral resolution. These results reveal the potential of the NFE-AES technique for sub-microanalysis of solids, opening an opportunity to map chemical composition at the sub-micron scale.

  1. Modular Serial Flow Through device for pulsed electric field treatment of the liquid samples.

    Science.gov (United States)

    Kandušer, Maša; Belič, Aleš; Čorović, Selma; Škrjanc, Igor

    2017-08-14

    In biotechnology, medicine, and food processing, simple and reliable methods for cell membrane permeabilization are required for drug/gene delivery into cells or for the inactivation of undesired microorganisms. Pulsed electric field treatment is among the most promising methods for both aims. The drawback of current technology is the difficulty of controllable large-volume operation. To address this challenge, we have developed an experimental setup for flow-through electroporation with online, feedback-controlled regulation of the flow rate. We have designed a modular serial flow-through co-linear chamber with a smooth inner surface, a uniform cross-section geometry through the majority of the system's length, and a mesh in contact with the electrodes, which provides uniform electric field distribution and fluid velocity equilibration. The cylindrical cross-section of the chamber prevents arcing in the active treatment region. We used mathematical modeling to evaluate the electric field distribution and the flow profile in the active region. The system was tested for the inactivation of Escherichia coli. We compared two flow-through chambers and used a static chamber as a reference. The experiments were performed under identical experimental conditions (same product and similar process parameters). The data were analyzed in terms of inactivation efficiency and specific energy consumption.

  2. Numerical assessment of low-frequency dosimetry from sampled magnetic fields.

    Science.gov (United States)

    Freschi, Fabio; Giaccone, Luca; Cirimele, Vincenzo; Canova, Aldo

    2017-11-08

    Low-frequency dosimetry is commonly assessed by evaluating the electric field in the human body using the scalar potential finite difference method. This method is effective only when the sources of the magnetic field are completely known and the magnetic vector potential can be analytically computed. The aim of the paper is to present a rigorous method to characterize the source term when only the magnetic flux density is available at discrete points, e.g. in the case of field measurements. The method is based on the solution of the discrete magnetic curl equation. The system is restricted to the independent set of magnetic fluxes and circulations of the magnetic vector potential using the topological information of the computational mesh. The solenoidality of the magnetic flux density is preserved using a divergence-free interpolator based on vector radial basis functions. The analysis of a benchmark problem shows that the complexity of the proposed algorithm is linearly dependent on the number of elements, with controllable accuracy. The method also proves to be useful and effective when applied to a real-world scenario, where the magnetic flux density is measured in proximity to a power transformer. An 8-million-voxel body model is then used for the numerical dosimetric analysis. The complete assessment is finished in less than 5 minutes, which is more than acceptable for these problems.

  3. Field results for line intersect distance sampling of coarse woody debris

    Science.gov (United States)

    David L. R. Affleck

    2009-01-01

    A growing recognition of the importance of downed woody materials in forest ecosystem processes and global carbon budgets has sharpened the need for efficient sampling strategies that target this resource. Often the aggregate volume, biomass, or carbon content of the downed wood is of primary interest, making recently developed probability proportional-to-volume...
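    Although the abstract is truncated, the classic line-intersect volume estimator (Van Wagner's formula) illustrates the kind of probability-proportional-to-volume estimate involved; the example transect below is made up:

```python
import math

def lis_volume_per_ha(diameters_cm, transect_length_m):
    """Van Wagner's line-intersect estimator: downed-wood volume per unit
    area is (pi^2 / (8 L)) * sum(d_i^2), with intersection diameters d_i
    and transect length L in metres; the 1e4 factor converts m^3 per m^2
    to m^3 per hectare."""
    s = sum((d / 100.0) ** 2 for d in diameters_cm)    # cm -> m, squared
    return math.pi ** 2 / (8.0 * transect_length_m) * s * 1e4

# Ten pieces of 10 cm diameter crossed along a 100 m transect:
v = lis_volume_per_ha([10.0] * 10, 100.0)
```

    Only the diameter at the crossing point is measured, which is what makes each piece's inclusion probability proportional to its size.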

  4. DSM-5 field trials in the United States and Canada, Part I: study design, sampling strategy, implementation, and analytic approaches.

    Science.gov (United States)

    Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C

    2013-01-01

    This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients were observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided “real-world” testing of DSM-5 proposed diagnoses.
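    For illustration, plain (unstratified) Cohen's kappa on a toy test-retest sample shows the kind of chance-corrected agreement the trials report; the field trials themselves used an intraclass kappa for stratified samples, which this sketch does not implement, and the diagnoses below are invented:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each interview's marginal frequencies."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n             # observed
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / n**2               # chance
    return (po - pe) / (1 - pe)

interview1 = ["MDD", "MDD", "PTSD", "GAD", "MDD", "PTSD"]
interview2 = ["MDD", "GAD", "PTSD", "GAD", "MDD", "MDD"]
k = cohens_kappa(interview1, interview2)
```

    Here 4 of 6 diagnoses agree (po = 0.667) but chance agreement is substantial (pe = 0.361), so kappa lands near 0.48.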

  5. Social networks of men who have sex with men: a study of recruitment chains using Respondent Driven Sampling in Salvador, Bahia State, Brazil

    Directory of Open Access Journals (Sweden)

    Sandra Mara Silva Brignol

    2015-11-01

    Full Text Available Abstract Social and sexual contact networks between men who have sex with men (MSM) play an important role in understanding the transmission of HIV and other sexually transmitted infections (STIs). In Salvador (Bahia State, Brazil), one of the cities in the survey Behavior, Attitudes, Practices, and Prevalence of HIV and Syphilis among Men Who Have Sex with Men in 10 Brazilian Cities, data were collected in 2008/2009 from a sample of 383 MSM using Respondent Driven Sampling (RDS). Network analysis was used to study friendship networks and sexual partner networks. The study also focused on the association between the number of links (degree) and the number of sexual partners, in addition to socio-demographic characteristics. The networks’ structure potentially facilitates HIV transmission. However, the same networks can also be used to spread messages on STI/HIV prevention, since the proximity and similarity of MSM in these networks can encourage behavior change and positive attitudes towards prevention.
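    A standard way to turn an RDS sample into a population estimate is the degree-weighted RDS-II (Volz-Heckathorn) estimator; a toy version with made-up outcomes and degrees looks like this:

```python
def rds_ii_prevalence(outcomes, degrees):
    """RDS-II estimator: weight each respondent by the inverse of their
    reported network degree, correcting for the higher inclusion
    probability of well-connected participants."""
    w = [1.0 / d for d in degrees]
    return sum(wi * yi for wi, yi in zip(w, outcomes)) / sum(w)

# Toy sample: positive respondents (1) here report larger networks,
# so the naive prevalence overstates the degree-adjusted estimate.
outcomes = [1, 1, 1, 0, 0, 0, 0, 0]
degrees  = [20, 25, 10, 5, 4, 8, 5, 2]
naive = sum(outcomes) / len(outcomes)        # 0.375
adj = rds_ii_prevalence(outcomes, degrees)   # about 0.13
```

    The correction matters precisely because RDS recruits through the social links the study analyses: high-degree members are oversampled.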

  6. Functional approximations to posterior densities: a neural network approach to efficient sampling

    NARCIS (Netherlands)

    L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)

    2002-01-01

    The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate
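    The sensitivity of importance sampling to the candidate density can be illustrated with a self-normalized estimator for a standard normal target; the shifted candidate below is a stand-in for an imperfect functional approximation to the posterior:

```python
import math
import random

random.seed(1)

def importance_mean(shift, n=200_000):
    """Self-normalized importance-sampling estimate of E[X] for a standard
    normal target, drawing from a N(shift, 1) candidate. The further the
    candidate sits from the target, the larger the weight variance."""
    total_w = total_wx = 0.0
    for _ in range(n):
        x = random.gauss(shift, 1.0)
        # unnormalized weight: N(0,1) density over N(shift,1) density
        w = math.exp(-0.5 * x * x + 0.5 * (x - shift) ** 2)
        total_w += w
        total_wx += w * x
    return total_wx / total_w

good = importance_mean(0.0)   # candidate equals target
ok = importance_mean(1.0)     # modest mismatch: still accurate, noisier
```

    With a larger shift the weights become heavy-tailed and the estimate degrades, which is the motivation for fitting a good functional approximation (e.g. a neural network) to the posterior before sampling.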

  7. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    Directory of Open Access Journals (Sweden)

    Wilten Nicola

    2016-02-01

    Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders, which are obtained by solving an optimization problem requiring a large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low-dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well-known dynamical systems such as the neural integrator, the Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
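    The least-squares decoder computation, the step the paper replaces with analytic scale-invariant decoders, can be sketched for rectified-linear tuning curves; all tuning-curve parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# N heterogeneous rectified-linear "tuning curves" a_i(x) = max(0, g_i*e_i*x + b_i)
N = 50
x = np.linspace(-1.0, 1.0, 200)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(0.0, x[:, None] * (gains * encoders) + biases)   # rates, shape (200, N)

# Decoders: least-squares weights so that A @ d approximates f(x) = x**2.
target = x ** 2
d, *_ = np.linalg.lstsq(A, target, rcond=None)
rmse = np.sqrt(np.mean((A @ d - target) ** 2))
```

    In the full NEF, recurrent weights are then formed from products of encoders and decoders; the paper's contribution is obtaining `d` in closed form instead of via this numerical inversion.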

  8. A Simplified Multiband Sampling and Detection Method Based on MWC Structure for Mm Wave Communications in 5G Wireless Networks

    Directory of Open Access Journals (Sweden)

    Min Jia

    2015-01-01

    Full Text Available Millimeter wave (mm wave) communications have been proposed as an important part of the 5G mobile communication networks, but they bring new difficulties to signal processing, especially signal sampling, and place greater demands on hardware devices. In this paper, we present a simplified sampling and detection method based on the MWC structure, using the idea of blind source separation, for mm wave communications; it avoids the signal-sampling challenges posed by the high frequencies and wide bandwidths of mm wave systems. The proposed method takes full advantage of beneficial spectrum aliasing to achieve signal sampling at a sub-Nyquist rate. Compared with the traditional MWC system, it provides the exact number of sampling channels required, which is far lower than that of the MWC. In the reconstruction stage, the proposed method reduces computational complexity by exploiting simple linear operations instead of CS recovery algorithms and provides more stable signal recovery. Moreover, the MWC structure can be applied to the different bands used in mm wave communications through mixed processing, which is similar to spread spectrum technology.
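The "beneficial spectrum aliasing" that such sub-Nyquist samplers exploit can be shown with a toy calculation (this is not the MWC system itself; the tone frequency and sampling rate below are invented): a narrowband signal at f0, sampled at a rate fs far below Nyquist, folds to the predictable baseband frequency |f0 - round(f0/fs)*fs|.

```python
import numpy as np

# Illustration of controlled aliasing: a 90 Hz tone sampled at only 40 Hz
# (well below the 180 Hz Nyquist rate) folds down to |90 - 2*40| = 10 Hz.
fs = 40.0                      # sub-Nyquist sampling rate, Hz (assumed)
f0 = 90.0                      # carrier of the sparse band, Hz (assumed)
n = np.arange(512)
x = np.cos(2 * np.pi * f0 * n / fs)   # samples of the high-frequency tone

spec = np.abs(np.fft.rfft(x))
f_axis = np.fft.rfftfreq(n.size, d=1.0 / fs)
f_alias = f_axis[np.argmax(spec)]
print(f"tone at {f0} Hz appears at {f_alias:.1f} Hz after sampling at {fs} Hz")
```

Because the fold location is predictable, a sparse high-frequency spectrum can be recovered from the low-rate samples, which is the principle the MWC-based scheme builds on.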

  9. Polymorphism in the Plasmodium vivax msp 3alpha gene in field samples from Tierralta, Colombia.

    Science.gov (United States)

    Cristiano, Fabio Aníbal; Pérez, Manuel Alberto; Nicholls, Rubén Santiago; Guerra, Angela Patricia

    2008-08-01

    We evaluated Plasmodium vivax polymorphism by studying the Pvmsp-3alpha gene's polymorphic region by PCR-RFLP in 55 samples from patients living in Tierralta, Colombia. Three different sizes of the Pvmsp-3alpha gene were found: type A (1,900 bp), type B (1,500 bp) and type C (1,100 bp); most of the samples were type A (96.4%). The Pvmsp-3alpha gene exhibited high polymorphism. Seven restriction patterns were found when using Alu I, and nine were found with Hha I; 12 different alleles were obtained when these patterns were combined. The findings suggest that this gene could be used in Colombia as a molecular epidemiologic marker for genotyping P. vivax.

  10. Laboratory and field testing of bednet traps for mosquito (Diptera: Culicidae) sampling in West Java, Indonesia.

    Science.gov (United States)

    Stoops, Craig A; Gionar, Yoyo R; Rusmiarto, Saptoro; Susapto, Dwiko; Andris, Heri; Elyazar, Iqbal R F; Barbara, Kathryn A; Munif, Amrul

    2010-06-01

    Surveillance of medically important mosquitoes is critical to determine the risk of mosquito-borne disease transmission. The purpose of this research was to test self-supporting, exposure-free bednet traps to survey mosquitoes. In the laboratory we tested human-baited and unbaited CDC light trap/cot bednet (CDCBN) combinations against three types of traps: the Mbita Trap (MBITA), a Tent Trap (TENT), and a modified Townes-style Malaise trap (TSM). In the laboratory, 16 runs comparing the MBITA, TSM, and TENT to the CDCBN were conducted for a total of 48 runs of the experiment, using 13,600 mosquitoes. The TENT trap collected significantly more mosquitoes than the CDCBN. The CDCBN collected significantly more than the MBITA, and there was no difference between the TSM and the CDCBN. Two field trials were conducted in Cibuntu, Sukabumi, West Java, Indonesia. The first test compared human-baited and unbaited CDCBN, TENT, and TSM traps during six nights over two consecutive weeks per month from January 2007 to September 2007 for a total of 54 trap nights. A total of 8,474 mosquitoes representing 33 species were collected using the six trapping methods. The baited TENT trap collected significantly more mosquitoes than both the CDCBN and the TSM. The second field trial was a comparison of the baited and unbaited TENT and CDCBN traps and Human Landing Collections (HLCs). The trial was carried out from January 2008 to May 2008 for a total of 30 trap nights. A total of 11,923 mosquitoes were collected, representing 24 species. Human Landing Collections captured significantly more mosquitoes than either the TENT or the CDCBN. The baited and unbaited TENT collected significantly more mosquitoes than the CDCBN. The TENT trap was found to be an effective, light-weight substitute for the CDC light-trap/bednet combination in the field and should be considered for use in surveys of mosquito-borne diseases such as malaria, arboviruses, and filariasis.

  11. Metal Residue Deposition from Military Pyrotechnic Devices and Field Sampling Guidance

    Science.gov (United States)

    2012-05-01

    Clausen, Jay L.; Richardson, Julie; Korte, Nic; Perron, Nancy; Taylor, Susan; Bednar, Anthony; Bray, Andrew; Tuminello, Patricia; Jones, William; Tazik, Shawna

    ... in a warm water bath. Each sample was vacuum-filtered through a Whatman glass microfiber grade GF/A 1.6 µm filter. Several filters were required ...

  12. Drilling, Sampling, and Well-Installation Plan for the IFC Well Field, 300 Area

    Energy Technology Data Exchange (ETDEWEB)

    Bjornstad, Bruce N.; Horner, Jacob A.

    2008-05-05

    The 300 Area was selected as a location for an IFC because it offers excellent opportunities for field research on the influence of mass-transfer processes on uranium in the vadose zone and groundwater. The 300 Area was the location of nuclear fuel fabrication facilities and has more than 100 waste sites. Two of these waste sites, the North and South Process Ponds, received large volumes of process waste from 1943 to 1975 and are thought to represent a significant source of the groundwater uranium plume in the 300 Area. Geophysical surveys and other characterization efforts have led to selection of the South Process Pond for the IFC.

  13. Quantitative analysis of steel samples using laser-induced breakdown spectroscopy with an artificial neural network incorporating a genetic algorithm.

    Science.gov (United States)

    Li, Kuohu; Guo, Lianbo; Li, Jiaming; Yang, Xinyan; Yi, Rongxing; Li, Xiangyou; Lu, Yongfeng; Zeng, Xiaoyan

    2017-02-01

    In this work, a genetic algorithm (GA) was employed to select the intensity ratios of the spectral lines belonging to the target and matrix elements; these selected line-intensity ratios were then taken as inputs to construct an analysis model based on an artificial neural network (ANN) to analyze the elements copper (Cu) and vanadium (V) in steel samples. The results revealed that the root mean square errors of prediction (RMSEPs) for Cu and V can reach 0.0040 wt. % and 0.0039 wt. %, respectively. Compared to the 0.0190 wt. % and 0.0201 wt. % of the conventional internal calibration approach, the reductions in RMSEP reached 78.9% and 80.6%, respectively. These results indicate that a GA combined with an ANN can perform quantitative analysis of steel samples in laser-induced breakdown spectroscopy very well and further improve analytical accuracy.
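The quoted reduction rates follow directly from the RMSEP values reported in the abstract; a quick check of the arithmetic:

```python
# Reproducing the reduction-rate arithmetic from the reported RMSEP values:
# reduction = (RMSEP_internal - RMSEP_GA_ANN) / RMSEP_internal * 100
rmsep = {"Cu": (0.0190, 0.0040), "V": (0.0201, 0.0039)}  # wt. %
for element, (internal, ga_ann) in rmsep.items():
    reduction = (internal - ga_ann) / internal * 100
    print(f"{element}: {reduction:.1f}% RMSEP reduction")
# Cu: 78.9%, V: 80.6%, matching the values quoted above.
```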

  14. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    DEFF Research Database (Denmark)

    Mazzoni, Alberto; Linden, Henrik; Cuntz, Hermann

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local...... time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations...... point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g, firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected...

  15. Gene network and familial analyses uncover a gene network involving Tbx5/Osr1/Pcsk6 interaction in the second heart field for atrial septation.

    Science.gov (United States)

    Zhang, Ke K; Xiang, Menglan; Zhou, Lun; Liu, Jielin; Curry, Nathan; Heine Suñer, Damian; Garcia-Pavia, Pablo; Zhang, Xiaohua; Wang, Qin; Xie, Linglin

    2016-03-15

    Atrial septal defects (ASDs) are a common human congenital heart disease (CHD) that can be induced by genetic abnormalities. Our previous studies have demonstrated a genetic interaction between Tbx5 and Osr1 in the second heart field (SHF) for atrial septation. We hypothesized that Osr1 and Tbx5 share a common signaling network and downstream targets for atrial septation. To identify this molecular network, we acquired RNA-Seq transcriptome data from the posterior SHF of wild-type, Tbx5(+/-), Osr1(+/-), Osr1(-/-) and Tbx5(+/-)/Osr1(+/-) mutant embryos. Gene set analysis was used to identify the Kyoto Encyclopedia of Genes and Genomes pathways that were affected by the doses of Tbx5 and Osr1. A gene network module involving Tbx5 and Osr1 was identified using a non-parametric distance metric, distance correlation. A subset of 10 core genes and gene-gene interactions in the network module were validated by gene expression alterations in the posterior second heart field (pSHF) of Tbx5 and Osr1 transgenic mouse embryos and by time-course gene expression changes during P19CL6 cell differentiation. Pcsk6 was one of the network module genes linked to Tbx5. We validated the direct regulation of Tbx5 on Pcsk6 using immunohistochemical staining of the pSHF, ChIP-quantitative polymerase chain reaction and a luciferase reporter assay. Importantly, we identified Pcsk6 as a novel gene associated with ASD via a human genotyping study of an ASD family. In summary, our study implicates a gene network involving Tbx5, Osr1 and Pcsk6 interaction in the SHF for atrial septation, providing a molecular framework for understanding the role of Tbx5 in CHD ontogeny. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. The Study of Indoor and Field Trials on 2×8 MIMO Architecture in TD-LTE Network

    Directory of Open Access Journals (Sweden)

    Xiang Zhang

    2013-01-01

    the networks are based on frequency division duplexing (FDD). In this paper, measurement methods for four MIMO transmission modes (TMs) in time-division LTE (TD-LTE) are studied and analyzed. Link-level simulation is carried out to evaluate the downlink throughput for different signal-to-noise ratios and parameter settings. Furthermore, indoor and field tests are also presented in the paper to investigate how real-world propagation affects the capacity and the error performance of the MIMO transmission scheme. For the indoor test, radio channel emulators are applied to generate realistic wireless fading channels, while in the field trials, a live TD-LTE experimental cellular network is built, which contains several evolved NodeBs (eNBs) and a precommercial user equipment (UE). Both simulation and test results show that MIMO deployment gives a substantial performance improvement compared with third-generation wireless networks.

  17. Critical current measurements of high-temperature superconducting short samples at a wide range of temperatures and magnetic fields.

    Science.gov (United States)

    Ma, Hongjun; Liu, Huajun; Liu, Fang; Zhang, Huahui; Ci, Lu; Shi, Yi; Lei, Lei

    2018-01-01

    High-Temperature Superconductors (HTS) are potential materials for high-field magnets, low-loss transmission cables, and Superconducting Magnetic Energy Storage (SMES) due to their high upper critical magnetic field (Hc2) and critical temperature (Tc). The critical current (Ic) of HTS, which is one of the most important parameters for superconductor applications, depends strongly on the magnetic field and temperature. A new Ic measurement system has been developed that can carry out accurate Ic measurements on HTS short samples at various temperatures (4.2-80 K), magnetic fields (0-14 T), and magnetic-field angles (0°-90°). The Ic measurement system mainly consists of a measurement holder, temperature-control system, background magnet, test cryostat, data acquisition system, and DC power supply. The accuracy of temperature control is better than ±0.1 K over the 20-80 K range and ±0.05 K below 20 K. The maximum current is over 1000 A with a measurement uncertainty of 1%. The system has been successfully used for Ic determination of YBa2Cu3O7-x (YBCO) tapes at different temperatures and magnetic fields.

  18. Extracting the field-effect mobilities of random semiconducting single-walled carbon nanotube networks: A critical comparison of methods

    Science.gov (United States)

    Schießl, Stefan P.; Rother, Marcel; Lüttgens, Jan; Zaumseil, Jana

    2017-11-01

    The field-effect mobility is an important figure of merit for semiconductors such as random networks of single-walled carbon nanotubes (SWNTs). However, owing to their network properties and quantum capacitance, the standard models for field-effect transistors cannot be applied without modifications. Several different methods are used to determine the mobility with often very different results. We fabricated and characterized field-effect transistors with different polymer-sorted, semiconducting SWNT network densities ranging from low (≈6 μm⁻¹) to densely packed quasi-monolayers (≈26 μm⁻¹) with a maximum on-conductance of 0.24 μS μm⁻¹ and compared four different techniques to evaluate the field-effect mobility. We demonstrate the limits and requirements for each method with regard to device layout and carrier accumulation. We find that techniques that take into account the measured capacitance on the active device give the most reliable mobility values. Finally, we compare our experimental results to a random-resistor-network model.
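As a hedged sketch of the point about using the measured capacitance, one common mobility estimate takes the linear-regime transconductance together with the total gate capacitance measured on the active device, mu = g_m * L^2 / (C_meas * V_DS). All device numbers below are illustrative assumptions, not the paper's data:

```python
# Linear-regime field-effect mobility from transconductance and the
# *measured* device capacitance (values are invented for illustration):
#   I_D = mu * (C/L^2) * (V_G - V_T) * V_DS  =>  mu = g_m * L^2 / (C * V_DS)
g_m = 2.0e-6        # transconductance dI_D/dV_G at fixed V_DS, in S
L = 20e-6           # channel length, m
C_meas = 5.0e-12    # measured gate capacitance of the active channel, F
V_DS = 0.1          # drain-source bias, V

mu = g_m * L**2 / (C_meas * V_DS)     # m^2 / (V s)
print(f"field-effect mobility ~ {mu * 1e4:.1f} cm^2/(V s)")
```

Using a parallel-plate capacitance estimate instead of C_meas is exactly the shortcut the abstract cautions against, since quantum capacitance and incomplete network coverage make the true capacitance smaller than the geometric value.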

  19. A comparative study between nonlinear regression and artificial neural network approaches for modelling wild oat (Avena fatua) field emergence

    Science.gov (United States)

    Non-linear regression techniques are used widely to fit weed field emergence patterns to soil microclimatic indices using S-type functions. Artificial neural networks present interesting and alternative features for such modeling purposes. In this work, a univariate hydrothermal-time based Weibull m...

  20. The efficiency of reactant site sampling in network-free simulation of rule-based models for biochemical systems.

    Science.gov (United States)

    Yang, Jin; Hlavacek, William S

    2011-10-01

    Rule-based models, which are typically formulated to represent cell signaling systems, can now be simulated via various network-free simulation methods. In a network-free method, reaction rates are calculated for rules that characterize molecular interactions, and these rule rates, which each correspond to the cumulative rate of all reactions implied by a rule, are used to perform a stochastic simulation of reaction kinetics. Network-free methods, which can be viewed as generalizations of Gillespie's method, are so named because these methods do not require that a list of individual reactions implied by a set of rules be explicitly generated, which is a requirement of other methods for simulating rule-based models. This requirement is impractical for rule sets that imply large reaction networks (i.e. long lists of individual reactions), as reaction network generation is expensive. Here, we compare the network-free simulation methods implemented in RuleMonkey and NFsim, general-purpose software tools for simulating rule-based models encoded in the BioNetGen language. The method implemented in NFsim uses rejection sampling to correct overestimates of rule rates, which introduces null events (i.e. time steps that do not change the state of the system being simulated). The method implemented in RuleMonkey uses iterative updates to track rule rates exactly, which avoids null events. To ensure a fair comparison of the two methods, we developed implementations of the rejection and rejection-free methods specific to a particular class of kinetic models for multivalent ligand-receptor interactions. These implementations were written with the intention of making them as much alike as possible, minimizing the contribution of irrelevant coding differences to efficiency differences. Simulation results show that performance of the rejection method is equal to or better than that of the rejection-free method over wide parameter ranges. However, when parameter values are such that
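The rejection scheme described above can be illustrated with a toy one-rule example (the rate constant and state are invented; this is not RuleMonkey or NFsim code): the rule's propensity is bounded from above, event times are drawn from the bound, and overdrawn events are discarded as null events in which time advances but the state does not change.

```python
import random

# Toy rejection-sampling simulation of a single rule whose true propensity
# k*state is over-estimated by a fixed bound rate_max (Gillespie-style).
random.seed(1)

state = 100          # e.g. number of free ligand sites (assumed)
k = 0.05             # per-site rate constant (assumed)
rate_max = k * state # upper bound fixed at the initial state
t, null_events, events = 0.0, 0, 0
while state > 50:
    t += random.expovariate(rate_max)      # waiting time from the bound
    if random.random() < (k * state) / rate_max:
        state -= 1                         # accepted: fire the rule
        events += 1
    else:
        null_events += 1                   # rejected: null event
print(f"fired {events} events with {null_events} null events by t={t:.2f}")
```

A rejection-free method would instead recompute the exact propensity k*state after every firing, trading the null events for per-step bookkeeping, which is the efficiency trade-off the abstract compares.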

  1. Impact of spatially constrained sampling of temporal contact networks on the evaluation of the epidemic risk

    CERN Document Server

    Vestergaard, Christian L; Génois, Mathieu; Poletto, Chiara; Colizza, Vittoria; Barrat, Alain

    2016-01-01

    The ability to directly record human face-to-face interactions increasingly enables the development of detailed data-driven models for the spread of directly transmitted infectious diseases at the scale of individuals. Complete coverage of the contacts occurring in a population is however generally unattainable, due for instance to limited participation rates or experimental constraints in spatial coverage. Here, we study the impact of spatially constrained sampling on our ability to estimate the epidemic risk in a population using such detailed data-driven models. The epidemic risk is quantified by the epidemic threshold of the susceptible-infectious-recovered-susceptible model for the propagation of communicable diseases, i.e. the critical value of disease transmissibility above which the disease turns endemic. We verify for both synthetic and empirical data of human interactions that the use of incomplete data sets due to spatial sampling leads to the underestimation of the epidemic risk. The bias is howev...
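The underestimation effect has a compact linear-algebra illustration (a sketch under assumed graph parameters, not the paper's data): for SIS-type dynamics the epidemic threshold is approximately 1/lambda_max of the contact network's adjacency matrix, and by eigenvalue interlacing any induced subgraph obtained by sampling nodes has a smaller lambda_max, hence a larger estimated threshold, i.e. an underestimated epidemic risk.

```python
import numpy as np

# Spatially constrained sampling modeled as keeping an induced subgraph:
# the sampled network's largest eigenvalue can only shrink, so the
# estimated epidemic threshold 1/lambda_max can only grow.
rng = np.random.default_rng(3)
N = 300
A = (rng.random((N, N)) < 0.05).astype(float)   # Erdos-Renyi stand-in
A = np.triu(A, 1)
A = A + A.T                                      # undirected contact network

def threshold(adj):
    """Approximate SIS epidemic threshold, 1 / lambda_max(adj)."""
    return 1.0 / np.linalg.eigvalsh(adj)[-1]     # eigvalsh sorts ascending

kept = rng.choice(N, size=N // 2, replace=False) # sample half the nodes
sub = A[np.ix_(kept, kept)]
print(f"full-network threshold:    {threshold(A):.4f}")
print(f"sampled-network threshold: {threshold(sub):.4f} (overestimated)")
```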

  2. Prediction of the moderator temperature field in a heavy water reactor based on a cellular neural network

    Directory of Open Access Journals (Sweden)

    S.O. Starkov

    2017-06-01

    Full Text Available Reactors with heavy water coolants and moderators have been used extensively in today's power industry. Monitoring of the moderator condition plays an important role in ensuring normal operation of a power plant. A cellular neural network, the architecture of which has been adapted for hardware implementation, is proposed for use in a system for prediction of the heavy water moderator temperature. A reactor model composed in accordance with the CANDU Darlington heavy water reactor design was used to form the training sample collection and to control correct operation of the neural network structure. The sample components for the adjustment and configuration of the network topology include key parameters that characterize the energy generation process in the core. The paper considers the feasibility of the temperature prediction only for the calandria's central cross-section. To solve this problem, the cellular neural network architecture has been designed, and major parts of the digital computational element and methods for their implementation based on an FPLD have also been developed. The method is described for organizing an optical coupling between individual neural modules within the network, which enables not only the restructuring of the topology in the training process, but also the assignment of priorities for the propagation of the information signals of neurons depending on the activity in a situation analysis at the neural network structure inlet. Asynchronous activation of cells was used based on an oscillating fractal network, the basis for which was a modified ring oscillator. The efficiency of training the proposed architecture using stochastic diffusion search algorithms is evaluated. A comparative analysis of the model behavior and the results of the neural network operation have shown that the use of the neural network approach is effective in safety systems of power plants.

  3. Determination of extremely low (236)U/(238)U isotope ratios in environmental samples by sector-field inductively coupled plasma mass spectrometry using high-efficiency sample introduction.

    Science.gov (United States)

    Boulyga, Sergei F; Heumann, Klaus G

    2006-01-01

    A method by inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of (236)U at concentrations down to 3 x 10(-14) g g(-1) and of extremely low (236)U/(238)U isotope ratios, down to 10(-7), in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5,000 counts fg(-1) uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH(+)/U(+) down to a level of 10(-6). An abundance sensitivity of 3 x 10(-7) was observed for (236)U/(238)U isotope ratio measurements at mass resolution 4000. The detection limit for (236)U and the lowest detectable (236)U/(238)U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values by alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the (236)U/(238)U isotope ratio is a much more sensitive and accurate marker for environmental contamination by spent uranium than the (235)U/(238)U isotope ratio. The ICP-MS technique allowed, for the first time, detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of (236)U in the upper 0-10 cm soil layers varied from 2 x 10(-9) g g(-1) within radioactive spots close to the Chernobyl NPP to 3 x 10(-13) g g(-1) at a sampling site located more than 200 km from Chernobyl.

  4. Organizing heterogeneous samples using community detection of GIMME-derived resting state functional networks.

    Directory of Open Access Journals (Sweden)

    Kathleen M Gates

    Full Text Available Clinical investigations of many neuropsychiatric disorders rely on the assumption that diagnostic categories and typical control samples each have within-group homogeneity. However, research using human neuroimaging has revealed that much heterogeneity exists across individuals in both clinical and control samples. This reality necessitates that researchers identify and organize the potentially varied patterns of brain physiology. We introduce an analytical approach for arriving at subgroups of individuals based entirely on their brain physiology. The method begins with Group Iterative Multiple Model Estimation (GIMME) to assess individual directed functional connectivity maps. GIMME is one of the only methods to date that can recover both the direction and presence of directed functional connectivity maps in heterogeneous data, making it an ideal place to start since it addresses the problem of heterogeneity. Individuals are then grouped based on similarities in their connectivity patterns using a modularity approach for community detection. Monte Carlo simulations demonstrate that using GIMME in combination with the modularity algorithm works exceptionally well: on average, over 97% of simulated individuals are placed in the accurate subgroup with no prior information on functional architecture or group identity. Having demonstrated reliability, we examine resting-state data of fronto-parietal regions drawn from a sample (N = 80) of typically developing and attention-deficit/hyperactivity disorder (ADHD)-diagnosed children. Here, we find 5 subgroups. Two subgroups were predominantly comprised of ADHD, suggesting that more than one biological marker exists that can be used to identify children with ADHD from their brain physiology. Empirical evidence presented here supports notions that heterogeneity exists in brain physiology within ADHD and control samples. This type of information gained from the approach presented here can assist in

  5. Consideration of some sampling problems in the on-line analysis of batch processes by low-field NMR spectrometry.

    Science.gov (United States)

    Nordon, Alison; Diez-Lazaro, Alvaro; Wong, Chris W L; McGill, Colin A; Littlejohn, David; Weerasinghe, Manori; Mamman, Danladi A; Hitchman, Michael L; Wilkie, Jacqueline

    2008-03-01

    A low-field medium-resolution NMR spectrometer, with an operating frequency of 29 MHz for 1H, has been assessed for on-line process analysis. A flow cell that incorporates a pre-magnetisation region has been developed to minimise the decrease in the signal owing to incomplete polarisation effects. The homogeneous esterification reaction of crotonic acid and 2-butanol was monitored using a simple sampling loop; it was possible to monitor the progression of the reaction through changes in CH signal areas of butanol and butyl crotonate. On-line analysis of heterogeneous water-toluene mixtures proved more challenging and a fast sampling loop system was devised for use with a 5 L reactor. The fast sampling loop operated at a flow rate of 8 L min(-1) and a secondary sampling loop was used to pass a sub-sample through the NMR analyser at a slower (mL min(-1)) rate. It was shown that even with super-isokinetic sampling conditions, unrepresentative sampling could occur owing to inadequate mixing in the reactor. However, it was still possible to relate the 1H NMR signal obtained at a flow rate of 60 mL min(-1) to the composition of the reactor contents.
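The incomplete-polarisation effect that the pre-magnetisation region mitigates can be quantified with a simple T1 build-up estimate (the T1 value and pre-magnetisation volume below are invented assumptions): spins spend a residence time t in the field before detection and acquire a fraction 1 - exp(-t/T1) of the equilibrium signal, so faster flow means weaker signal.

```python
import math

# Signal build-up for flowing samples in an NMR magnet: magnetisation
# recovers as 1 - exp(-t_res / T1) during the residence time in the field.
T1 = 3.0                      # s, assumed proton T1 of the sample
premag_volume = 10.0          # mL of tubing in the field before the coil (assumed)
for flow in (60.0, 500.0, 8000.0):             # mL/min, cf. the mL/min vs L/min loops
    t_res = premag_volume / (flow / 60.0)      # seconds spent in the field
    polarisation = 1.0 - math.exp(-t_res / T1)
    print(f"{flow:7.0f} mL/min -> {100 * polarisation:5.1f}% of full signal")
```

This is why the secondary sampling loop passes a sub-sample through the analyser at mL/min rates while the fast loop runs at 8 L/min: only the slow stream spends long enough in the field to polarise appreciably.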

  6. Voices of Women in the Field--Creating Conversations: A Networking Approach for Women Leaders

    Science.gov (United States)

    Raskin, Candace F.; Haar, Jean M.; Robicheau, Jerry

    2010-01-01

    Professional networking is critical for school leaders. Networking has emerged in the literature as one of the major needs in attracting and retaining quality school leaders. There is evidence that professional networking offers a system for women to enhance their career opportunities. However, the evidence shows there are limited professional…

  7. Mapping the Field of Educational Administration Research: A Journal Citation Network Analysis

    Science.gov (United States)

    Wang, Yinying; Bowers, Alex J.

    2016-01-01

    Purpose: The purpose of this paper is to uncover how knowledge is exchanged and disseminated in the educational administration research literature through the journal citation network. Design/Methodology/Approach: Drawing upon social network theory and citation network studies in other disciplines, the authors constructed an educational…

  8. Using Respondent Driven Sampling to Identify Malaria Risks and Occupational Networks among Migrant Workers in Ranong, Thailand.

    Directory of Open Access Journals (Sweden)

    Piyaporn Wangroongsarb

    Full Text Available Ranong Province in southern Thailand is one of the primary entry points for migrants entering Thailand from Myanmar, and borders Kawthaung Township in Myanmar, where artemisinin resistance in malaria parasites has been detected. Areas of high population movement could increase the risk of spread of artemisinin resistance in this region and beyond. A respondent-driven sampling (RDS) methodology was used to compare migrant populations coming from Myanmar in urban (Site 1) vs. rural (Site 2) settings in Ranong, Thailand. The RDS methodology collected information on knowledge, attitudes, and practices for malaria, travel and occupational histories, as well as social network size and structure. Individuals enrolled were screened for malaria by microscopy, real-time PCR, and serology. A total of 619 participants were recruited in Ranong City and 623 participants in Kraburi, a rural sub-district. By PCR, a total of 14 (1.1%) samples were positive (2 P. falciparum in Site 1; 10 P. vivax, 1 P. falciparum, and 1 P. malariae in Site 2). PCR analysis demonstrated an overall weighted prevalence of 0.5% (95% CI, 0-1.3%) in the urban site and 1.0% (95% CI, 0.5-1.7%) in the rural site for all parasite species. PCR positivity did not correlate with serological positivity; however, as expected, there was a strong association between antibody prevalence and both age and exposure. Access to long-lasting insecticide-treated nets remains low despite relatively high reported traditional net use among these populations. The low malaria prevalence, relatively smaller networks among migrants in rural settings, and limited frequency of travel to and from other areas of malaria transmission in Myanmar suggest that the risk of spread of artemisinin resistance from this area may currently be limited in these networks but may have implications for regional malaria elimination efforts.

  9. A National Network to Advance the Field of Cancer and Female Sexuality

    Science.gov (United States)

    Goldfarb, Shari B.; Abramsohn, Emily; Andersen, Barbara L.; Baron, Shirley R.; Carter, Jeanne; Dickler, Maura; Florendo, Judith; Freeman, Leslie; Githens, Katherine; Kushner, David; Makelarski, Jennifer A.; Yamada, Diane; Lindau, Stacy Tessler

    2013-01-01

    Introduction Understanding sexual health issues in cancer patients is integral to care for the continuously growing cancer survivor population. Aim To create a national network of active clinicians and researchers focusing on the prevention and treatment of sexual problems in women and girls with cancer. Methods Interdisciplinary teams from the University of Chicago and Memorial Sloan-Kettering Cancer Center jointly developed the mission for a national conference to convene clinicians and researchers in the field of cancer and female sexuality. The invitee list was developed by both institutions and further iterated through suggestions from invitees. The conference agenda focused on three high-priority topics under the guidance of a professional facilitator. Breakout groups were led by attendees recognized by collaborators as experts in those topics. Conference costs were shared by both institutions. Main Outcome Measure Development of Scientific Working Groups (SWGs). Results One hundred two clinicians and researchers were invited to attend the 1st National Conference on Cancer and Female Sexuality. Forty-three individuals from 20 different institutions across 14 states attended, including representation from eight NCI-funded cancer centers. Attendees included PhD researchers (n=19), physicians (n=16), and other health care professionals (n=8). Breakout groups included: 1) Defining Key Life Course Sexuality Issues; 2) Building a Registry; and 3) Implementing Sexual Health Assessment. Breakout group summaries incorporated group consensus on key points and priorities. These generated six SWGs with volunteer leaders to accelerate future research and discovery: 1) Technology-Based Interventions; 2) Basic Science; 3) Clinical Trials; 4) Registries; 5) Measurement; and 6) Secondary Data Analysis. Most attendees volunteered for at least one SWG (n=35), and many volunteered for two (n=21). Conclusion This 1st National Conference demonstrated high motivation and broad

  10. Tolerable rates of visual field progression in a population-based sample of patients with glaucoma.

    Science.gov (United States)

    Salonikiou, Angeliki; Founti, Panayiota; Kilintzis, Vassilis; Antoniadis, Antonis; Anastasopoulos, Eleftherios; Pappas, Theofanis; Raptou, Anastasia; Topouzis, Fotis

    2017-09-28

    To provide population-based data on the maximum tolerable rate of progression to avoid visual impairment (maxTRoP_VI) and blindness (maxTRoP_BL) from open-angle glaucoma (OAG). Participants with OAG in the Thessaloniki Eye Study (cross-sectional, population-based study in a European population) were included in the analysis. Visual impairment was defined as mean deviation (MD) equal to or worse than -12 dB and blindness as MD equal to or worse than -24 dB. Additional thresholds for visual impairment were tested. For each participant maxTRoP_VI was defined as the rate of progression which would not lead to visual impairment during expected lifetime. MaxTRoP_BL was defined accordingly. Both parameters were calculated for each OAG subject using age, sex, MD and life expectancy data. The eye with the better MD per subject was included in the analysis. Among 135 subjects with OAG, 123 had reliable visual fields and were included in the analysis. The mean age was 73±6 years and the median MD was -3.65±5.28 dB. Among those, 69.1% would have a maxTRoP_VI slower than -1 dB/year and 18.7% would have a maxTRoP_VI between -1 and -2 dB/year. Also, 72.4% would have a maxTRoP_BL slower than -2 dB/year. For all tested thresholds for visual impairment, approximately 86% of the OAG study participants would not be able to tolerate a rate of progression equal to or faster than -2 dB/year. The majority of patients with glaucoma in our study would have a maximum tolerable rate of progression slower than -1 dB/year in their better eye. Patient-tailored strategies to monitor the visual field are important, but raise the issue of feasibility with regard to the number of visual field tests needed. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
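The maxTRoP quantities can be sketched as simple arithmetic on the values the abstract reports (the remaining-life-expectancy figure below is an invented assumption, not study data):

```python
# Maximum tolerable rate of progression: the fastest MD slope (dB/year,
# negative) that just reaches a damage threshold at the end of expected life.
def max_tolerable_rate(md_now, md_threshold, years_remaining):
    """Fastest tolerable MD slope in dB/year (a negative number)."""
    return (md_threshold - md_now) / years_remaining

md_now = -3.65            # median MD in the study sample, dB
vi, blind = -12.0, -24.0  # visual-impairment and blindness thresholds, dB
years = 13                # assumed remaining life expectancy at age 73

print(f"maxTRoP_VI: {max_tolerable_rate(md_now, vi, years):+.2f} dB/year")
print(f"maxTRoP_BL: {max_tolerable_rate(md_now, blind, years):+.2f} dB/year")
```

With these illustrative inputs the tolerable rates come out near -0.6 and -1.6 dB/year, consistent in magnitude with the abstract's finding that most patients could not tolerate progression of -1 to -2 dB/year.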

  11. Multistatic Array Sampling Scheme for Fast Near-Field Image Reconstruction

    Science.gov (United States)

    2016-01-01

    frequency response of the scene is sampled on a regularly spaced two-dimensional grid. Following collection of all measurements, the image can be...images were formed with 18-26.5 GHz stimulus, using 160 frequency points. The 3D images were realized with 21 depth slices, spaced by 0.015 m. Fig. 7...McMakin, and T. E. Hall, "Three-dimensional millimeter-wave imaging for concealed weapon detection," IEEE Transactions on Microwave Theory and

  12. Biocompatible channels for field-flow fractionation of biological samples: correlation between surface composition and operating performance.

    Science.gov (United States)

    Roda, Barbara; Cioffi, Nicola; Ditaranto, Nicoletta; Zattoni, Andrea; Casolari, Sonia; Melucci, Dora; Reschiglian, Pierluigi; Sabbatini, Luigia; Valentini, Antonio; Zambonin, Pier Giorgio

    2005-02-01

    Biocompatible methods capable of rapid purification and fractionation of analytes from complex natural matrices are increasingly in demand, particularly at the forefront of biotechnological applications. Field-flow fractionation is a separation technique suitable for nano-sized and micro-sized analytes, among which bioanalytes are an important family. The objective of this preliminary study is to initiate a more general approach to field-flow fractionation of bio-samples by investigating the correlation between channel surface composition and biosample adhesion. For the first time we report on the use of X-ray photoelectron spectroscopy (XPS) to study the surface properties of channels of known performance. By XPS, a polar hydrophobic environment was found on the PVC material commonly used as the accumulation wall in gravitational field-flow fractionation (GrFFF), which explains the low recovery obtained when GrFFF was used to fractionate a biological sample such as Staphylococcus aureus. An increase in separation performance was obtained first by conditioning the accumulation wall with bovine serum albumin and then by using the ion-beam sputtering technique to cover the GrFFF channel surface with a controlled inert film. XPS analysis was also employed to determine the composition of membranes used in hollow-fiber flow field-flow fractionation (HF FlFFF). The results obtained revealed homogeneous composition along the HF FlFFF channel both before and after its use for fractionation of an intact protein such as ferritin.

  13. Qualitative analysis of SBS modifier in asphalt pavements using field samples

    Science.gov (United States)

    Chi, Fengxia; Liu, Zhifei

    2017-06-01

    A series of tests was carried out to analyse the characteristics of a common asphalt and an unknown asphalt, mainly using Fourier Transform Infrared (FTIR) spectroscopy and a Dynamic Shear Rheometer (DSR) to examine the chemical composition and rheological properties of the asphalts, respectively. In addition, a series of mechanical tests was performed on asphalt mixtures, including the indirect tensile strength test and the three-point bending test at low temperature. Experimental results indicated that, compared with the common asphalt, the unknown asphalt shows characteristic absorption peaks at 966 cm-1 and 699 cm-1, which are consistent with the SBS modifier. The DSR results indicated that the unknown asphalt's complex modulus is higher and its phase angle is lower. The mechanical tests indicated that some properties of the unknown mixture samples, such as the indirect tensile strength, the low-temperature bending strength, and the indirect tensile resilient modulus, are 24.7%-41.8% higher than those of the common pavement sample. Comprehensive analysis indicates that SBS modifier is present in the unknown asphalt pavement.

  14. A non-destructive sampling protocol for field studies of seed dispersal by fishes.

    Science.gov (United States)

    Correa, S B; Anderson, J T

    2016-05-01

    This paper presents a standardized protocol for the non-lethal capture of fishes, sampling of stomach contents and quantification of seed dispersal efficiency by frugivorous fishes. Neotropical pacu Piaractus mesopotamicus individuals were collected with fruit-baited hooks. The diets of 110 fish were sampled using a lavage method, which retrieved >90% of stomach contents of both juveniles and adults and allowed individuals to recover within 5 min of treatment. The proportional volume of six food categories was similar for stomachs and whole digestive tracts retrieved by dissection. Fruit pulp was proportionally lower in the stomach. The abundance and species richness of intact seeds increased with fish size independent of whether only stomachs or whole digestive tracts were analysed. The analysis of stomach contents accounted for 62·5% of the total species richness of seeds dispersed by P. mesopotamicus and 96% of common seeds (seed species retrieved from more than one fish). Germination trials revealed that seed viability was similar for seeds collected from the stomach via lavage and seeds that passed through the entire digestive tract. Therefore, stomach contents provide an unbiased representation of the dietary patterns and seed dispersal of frugivorous fishes. © 2016 The Fisheries Society of the British Isles.

  15. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter Spöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
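
    The Box-Cox step that underlies trans-Gaussian kriging is simple to state in code. A minimal sketch of the forward and inverse transforms only (the design-optimization machinery of the paper is not reproduced here):

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform toward Gaussianity (requires y > 0)."""
    y = np.asarray(y, dtype=float)
    return np.log(y) if lam == 0.0 else (y**lam - 1.0) / lam

def inv_box_cox(z, lam):
    """Back-transform predictions from the Gaussian scale."""
    z = np.asarray(z, dtype=float)
    return np.exp(z) if lam == 0.0 else (lam * z + 1.0) ** (1.0 / lam)

# Round trip on synthetic positive "monthly rainfall" values:
y = np.array([1.0, 5.0, 20.0, 80.0])
assert np.allclose(inv_box_cox(box_cox(y, 0.5), 0.5), y)
```

    In trans-Gaussian kriging the field is kriged on the transformed scale and predictions are mapped back with the inverse transform (with a bias correction in the full method).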

  16. Disease named entity recognition by combining conditional random fields and bidirectional recurrent neural networks

    Science.gov (United States)

    Wei, Qikang; Chen, Tao; Xu, Ruifeng; He, Yulan; Gui, Lin

    2016-01-01

    The recognition of disease and chemical named entities in scientific articles is a very important subtask in information extraction in the biomedical domain. Due to the diversity and complexity of disease names, the recognition of named entities of diseases is rather tougher than those of chemical names. Although there are some remarkable chemical named entity recognition systems available online such as ChemSpot and tmChem, the publicly available recognition systems of disease named entities are rare. This article presents a system for disease named entity recognition (DNER) and normalization. First, two separate DNER models are developed. One is based on conditional random fields model with a rule-based post-processing module. The other one is based on the bidirectional recurrent neural networks. Then the named entities recognized by each of the DNER model are fed into a support vector machine classifier for combining results. Finally, each recognized disease named entity is normalized to a medical subject heading disease name by using a vector space model based method. Experimental results show that using 1000 PubMed abstracts for training, our proposed system achieves an F1-measure of 0.8428 at the mention level and 0.7804 at the concept level, respectively, on the testing data of the chemical-disease relation task in BioCreative V. Database URL: http://219.223.252.210:8080/SS/cdr.html PMID:27777244
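
    The final normalization step, mapping a recognized mention to a controlled-vocabulary disease name with a vector space model, can be sketched as a bag-of-words cosine match; the vocabulary entries below are illustrative, not the actual MeSH thesaurus:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two token lists."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def normalize(mention, vocabulary):
    """Map a recognized mention to the closest controlled-vocabulary name."""
    tokens = mention.lower().split()
    return max(vocabulary, key=lambda term: cosine(tokens, term.lower().split()))

vocab = ["breast neoplasms", "lung neoplasms", "hypertension"]
print(normalize("neoplasms of the lung", vocab))  # -> lung neoplasms
```

    The system described in the abstract uses richer term weighting than raw counts, but the matching principle is the same.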

  17. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...... and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely...... model for a ∼1 mm(2) patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its...

  18. Downscaling Transpiration from the Field to the Tree Scale using the Neural Network Approach

    Science.gov (United States)

    Hopmans, J. W.

    2015-12-01

    Estimating actual evapotranspiration (ETa) spatial variability in orchards is key when trying to quantify water (and associated nutrient) leaching, both with the mass balance and inverse modeling methods. ETa measurements, however, generally occur at larger scales (e.g. the Eddy-covariance method) or have limited quantitative accuracy. In this study we propose to establish a statistical relation between field ETa and field-averaged variables known to be closely related to it, such as stem water potential (WP), soil water storage (WS) and ETc. For that we use 4 years of soil and almond tree water status data to train artificial neural networks (ANNs) predicting field-scale ETa and downscale the relation to the individual tree scale. ANNs composed of only two neurons in a hidden layer (11 parameters in total) proved to be the most accurate (overall RMSE = 0.0246 mm/h, R2 = 0.944), seemingly because adding more neurons generated overfitting of noise in the training dataset. According to the optimized weights in the best ANNs, the first hidden neuron could be considered in charge of relaying the ETc information while the other one would deal with the water stress response to stem WP, soil WS, and ETc. As individual trees had specific signatures for combinations of these variables, variability was generated in their ETa responses. The relative canopy cover was the main source of variability of ETa, while stem WP was the most influential factor for the ETa / ETc ratio. Trees on the drip-irrigated side of the orchard appeared to be less affected by low estimated soil WS in the root zone than on the fanjet micro-sprinkler side, possibly due to a combination of (i) more substantial root biomass increasing the plant hydraulic conductance, (ii) bias in the soil WS estimation due to soil moisture heterogeneity on the drip side, and (iii) the access to deeper water resource. Tree scale ETa responses are in good agreement with soil-plant water relations reported in the literature, and
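
    The 11-parameter count follows directly from the architecture: 3 inputs (stem WP, soil WS, ETc) x 2 hidden weights, 2 hidden biases, 2 output weights, and 1 output bias. A minimal sketch with random, untrained weights (the tanh hidden activation and linear output are assumptions, not stated in the abstract):

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 inputs -> 2 hidden neurons -> 1 output:
# 3*2 weights + 2 biases + 2*1 weights + 1 bias = 11 parameters.
W1, b1 = rng.normal(size=(3, 2)), rng.normal(size=2)
W2, b2 = rng.normal(size=(2, 1)), rng.normal(size=1)

def ann(x):
    h = np.tanh(x @ W1 + b1)       # hidden layer (tanh assumed)
    return (h @ W2 + b2).item()    # linear output: predicted field ETa

n_params = W1.size + b1.size + W2.size + b2.size
assert n_params == 11
```

    With so few parameters the network acts as a smooth regression surface over the three water-status predictors, which is consistent with the overfitting observed when more hidden neurons were added.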

  19. NGSI FY15 Final Report. Innovative Sample Preparation for in-Field Uranium Isotopic Determinations

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Thomas M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Meyers, Lisa [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-10

    Our FY14 Final Report included an introduction to the project, background, literature search of uranium dissolution methods, assessment of commercial off the shelf (COTS) automated sample preparation systems, as well as data and results for dissolution of bulk quantities of uranium oxides, and dissolution of uranium oxides from swipe filter materials using ammonium bifluoride (ABF). Also, discussed were reaction studies of solid ABF with uranium oxide that provided a basis for determining the ABF/uranium oxide dissolution mechanism. This report details the final experiments for optimizing dissolution of U3O8 and UO2 using ABF and steps leading to development of a Standard Operating Procedure (SOP) for dissolution of uranium oxides on swipe filters.

  20. Midazolam sedates Passeriformes for field sampling but affects multiple venous blood analytes

    Directory of Open Access Journals (Sweden)

    Heatley JJ

    2015-01-01

    Full Text Available J Jill Heatley,1 Jennifer Cary,2,3 Lyndsey Kingsley,1 Hughes Beaufrere,4 Karen E Russell,5 Gary Voelker2,3 1Department of Small Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences, 2Department of Wildlife and Fisheries Sciences, 3Texas A&M Biodiversity Research and Teaching Collections, Texas A&M University, College Station, TX, USA; 4Health Sciences Centre, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada; 5Department of Veterinary Pathobiology, College of Veterinary Medicine and Biomedical Sciences, College Station, TX, USA Abstract: The feasibility of midazolam for sedation of Passeriformes, and its effect on blood analytes, were assessed in birds collected in a larger study of genetic biodiversity. Midazolam (5.6±2.7 mg/kg) was administered intranasally prior to sampling, euthanasia, and specimen preparation of 104 passerine birds. Each bird was assessed for sedation score and then multiple analytes were determined from jugular blood samples using the i-STAT® point of care analyzer at "bird side". Most birds were acceptably sedated, sedation became more pronounced as midazolam dose increased, and only a single bird died. Electrolyte concentrations and venous blood gas analytes were affected by midazolam administration while blood pH, packed cell volume, hemoglobin, and calculated hematocrit were not. Intranasal midazolam gives adequate sedation and is safe for short-term use in free-living Passeriformes. Based on venous blood analyte data, sedation of Passeriformes prior to handling appears to reduce stress but also produces venous blood gas differences consistent with hypoventilation relative to birds which were not given midazolam. Further study is recommended to investigate midazolam's continued use in free-living avian species. Studies should include safety, reversal and recovery, effect upon additional endogenous analytes, and compatibility with studies of ecology and toxicology.

  1. Performance evaluation of currently used portable X-ray fluorescence instruments for measuring the lead content of paint in field samples.

    Science.gov (United States)

    Muller, Yan; Favreau, Philippe; Kohler, Marcel

    2014-01-01

    Field-portable X-ray fluorescence (FP-XRF) instruments are important for non-destructive, rapid and convenient measurements of lead in paint, in view of potential remediation. Using real-life paint samples, we compared measurements from three FP-XRF instruments currently used in Switzerland with laboratory measurements using inductively coupled plasma mass spectrometry after complete sample dissolution. Two FP-XRF devices that functioned by lead L shell excitation frequently underestimated the lead concentration of samples. Lack of accuracy correlated with lead depth and/or the presence of additional metal elements (Zn, Ba or Ti). A radioactive source emitter XRF that enabled the additional K shell excitation showed higher accuracy and precision, regardless of the depth of the lead layer in the sample or the presence of other elements. Inspection of samples by light and electron microscopy revealed the diversity of real-life samples, with multi-layered paints showing various depths of lead and other metals. We conclude that the most accurate measurements of lead in paint are currently obtained with instruments that provide at least sufficient energy for lead K shell excitation.

  2. On-field measurement trial of 4×128 Gbps PDM-QPSK signals by linear optical sampling

    Science.gov (United States)

    Liu, Bin; Wu, Zhichao; Fu, Songnian; Feng, Yonghua; Liu, Deming

    2017-02-01

    Linear optical sampling is a promising characterization technique for advanced modulation formats, together with digital signal processing (DSP) and a software-synchronized algorithm. We theoretically investigate the acquisition of optical sampling when the high-speed signal under test is either periodic or random. In particular, when the profile of the optical sampling pulse is asymmetrical, the repetition frequency of the sampling pulse needs careful adjustment in order to obtain the correct waveform. We then demonstrate an on-field measurement trial of commercial four-channel 128 Gbps polarization division multiplexing quadrature phase shift keying (PDM-QPSK) signals with truly random characteristics using self-developed equipment. A passively mode-locked fiber laser (PMFL) with a repetition frequency of 95.984 MHz is used as the optical sampling source, while four balanced photo detectors (BPDs) with 400 MHz bandwidth and a four-channel analog-to-digital converter (ADC) with a 1.25 GS/s sampling rate are used for data acquisition. The performance comparison with a conventional optical modulation analyzer (OMA) verifies that the self-developed equipment has the advantages of low cost, easy implementation, and fast response.

  3. Remedial investigation sampling and analysis plan for J-Field, Aberdeen Proving Ground, Maryland: Volume 2, Quality Assurance Project Plan

    Energy Technology Data Exchange (ETDEWEB)

    Prasad, S.; Martino, L.; Patton, T.

    1995-03-01

    J-Field encompasses about 460 acres at the southern end of the Gunpowder Neck Peninsula in the Edgewood Area of APG (Figure 2.1). Since World War II, the Edgewood Area of APG has been used to develop, manufacture, test, and destroy chemical agents and munitions. These materials were destroyed at J-Field by open burning and open detonation (OB/OD). For the purposes of this project, J-Field has been divided into eight geographic areas or facilities that are designated as areas of concern (AOCs): the Toxic Burning Pits (TBP), the White Phosphorus Burning Pits (WPP), the Riot Control Burning Pit (RCP), the Robins Point Demolition Ground (RPDG), the Robins Point Tower Site (RPTS), the South Beach Demolition Ground (SBDG), the South Beach Trench (SBT), and the Prototype Building (PB). The scope of this project is to conduct a remedial investigation/feasibility study (RI/FS) and ecological risk assessment to evaluate the impacts of past disposal activities at the J-Field site. Sampling for the RI will be carried out in three stages (I, II, and III) as detailed in the FSP. A phased approach will be used for the J-Field ecological risk assessment (ERA).

  4. Effect of sample container morphology on agglomeration dynamics of magnetic nanoparticles under magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Dae Seong; Kim, Hack Jin [Dept. of Chemistry, Chungnam National University, Daejeon (Korea, Republic of)

    2016-12-15

    Superparamagnetic magnetite nanoparticles have been used extensively in medical and biological applications, and agglomeration of magnetic nanoparticles is employed in the purification of water and proteins. The magnetic weight can be measured with a conventional electronic balance; details of the experimental setup have been previously reported. The complex energy landscape involved in the agglomeration changes as the process progresses. Simulation of colloidal magnetic particles under a magnetic field shows that a chain of particles is energetically more favorable than a ring and that the transition barrier between the chain and the ring is very low. The energy barriers among the entangled nanoparticles of the agglomerate appear to be much more complicated than those among colloidal particles. The energy barrier distributions at 1000 min are similar for the two containers; however, the trend of blue shift and broadening is much more evident in the case of the conical tube. These results indicate that the potential energy surface for agglomeration is modified more significantly in the conical tube, which makes the agglomerate denser.

  5. Compensating for population sampling in simulations of epidemic spread on temporal contact networks

    CERN Document Server

    Génois, Mathieu; Cattuto, Ciro; Barrat, Alain

    2015-01-01

    Data describing human interactions often suffer from incomplete sampling of the underlying population. As a consequence, the study of contagion processes using data-driven models can lead to a severe underestimation of the epidemic risk. Here we present a systematic method to correct this bias and obtain an accurate estimation of the risk in the context of epidemic models informed by high-resolution time-resolved contact data. We consider several such data sets collected in various contexts and perform controlled resampling experiments. We show that the statistical information contained in the resampled data allows us to build surrogate versions of the unknown contacts and that simulations of epidemic processes using these surrogate data sets yield good estimates of the outcome of simulations performed using the complete data set. We discuss limitations and potential improvements of our method.
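
    The resampling idea can be illustrated with a toy SIR process run on a full contact list and on a node-subsampled version of it; the attack rate on the subsample typically underestimates the full-population risk. A minimal sketch (unit infectious period, static contacts; all parameters are illustrative, not from the paper):

```python
import random

def sir_attack_rate(contacts, nodes, beta=0.3, rng=None):
    """Toy SIR with a one-step infectious period on a static contact list;
    returns the fraction of `nodes` ever infected."""
    rng = rng or random.Random(0)
    infected = {min(nodes)}          # deterministic seed node
    recovered = set()
    while infected:
        new = set()
        for u, v in contacts:
            for a, b in ((u, v), (v, u)):
                if (a in infected and b in nodes
                        and b not in infected and b not in recovered
                        and rng.random() < beta):
                    new.add(b)
        recovered |= infected
        infected = new
    return len(recovered) / len(nodes)

# Full contact list vs. a ~60% node sample of the same toy population:
contacts = [(i, i + 1) for i in range(20)]
full = set(range(21))
sample = set(random.Random(1).sample(sorted(full), 13))
kept = [(u, v) for u, v in contacts if u in sample and v in sample]
print(sir_attack_rate(contacts, full), sir_attack_rate(kept, sample))
```

    The method in the paper goes further: it builds statistically matched surrogate contacts for the unobserved nodes rather than simulating only on the sampled subgraph.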

  6. Field evaluation of a low-cost, high-density air quality monitoring network: BEACO2N

    Science.gov (United States)

    Kim, J.; Shusterman, A.; Newman, C.; Cohen, R. C.

    2016-12-01

    Low-cost air quality sensors are becoming widely available, encouraging the development of dense sensor networks. However, the accuracy and reliability of these new sensors are not well characterized, and the added benefits of networks have yet to be clearly described or realized in an application. We describe the deployment and evaluation of a low-cost, high-density air quality monitoring network consisting of approximately 25 nodes distributed at 2 km spacing in the East Bay region of the San Francisco Bay Area as part of the Berkeley Atmospheric CO2 Observation Network (BEACO2N). Measurements of CO2, CO, NO, NO2, O3 and aerosol at the nodes are evaluated based on laboratory and field experiments. We describe approaches to in-field calibration and evaluation that take advantage of cross-sensitivities of the sensors and their response to varying temperature. Observations from the low-cost sensors are compared to standard regulatory measurements. We show that the sensors provide signals that correlate with nearby traffic and other environmental variables and demonstrate the feasibility of deploying a low-cost, high-density air quality monitoring network.

  7. Cascading a systolic array and a feedforward neural network for navigation and obstacle avoidance using potential fields

    Science.gov (United States)

    Plumer, Edward S.

    1991-01-01

    A technique is developed for vehicle navigation and control in the presence of obstacles. A potential function was devised that peaks at the surface of obstacles and has its minimum at the proper vehicle destination. This function is computed using a systolic array and is guaranteed not to have local minima. A feedforward neural network is then used to control the steering of the vehicle using local potential field information. In this case, the vehicle is a trailer truck backing up. Previous work has demonstrated the capability of a neural network to control steering of such a trailer truck backing to a loading platform, but without obstacles. Now, the neural network was able to learn to navigate a trailer truck around obstacles while backing toward its destination. The network is trained in an obstacle-free space to follow the negative gradient of the field, after which the network is able to control and navigate the truck to its target destination in a space of obstacles which may be stationary or movable.
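
    The control idea, following the negative gradient of a potential that peaks at obstacles and is minimal at the destination, can be sketched without the systolic array or the neural controller. A toy 2-D version using plain numerical gradient descent (note that, unlike the field computed in the paper, a hand-built potential like this is not guaranteed to be free of local minima):

```python
import numpy as np

def potential(p, goal, obstacles, k_rep=1.0):
    """Attractive quadratic well at the goal plus repulsive peaks at obstacles."""
    u = 0.5 * np.sum((p - goal) ** 2)
    for obs in obstacles:
        u += k_rep / (np.linalg.norm(p - obs) + 1e-6)
    return u

def descend(start, goal, obstacles, step=0.05, iters=2000, h=1e-4):
    """Follow the negative numerical gradient of the field."""
    p = np.asarray(start, dtype=float)
    for _ in range(iters):
        g = np.array([(potential(p + d, goal, obstacles)
                       - potential(p - d, goal, obstacles)) / (2 * h)
                      for d in (np.array([h, 0.0]), np.array([0.0, h]))])
        p -= step * g
    return p

goal = np.array([5.0, 5.0])
end = descend([0.0, 0.0], goal, obstacles=[np.array([2.5, 2.4])])
# `end` settles close to the goal after skirting the obstacle
```

    The equilibrium sits slightly off the goal because the residual repulsion of the obstacle balances the attractive pull; a steering controller driven by this gradient inherits the same behaviour.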

  8. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    Science.gov (United States)

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
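
    The logic of a fixed-size binomial plan can be illustrated by simulation: count leaves whose mite load exceeds the tally threshold and classify the block against a proportion cutoff. A toy Monte Carlo sketch (Poisson leaf counts and the 0.5 proportion cutoff are simplifying assumptions; the study modeled aggregated counts via Taylor's power law):

```python
import math
import random

def classify_block(mean_mites, n_leaves=30, tally=2, prop_cutoff=0.5, rng=None):
    """One fixed-size sampling bout: count leaves with more than `tally`
    motile mites; flag the block if the infested proportion exceeds
    `prop_cutoff`. Leaf counts drawn as Poisson via Knuth's method."""
    rng = rng or random.Random()
    infested = 0
    for _ in range(n_leaves):
        limit, k, prod = math.exp(-mean_mites), 0, 1.0
        while True:
            prod *= rng.random()
            if prod <= limit:
                break
            k += 1
        if k > tally:
            infested += 1
    return infested / n_leaves > prop_cutoff

def prob_treat(mean_mites, reps=500, seed=0):
    """Monte Carlo operating characteristic: P(flag treatment | mean density)."""
    rng = random.Random(seed)
    return sum(classify_block(mean_mites, rng=rng) for _ in range(reps)) / reps
```

    Under these assumptions prob_treat is near 0 at low densities and near 1 at high densities, which is exactly the S-shaped operating characteristic curve the abstract evaluates.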

  9. Secure transfer of surveillance data over Internet using Virtual Private Network technology. Field trial between STUK and IAEA

    Energy Technology Data Exchange (ETDEWEB)

    Smartt, H.; Martinez, R.; Caskey, S. [Sandia National Laboratories (United States); Honkamaa, T.; Ilander, T.; Poellaenen, R. [Radiation and Nuclear Safety Authority, Helsinki (Finland); Jeremica, N.; Ford, G. [Nokia (Finland)

    2000-01-01

    One of the primary concerns of employing remote monitoring technologies for IAEA safeguards applications is the high cost of data transmission. Transmitting data over the Internet has often been shown to be less expensive than other data transmission methods. However, data security on the Internet is often considered to be at a low level. Virtual Private Networks (VPNs) have emerged as a solution to this problem. A field demonstration was implemented to evaluate the use of Virtual Private Networks (via the Internet) as a means for data transmission. Evaluation points included security, reliability and cost. The existing Finnish Remote Environmental Monitoring System, located at the STUK facility in Helsinki, Finland, served as the field demonstration system. Sandia National Laboratories (SNL) established a Virtual Private Network between STUK (Radiation and Nuclear Safety Authority) Headquarters in Helsinki, Finland, and IAEA Headquarters in Vienna, Austria. Data from the existing STUK Remote Monitoring System was viewed at the IAEA via this network. The Virtual Private Network link was established in a proper manner, which guarantees data security. Encryption was verified using a network sniffer. No problems were encountered during the test. In the test system, fixed costs were higher than in the previous system, which utilized telephone lines. On the other hand, transmission and operating costs are very low. Therefore, with low data amounts the test system is not cost-effective, but if the data amount is tens of Megabytes per day, the use of Virtual Private Networks and the Internet will be economically justifiable. A cost-benefit analysis should be performed for each site due to significant variables. (orig.)

  10. Seasonal rockfall risk assessment along transportation network: a sample from Mallorca (Spain)

    Science.gov (United States)

    Mateos, Rosa Maria; Garcia, Inmaculada; Reichenbach, Paola; Herrera, Gerardo; Rius, Joan; Aguilo, Raul; Roldan, Francisco J.

    2014-05-01

    In the literature there are numerous works focusing on rockfall risk assessment along transportation corridors which take into account several factors, including the annual average traffic volume. Few papers examine in detail examples with a strong seasonal distribution of people travelling along roads, in particular in regions of great importance for tourism. In these areas, potential blockages along the road network can cause significant economic losses, considering not only direct costs but also indirect ones related to a reduction in tourism arrivals, with the consequent loss of jobs and profits. In this work we present a methodology for rockfall risk assessment focusing on the reliability and applicability of the evaluation in a test site located on the island of Mallorca, a region which welcomes over 11.3 million visitors per year and where tourism represents the main source of income (83% of its GDP). The Ma-10 road (111 km), located in the north-western sector of the island along the coastal face of the Tramuntana range, has been affected by 85 rockfall events during the past 18 years, which caused repair costs valued at approximately 2M Euro (Mateos et al., 2013). Rockfalls are triggered by heavy rainfall and freeze-thaw cycles and, for these reasons, autumn and winter can be considered the most hazardous seasons (Mateos et al., 2012). The road has heavy traffic estimated at 7,200 vehicles per day on average, with a seasonal variation in the number of people travelling in vehicles, the summer being most prominent (up to 6 times the average) due to the pattern of tourism arrivals. To analyse the seasonal rockfall hazard and risk along the Ma-10 road, we obtained the extent of the areas potentially subject to rockfall hazards using STONE, a physically-based rockfall simulation computer program (Guzzetti et al, 2002). The availability of historical rockfalls mapped in detail allowed checking the STONE results, and identifying a hazardous area on the southern

  11. A Comparative Field Monitoring of Column Shortenings in Tall Buildings Using Wireless and Wired Sensor Network Systems

    Directory of Open Access Journals (Sweden)

    Sungho Lee

    2016-01-01

    Full Text Available A comparative field measurement for column shortening of tall buildings is presented in this study, with a focus on the reliability and stability of a wireless sensor network. A wireless sensor network was used for monitoring the column shortenings of a 58-story building under construction. The wireless sensor network, which was composed of sensor and master nodes, employed the ultra-high-frequency band and CDMA communication methods. To evaluate the reliability and stability of the wireless sensor network system, the column shortenings were also measured using a conventional wired monitoring system. Two vibrating wire gauges were installed in each of the selected 7 columns and 3 walls. Measurements for the selected columns and walls were collected for 270 days after casting of the concrete. The results measured by the wireless sensor network were compared with the results of the conventional method. The strains and column shortenings measured using both methods showed good agreement for all members. It was verified that the column shortenings of tall buildings could be monitored using the wireless sensor network system with its reliability and stability.

  12. From Social Integration to Social Isolation: The Relationship Between Social Network Types and Perceived Availability of Social Support in a National Sample of Older Canadians.

    Science.gov (United States)

    Harasemiw, Oksana; Newall, Nancy; Shooshtari, Shahin; Mackenzie, Corey; Menec, Verena

    2017-01-01

    It is well-documented that social isolation is detrimental to health and well-being. What is less clear is what types of social networks allow older adults to get the social support they need to promote health and well-being. In this study, we identified social network types in a national sample of older Canadians and explored whether they are associated with perceived availability of different types of social support (affectionate, emotional, tangible, and positive social interactions). Data were drawn from the baseline questionnaire of the Canadian Longitudinal Study on Aging for participants aged 65-85 (unweighted n = 8,782). Cluster analyses revealed six social network groups. Social support generally declined as social networks became more restricted; however, different patterns of social support availability emerged for different social network groups. These findings suggest that certain types of social networks place older adults at risk of having specific social support needs unmet.

  13. Irreversibility line and magnetic field dependence of the critical current in superconducting MgB2 bulk samples

    CERN Document Server

    Gioacchino, D D; Tripodi, P; Grimaldi, G

    2003-01-01

    The third harmonic components of the ac susceptibility of MgB2 bulk samples have been measured as a function of applied magnetic fields, together with standard magnetization cycles. The irreversibility line (IL) of the magnetic field has been extracted from the onset of the third harmonic components. Using a (1 - t)^alpha glass/liquid best fit, where alpha = 1.27, the IL shows a coherence length xi divergence with exponent nu = 0.63, which indicates a 3D behaviour. Moreover, using the numerical solution of the non-linear magnetic diffusion equation, considering the creep model in a 3D vortex glass, a good description of the vortex dynamics has been obtained. The behaviour of the magnetization amplitude (approx. Hz) and the ac susceptibility signals (kHz), at different applied magnetic fields, 3.5 T < Hdc < 4.5 T, and at the reduced temperature 0.86 < t < 0.93 (T = 22 K), shows that the superconducting dynamic response of vortices in the MgB2 samples is not evidently dependent on the f...
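The (1 - t)^alpha irreversibility-line fit described above reduces to a straight line in log-log coordinates. A minimal sketch (the points are synthetic, generated from the model itself; H0 and the grid of reduced temperatures are arbitrary illustrative choices, not the paper's data):

```python
import numpy as np

# Irreversibility-line model: H_irr(t) = H0 * (1 - t)^alpha, with t = T/Tc.
# Synthetic illustration only: points are generated from the model itself.
H0, alpha_true = 12.0, 1.27
t = np.linspace(0.80, 0.95, 10)
H_irr = H0 * (1.0 - t) ** alpha_true

# Linear fit in log-log coordinates: log H = log H0 + alpha * log(1 - t)
slope, intercept = np.polyfit(np.log(1.0 - t), np.log(H_irr), 1)
alpha_fit, H0_fit = slope, np.exp(intercept)
```

On real data the fitted exponent, rather than being recovered exactly as here, is what discriminates between 2D and 3D vortex-glass behaviour.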

  14. Field Research and Laboratory Sample Analysis of Dust-Water-Organics-Life from Mars Analogue Extreme Environments

    Science.gov (United States)

    Foing, Bernard H.; Ehrenfreund, Pascale; ILEWG EuroMoonMars Team

    2015-08-01

    We describe results from the data analysis from a series of field research campaigns (ILEWG EuroMoonMars campaigns 2009* to 2013) in the extreme environment of the Utah desert relevant to habitability and astrobiology in Mars environments, and in order to help in the interpretation of Mars missions measurements from orbit (MEX, MRO) or from the surface (MER, MSL). We discuss results relevant to the scientific study of the habitability factors influenced by the properties of dust, organics, water history and the diagnostics and characterisation of microbial life. We also discuss perspectives for the preparation of future lander and sample return missions. We deployed at Mars Desert Research Station, Utah, a suite of instruments and techniques including sample collection, context imaging from remote to local and microscale, drilling, spectrometers and life sensors. We analyzed how geological and geochemical evolution affected local parameters (mineralogy, organics content, environment variations) and the habitability and signature of organics and biota. We find high diversity in the composition of soil samples even when collected in close proximity, low abundances of detectable PAHs and amino acids, and the presence of biota of all three domains of life with significant heterogeneity. An extraordinary variety of putative extremophiles was observed. A dominant factor seems to be soil porosity and lower clay-sized particle content. A protocol was developed for sterile sampling, contamination issues, and the diagnostics of biodiversity via PCR and DGGE analysis in soil and rock samples. We compare 2009 campaign results to new measurements from the 2010-2013 campaigns: comparison between remote sensing and in-situ measurements; the study of minerals; the detection of organics and signs of life. References: * Foing, Stoker, Ehrenfreund (Editors, 2011) "Astrobiology Field Research in Moon/Mars Analogue Environments", Special Issue of International Journal of Astrobiology

  15. SampleCNN: End-to-End Deep Convolutional Neural Networks Using Very Small Filters for Music Classification

    Directory of Open Access Journals (Sweden)

    Jongpil Lee

    2018-01-01

    Full Text Available Convolutional Neural Networks (CNNs) have been applied to diverse machine learning tasks for different modalities of raw data in an end-to-end fashion. In the audio domain, a raw waveform-based approach has been explored to directly learn hierarchical characteristics of audio. However, the majority of previous studies have limited their model capacity by taking a frame-level structure similar to short-time Fourier transforms. We previously proposed a CNN architecture which learns representations using sample-level filters beyond typical frame-level input representations. The architecture showed comparable performance to the spectrogram-based CNN model in music auto-tagging. In this paper, we extend the previous work in three ways. First, since the sample-level model requires much longer training time, we progressively downsample the input signals and examine how this affects the performance. Second, we extend the model using a multi-level and multi-scale feature aggregation technique and subsequently conduct transfer learning for several music classification tasks. Finally, we visualize filters learned by the sample-level CNN in each layer to identify hierarchically learned features and show that they are sensitive to log-scaled frequency.
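The sample-level idea can be illustrated with a stack of length-3, stride-3 1-D convolutions operating directly on the raw waveform, so each layer triples the receptive field. A minimal numpy sketch (random weights; the layer width of 16 and depth of 9 are illustrative choices, not the paper's architecture):

```python
import numpy as np

def conv1d_block(x, weights, stride=3):
    """One sample-level block: 1-D convolution (filter length 3, stride 3)
    followed by ReLU. x: (length, in_ch); weights: (3, in_ch, out_ch)."""
    k, in_ch, out_ch = weights.shape
    n_out = (x.shape[0] - k) // stride + 1
    out = np.zeros((n_out, out_ch))
    for i in range(n_out):
        patch = x[i * stride:i * stride + k]          # (k, in_ch)
        out[i] = np.maximum(
            np.tensordot(patch, weights, axes=([0, 1], [0, 1])), 0.0)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((3 ** 9, 1))   # ~19k raw audio samples, mono
for depth in range(9):                 # 9 stacked sample-level blocks
    w = rng.standard_normal((3, x.shape[1], 16)) * 0.1
    x = conv1d_block(x, w)
# Each stride-3 layer shrinks length by 3x: 3^9 samples -> 1 feature frame
```

The receptive field after n such layers is 3^n samples, which is how very small filters still cover long waveform contexts.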

  16. Air sampling and analysis method for volatile organic compounds (VOCs) related to field-scale mortality composting operations.

    Science.gov (United States)

    Akdeniz, Neslihan; Koziel, Jacek A; Ahn, Hee-Kwon; Glanville, Thomas D; Crawford, Benjamin P; Raman, D Raj

    2009-07-08

    In biosecure composting, animal mortalities are so completely isolated during the degradation process that visual inspection cannot be used to monitor progress or the process status. One novel approach is to monitor the volatile organic compounds (VOCs) released by decaying mortalities and to use them as biomarkers of the process status. A new method was developed to quantitatively analyze potential biomarkers--dimethyl disulfide, dimethyl trisulfide, pyrimidine, acetic acid, propanoic acid, 3-methylbutanoic acid, pentanoic acid, and hexanoic acid--from field-scale biosecure mortality composting units. This method was based on collection of air samples from the inside of biosecure composting units using portable pumps and solid phase microextraction (SPME). Among four SPME fiber coatings, 85 microm CAR/PDMS was shown to extract the greatest amount of target analytes during a 1 h sampling time. The calibration curves had high correlation coefficients, ranging from 96 to 99%. Differences between the theoretical concentrations and those estimated from the calibration curves ranged from 1.47 to 20.96%. Method detection limits of the biomarkers were between 11 pptv and 572 ppbv. The applicability of the prepared calibration curves was tested for air samples drawn from field-scale swine mortality composting test units. Results show that the prepared calibration curves were applicable to the concentration ranges of potential biomarker compounds in a biosecure animal mortality composting unit.
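The quantitation workflow above — fit a linear calibration curve per biomarker, invert it to estimate concentrations, and check the theoretical-vs-estimated difference — can be sketched as follows (all numbers are illustrative, not the paper's calibration data):

```python
import numpy as np

# Hypothetical calibration data for one biomarker: known standard
# concentrations (ppbv) vs. measured detector response (arbitrary units).
conc = np.array([5.0, 10.0, 50.0, 100.0, 250.0, 500.0])
resp = np.array([12.1, 23.8, 121.5, 239.0, 601.2, 1195.0])

slope, intercept = np.polyfit(conc, resp, 1)
r = np.corrcoef(conc, resp)[0, 1]          # correlation coefficient

def quantify(response):
    """Invert the calibration curve to estimate concentration (ppbv)."""
    return (response - intercept) / slope

# Percent difference between theoretical and back-calculated concentration
est = quantify(resp)
pct_diff = 100.0 * np.abs(est - conc) / conc
```

The same pattern, repeated per analyte, gives the per-compound correlation coefficients and theoretical-vs-estimated differences the abstract reports.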

  17. Spatial Distribution and Minimum Sample Size for Overwintering Larvae of the Rice Stem Borer Chilo suppressalis (Walker) in Paddy Fields.

    Science.gov (United States)

    Arbab, A

    2014-10-01

    The rice stem borer, Chilo suppressalis (Walker), feeds almost exclusively in paddy fields in most regions of the world. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling procedures, and adopting precise agricultural techniques. Field experiments were conducted during 2011 and 2012 to estimate the spatial distribution pattern of the overwintering larvae. Data were analyzed using five distribution indices and two regression models (Taylor and Iwao). All of the indices and Taylor's model indicated a random spatial distribution pattern of the rice stem borer overwintering larvae. Iwao's patchiness regression was inappropriate for our data as shown by the non-homogeneity of variance, whereas Taylor's power law fitted the data well. The coefficients of Taylor's power law for the combined 2 years of data were a = -0.1118, b = 0.9202 ± 0.02, and r2 = 96.81. Taylor's power law parameters were used to compute the minimum sample size needed to estimate populations at three fixed precision levels, 5, 10, and 25%, at the 0.05 probability level. Results based on these equation parameters suggest that the minimum sample sizes needed for a precision level of 0.25 were 74 and 20 rice stubbles when the average density is near 0.10 and 0.20 larvae per rice stubble, respectively.
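The fixed-precision sample size computation can be written out directly from Taylor's power law, s^2 = a * m^b. A sketch under stated assumptions: the reported intercept -0.1118 is taken as log10(a), and n = (t/D)^2 * a * m^(b-2) with t = 1.96 (0.05 probability) is one common convention — the paper may use a slightly different form, so the absolute numbers need not match its 74 and 20.

```python
# Taylor's power law coefficients from the abstract; the intercept is
# assumed to be on the log10 scale, so a = 10**(-0.1118).
log10_a, b = -0.1118, 0.9202
a = 10.0 ** log10_a

def min_sample_size(m, D, t=1.96):
    """Minimum sample size for mean density m (larvae per stubble) at fixed
    precision D (D = standard error / mean): n = (t/D)^2 * a * m^(b-2)."""
    return (t / D) ** 2 * a * m ** (b - 2.0)

n_low = min_sample_size(0.10, 0.25)    # sparse infestation
n_high = min_sample_size(0.20, 0.25)   # denser infestation
# Because b < 2, the required sample size falls as mean density rises,
# matching the abstract's 74 -> 20 trend from 0.10 to 0.20 larvae/stubble.
```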

  18. Characterization of Intracellular and Extracellular Saxitoxin Levels in Both Field and Cultured Alexandrium spp. Samples from Sequim Bay, Washington

    Directory of Open Access Journals (Sweden)

    Vera L. Trainer

    2008-05-01

    Full Text Available Traditionally, harmful algal bloom studies have primarily focused on quantifying toxin levels contained within the phytoplankton cells of interest. In the case of paralytic shellfish poisoning toxins (PSTs), intracellular toxin levels and the effects of dietary consumption of toxic cells by planktivores have been well documented. However, little information is available regarding the levels of extracellular PSTs that may leak or be released into seawater from toxic cells during blooms. In order to fully evaluate the risks of harmful algal bloom toxins in the marine food web, it is necessary to understand all potential routes of exposure. In the present study, extracellular and intracellular PST levels were measured in field seawater samples (collected weekly from June to October 2004-2007) and in Alexandrium spp. culture samples isolated from Sequim Bay, Washington. Measurable levels of intra- and extracellular toxins were detected in both field and culture samples via receptor binding assay (RBA) and an enzyme-linked immunosorbent assay (ELISA). Characterization of the PST toxin profile in the Sequim Bay isolates by pre-column oxidation and HPLC-fluorescence detection revealed that gonyautoxins 1 and 4 made up 65 ± 9.7% of the total PSTs present. Collectively, these data confirm that extracellular PSTs are present during blooms of Alexandrium spp. in the Sequim Bay region.

  19. [Determination of seven aromatic amines in hair dyes by capillary electrophoresis coupled with field-amplified sample stacking].

    Science.gov (United States)

    Lu, Yuchao; Wang, Haiyan; Song, Pingping; Liu, Shuhui

    2011-11-01

    A method for the determination of 4,4'-methylenedianiline, aniline, o-anisidine, 3,4-dimethylaniline, p-anisidine, 3-aminophenol, and 1-naphthylamine in hair dyes was established by capillary electrophoresis coupled with field-amplified sample stacking. The optimum running buffer was an aqueous solution containing 0.15 mol/L NaH2PO4 and 0.015 mol/L trolamine (pH 2.3), and baseline separation was achieved within 6.5 min. The effects of the phosphoric acid and acetonitrile concentrations in the sample matrix, the length of the preinjection water plug, and the sample injection voltage and time on the stacking efficiency were investigated. The optimum stacking conditions for the real samples included a water plug of 3.45 kPa (0.5 psi) x 6 s, the addition of 40% (v/v) acetonitrile and 0.6 x 10^-3 mol/L phosphoric acid to the sample solution, and a sample injection of 10 kV x 10 s. The seven analytes all showed good linearities (R2 > 0.996) within 3-1000 microg/L, with detection limits in the range of 0.26-2.75 microg/L. The method was shown to provide 1-3 orders of magnitude of sensitivity enhancement. 3-Aminophenol was found in two black hair dyes, at 7.32 mg/g and 1.34 mg/g, respectively. The recoveries ranged from 74%-108%. The proposed approach may find widespread applications for the determination of trace aromatic amines and other cationic analytes in various sample matrixes.
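Field-amplified sample stacking has a simple idealized model: the local electric field in the low-conductivity sample plug is amplified by the conductivity ratio relative to the running buffer, which is what boosts analyte velocity at the zone boundary and hence sensitivity. A back-of-envelope sketch (all conductivity and field values below are assumptions for illustration, not measured values from this method):

```python
# Idealized field amplification in the sample zone during electrokinetic
# injection. Current continuity forces E_zone * kappa_zone to be constant
# along the capillary, so the dilute (low-conductivity) sample plug sees
# a field amplified by gamma = kappa_bge / kappa_sample.
kappa_bge = 1.0e-1      # S/m, running buffer conductivity (assumed)
kappa_sample = 1.0e-4   # S/m, diluted aqueous sample (assumed)
E_bulk = 3.0e4          # V/m, nominal field during injection (assumed)

gamma = kappa_bge / kappa_sample       # field enhancement factor
E_sample_zone = gamma * E_bulk         # amplified field in the sample plug

# The ideal stacking gain scales roughly with gamma; real gains are lower
# because diffusion, laminar backflow and matrix effects degrade the limit.
```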

  20. Combination of microsecond and nanosecond pulsed electric field treatments for inactivation of Escherichia coli in water samples.

    Science.gov (United States)

    Žgalin, Maj Kobe; Hodžić, Duša; Reberšek, Matej; Kandušer, Maša

    2012-10-01

    Inactivation of microorganisms with pulsed electric fields is one of the nonthermal methods most commonly used in biotechnological applications such as liquid food pasteurization and water treatment. In this study, the effects of microsecond and nanosecond pulses on inactivation of Escherichia coli in distilled water were investigated. Bacterial colonies were counted on agar plates, and the count was expressed as colony-forming units per milliliter of bacterial suspension. Inactivation of bacterial cells was shown as the reduction of colony-forming units per milliliter of treated samples compared to untreated control. According to our results, when using microsecond pulses the level of inactivation increases with higher electric field strengths and with the number of pulses delivered. An almost 2-log reduction in bacterial counts was achieved at a field strength of 30 kV/cm with eight pulses, and a 4.5-log reduction was observed at the same field strength using 48 pulses. Extending the duration of microsecond pulses from 100 to 250 μs showed no improvement in inactivation. Nanosecond pulses alone did not have any detectable effect on inactivation of E. coli regardless of the treatment time, but a significant 3-log reduction was achieved in combination with microsecond pulses.
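The reported reductions are log10 ratios of viable counts. A short helper (the CFU counts below are assumed round numbers chosen to reproduce roughly 2-log and 4.5-log reductions, not the paper's raw plate counts):

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Log10 reduction in viable count (CFU/mL) vs. untreated control."""
    return math.log10(cfu_control / cfu_treated)

# Illustrative counts: ~2-log at 30 kV/cm x 8 pulses,
# ~4.5-log at 30 kV/cm x 48 pulses.
r8 = log_reduction(1.0e7, 1.0e5)
r48 = log_reduction(1.0e7, 3.16e2)
```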

  1. A Network Analysis of the Teachers and Graduate Students’ Research Topics in the Field of Mass Communication

    Directory of Open Access Journals (Sweden)

    Ming-Shu Yuan

    2013-06-01

    Full Text Available The completion of a master's thesis requires the advisor's guidance on topic selection, data collection, analysis, interpretation and writing. The advisory committee's input also contributes to the work. This study conducted content analysis and network analysis on a sample of 547 master's theses from eight departments of the College of Journalism and Communications of Shih Hsin University to examine the relationships between the advisors and committee members as well as the connections of research topics. The results showed that the topic "lifestyle" has attracted cross-department research interest in the college. The academic network of the college is rather loose, and serving university administration duties may have increased a faculty member's centrality in the network. The Department of Communications Management and the Graduate Institute of Communications served as the bridges for inter-departmental communication in the network. One can understand the interrelations among professors and departments through network analysis of theses, so as to identify the characteristics of each department, as well as to reveal the invisible relations of the academic network and scholarly communication. [Article content in Chinese]
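The abstract's notions of centrality and of departments acting as "bridges" can be made concrete on a toy graph: degree centrality counts ties, and a bridging unit is one whose removal disconnects the network. A hypothetical sketch (departments A-F and their ties are invented, not the thesis data):

```python
# Toy co-advising network: an edge means two units shared thesis committee
# members. 'C' and 'D' are built to act as bridges between two triangles.
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("D", "F"), ("E", "F")}
nodes = sorted({n for e in edges for n in e})
adj = {n: set() for n in nodes}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def connected(adj, skip=None):
    """BFS connectivity test, optionally ignoring one node."""
    keep = [n for n in adj if n != skip]
    seen, stack = {keep[0]}, [keep[0]]
    while stack:
        for m in adj[stack.pop()]:
            if m != skip and m not in seen:
                seen.add(m)
                stack.append(m)
    return len(seen) == len(keep)

degree = {n: len(adj[n]) for n in nodes}               # degree centrality
cut_nodes = [n for n in nodes if not connected(adj, skip=n)]
# Removing a bridging unit (here 'C' or 'D') splits the network in two
```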

  2. Gas and Isotope Geochemistry of 81 Steam Samples from Wells in The Geysers Geothermal Field, Sonoma and Lake Counties, California

    Science.gov (United States)

    Lowenstern, Jacob B.; Janik, Cathy J.; Fahlquist, Lynne; Johnson, Linda S.

    1999-01-01

    The Geysers geothermal field in northern California, with about 2000-MW electrical capacity, is the largest geothermal field in the world. Despite its importance as a resource and as an example of a vapor-dominated reservoir, very few complete geochemical analyses of the steam have been published (Allen and Day, 1927; Truesdell and others, 1987). This report presents data from 90 steam, gas, and condensate samples from wells in The Geysers geothermal field in northern California. Samples were collected between 1978 and 1991. Well attributes include sampling date, well name, location, total depth, and the wellhead temperature and pressure at which the sample was collected. Geochemical characteristics include the steam/gas ratio, composition of noncondensable gas (relative proportions of CO2, H2S, He, H2, O2, Ar, N2, CH4, and NH3), and isotopic values for δD and δ18O of H2O, δ13C of CO2, and δ34S of H2S. The compilation includes 81 analyses from 74 different production wells, 9 isotopic analyses of steam condensate pumped into injection wells, and 5 complete geochemical analyses on gases from surface fumaroles and bubbling pools. Most samples were collected as saturated steam and plot along the liquid-water/steam boiling curve. Steam-to-gas ratios are highest in the southeastern part of the geothermal field and lowest in the northwest, consistent with other studies. Wells in the Northwest Geysers are also enriched in N2/Ar, CO2 and CH4, δD, and δ18O. Well discharges from the Southeast Geysers are high in steam/gas and have isotopic compositions and N2/Ar ratios consistent with recharge by local meteoric waters. Samples from the Central Geysers show characteristics found in both the Southeast and Northwest Geysers. Gas and steam characteristics of well discharges from the Northwest Geysers are consistent with input of components from a high-temperature reservoir containing carbon-rich gases derived from the host Franciscan rocks.

  3. DNA aggregation and cleavage in CGE induced by high electric field in aqueous solution accompanying electrokinetic sample injection.

    Science.gov (United States)

    Ye, Xiaoxue; Mori, Satomi; Xu, Zhongqi; Hayakawa, Shinjiro; Hirokawa, Takeshi

    2013-12-01

    The phenomenon of peak area decrease due to high injection voltage (Vinj, e.g., 10-30 kV, 200-600 V/cm in the 50 cm capillary) was found in the analysis of very dilute DNA fragments by electrokinetic supercharging-CGE. The possibility of DNA cleavage in aqueous solution was suggested, in addition to the aggregation phenomenon that is already known. The analysis of intentionally voltage-affected fragments (at 200 V/cm) also showed decreased peak areas depending on the time of the voltage being applied. Computer simulation suggested that a high electric field (a few kV/cm or more) could be generated partly between the electrode and the capillary end during the electrokinetic injection (EKI) process. After thorough experimental verification, it was found that the factors affecting the damage during EKI were the magnitude of the electric field, the distance between the tips of the electrode and capillary (De/c), the sample concentration, and the traveling time during EKI in sample vials. Furthermore, these factors correlate with each other. A low conductivity of the diluted sample would cause a high electric field (over a few hundred volts per centimeter), while a longer De/c results in a longer traveling time during EKI, which may cause a larger degree of damage (aggregation and cleavage) to the DNA fragments. As an important practical implication of this study, when dilute DNA fragments (sub-mg/L) are to be analyzed by CGE using EKI, the injection voltage should be kept as low as possible. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Eva González-Parada

    2017-01-01

    Full Text Available Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage.

  5. A Social Potential Fields Approach for Self-Deployment and Self-Healing in Hierarchical Mobile Wireless Sensor Networks.

    Science.gov (United States)

    González-Parada, Eva; Cano-García, Jose; Aguilera, Francisco; Sandoval, Francisco; Urdiales, Cristina

    2017-01-09

    Autonomous mobile nodes in mobile wireless sensor networks (MWSN) allow self-deployment and self-healing. In both cases, the goals are: (i) to achieve adequate coverage; and (ii) to extend network life. In dynamic environments, nodes may use reactive algorithms so that each node locally decides when and where to move. This paper presents a behavior-based deployment and self-healing algorithm based on the social potential fields algorithm. In the proposed algorithm, nodes are attached to low cost robots to autonomously navigate in the coverage area. The proposed algorithm has been tested in environments with and without obstacles. Our study also analyzes the differences between non-hierarchical and hierarchical routing configurations in terms of network life and coverage.
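The social potential fields rule both records describe assigns each pair of nodes a force with short-range repulsion and longer-range attraction; every node then moves along its local net force, which yields both self-deployment (spreading to cover) and self-healing (closing gaps). A minimal sketch (the force-law exponents and gains are illustrative assumptions, not the paper's tuning):

```python
import math

# Pairwise social force: f(r) = -c1/r**s1 + c2/r**s2 with s1 > s2, so
# repulsion dominates at short range and attraction at long range.
C1, S1 = 1.0, 3.0     # repulsion gain/exponent (assumed)
C2, S2 = 1.0, 1.0     # attraction gain/exponent (assumed)

def pair_force(r):
    """Signed force magnitude: positive = attraction toward the neighbor."""
    return -C1 / r ** S1 + C2 / r ** S2

def net_force(positions, i):
    """Net social force on node i; each node moves along this vector."""
    fx = fy = 0.0
    xi, yi = positions[i]
    for j, (xj, yj) in enumerate(positions):
        if j == i:
            continue
        dx, dy = xj - xi, yj - yi
        r = math.hypot(dx, dy)
        f = pair_force(r)
        fx += f * dx / r
        fy += f * dy / r
    return fx, fy

# Equilibrium spacing: f(r) = 0 at r = (C1/C2)**(1/(S1-S2)) = 1.0 here
```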

  6. Analogue network for the study of electric and magnetic fields with cylindrical symmetry

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez del Rio, C.; Santiago, S.; Verdaguer, F.

    1960-07-01

    A resistor network is described which can be used to solve the partial differential equations for the scalar potential and for the only component of the vector potential in problems with cylindrical symmetry. To calculate the values of the resistors a general method is presented valid for any equation which can be solved by the resistor network analogy. (Author) 2 refs.
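The resistor-network analogue corresponds to a finite-difference discretization: each node's potential is the conductance-weighted average of its neighbors. A sketch for the scalar potential between two coaxial cylinders, where cylindrical symmetry reduces Laplace's equation to V'' + (1/r)V' = 0 (the geometry and grid are illustrative choices, not the original apparatus):

```python
import math

# Radial grid between inner and outer electrodes
r_in, r_out, n = 1.0, 2.0, 41
h = (r_out - r_in) / (n - 1)
r = [r_in + i * h for i in range(n)]
V = [0.0] * n
V[0] = 1.0                      # inner electrode at unit potential
                                # outer electrode V[-1] stays grounded

for _ in range(10000):          # Jacobi relaxation, as in a resistor net
    Vn = V[:]
    for i in range(1, n - 1):
        cp = 1.0 / h**2 + 1.0 / (2.0 * r[i] * h)   # "conductance" to i+1
        cm = 1.0 / h**2 - 1.0 / (2.0 * r[i] * h)   # "conductance" to i-1
        Vn[i] = (cp * V[i + 1] + cm * V[i - 1]) / (cp + cm)
    V = Vn

# Analytic solution for the coaxial geometry: V(r) = ln(r_out/r)/ln(r_out/r_in)
exact = [math.log(r_out / x) / math.log(r_out / r_in) for x in r]
err = max(abs(a - b) for a, b in zip(V, exact))
```

The cp/cm coefficients play exactly the role of the unequal resistors the analogue network uses to encode the 1/r term.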

  7. Exponentially-Biased Ground-State Sampling of Quantum Annealing Machines with Transverse-Field Driving Hamiltonians

    Science.gov (United States)

    Mandra, Salvatore

    2017-01-01

    We study the performance of the D-Wave 2X quantum annealing machine on systems with well-controlled ground-state degeneracy. While obtaining the ground state of a spin-glass benchmark instance represents a difficult task, the gold standard for any optimization algorithm or machine is to sample all solutions that minimize the Hamiltonian with more or less equal probability. Our results show that while naive transverse-field quantum annealing on the D-Wave 2X device can find the ground-state energy of the problems, it is not well suited in identifying all degenerate ground-state configurations associated to a particular instance. Even worse, some states are exponentially suppressed, in agreement with previous studies on toy model problems [New J. Phys. 11, 073021 (2009)]. These results suggest that more complex driving Hamiltonians are needed in future quantum annealing machines to ensure a fair sampling of the ground-state manifold.
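"Fair sampling" here means drawing every configuration in the degenerate ground-state manifold with (roughly) equal probability. On a toy instance the manifold can be found by exhaustive enumeration; a frustrated antiferromagnetic triangle (an illustrative choice, not a D-Wave benchmark instance) has six degenerate ground states:

```python
from itertools import product
from collections import Counter
import random

# Frustrated Ising triangle: H(s) = s0*s1 + s1*s2 + s0*s2, spins s = +/-1.
# No assignment satisfies all three antiferromagnetic bonds, so the
# ground state is 6-fold degenerate.
def energy(s):
    return s[0] * s[1] + s[1] * s[2] + s[0] * s[2]

configs = list(product([-1, 1], repeat=3))
e_min = min(energy(s) for s in configs)
ground = [s for s in configs if energy(s) == e_min]   # degenerate manifold

# A fair sampler draws each ground state with equal probability -- the
# gold standard the abstract says transverse-field annealing misses
# (where some states are exponentially suppressed instead).
random.seed(1)
draws = Counter(random.choice(ground) for _ in range(6000))
freqs = [draws[g] / 6000 for g in ground]
```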

  8. CO2, dO2/N2 and APO: observations from the Lutjewad, Mace Head and F3 platform flask sampling network

    NARCIS (Netherlands)

    Laan-Luijkx, van der I.T.; Karstens, U.; Steinbach, J.; Gerbig, C.; Sirignano, C.; Neubert, R.E.M.; Laan, van der S.; Meijer, H.A.J.

    2010-01-01

    We report results from our atmospheric flask sampling network for three European sites: Lutjewad in the Netherlands, Mace Head in Ireland and the North Sea F3 platform. The air samples from these stations are analyzed for their CO2 and O2 concentrations. In this paper we present the CO2 and O2 data

  9. CO2, δO2/N2 and APO : Observations from the Lutjewad, Mace Head and F3 platform flask sampling network

    NARCIS (Netherlands)

    Laan-Luijkx, I.T. van der; Karstens, U.; Steinbach, J.; Gerbig, C.; Sirignano, C.; Neubert, R.E.M.; Laan, S. van der; Meijer, H.A.J.

    2010-01-01

    We report results from our atmospheric flask sampling network for three European sites: Lutjewad in the Netherlands, Mace Head in Ireland and the North Sea F3 platform. The air samples from these stations are analyzed for their CO2 and O2 concentrations. In this paper we present the CO2 and O2 data
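APO in the title combines the two measured species into one tracer that is approximately conservative with respect to terrestrial biospheric exchange. The conventional definition (per meg units; the 1.1 O2:CO2 exchange ratio and the 350 ppm reference are the usual literature conventions, and the example values are invented, not Lutjewad data) can be coded directly:

```python
# Atmospheric Potential Oxygen:
#   APO = d(O2/N2) + (1.1 / X_O2) * (CO2 - 350)    [per meg]
# X_O2: atmospheric O2 mole fraction; 1.1: mean terrestrial biospheric
# O2:CO2 exchange ratio; 350 ppm: arbitrary reference concentration.
X_O2 = 0.2095
OR_BIO = 1.1

def apo(d_o2n2_permeg, co2_ppm):
    """APO in per meg from d(O2/N2) (per meg) and CO2 (ppm)."""
    return d_o2n2_permeg + (OR_BIO / X_O2) * (co2_ppm - 350.0)

# Illustrative flask pair: d(O2/N2) = -300 per meg at 390 ppm CO2
a = apo(-300.0, 390.0)
```

Because land-biosphere fluxes change O2 and CO2 in a ~1.1:1 ratio, they cancel in APO, leaving mainly oceanic and fossil-fuel signals.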

  10. Nutrient and pesticide contamination bias estimated from field blanks collected at surface-water sites in U.S. Geological Survey Water-Quality Networks, 2002–12

    Science.gov (United States)

    Medalie, Laura; Martin, Jeffrey D.

    2017-08-14

    Potential contamination bias was estimated for 8 nutrient analytes and 40 pesticides in stream water collected by the U.S. Geological Survey at 147 stream sites from across the United States, and representing a variety of hydrologic conditions and site types, for water years 2002–12. This study updates previous U.S. Geological Survey evaluations of potential contamination bias for nutrients and pesticides. Contamination is potentially introduced to water samples by exposure to airborne gases and particulates, from inadequate cleaning of sampling or analytic equipment, and from inadvertent sources during sample collection, field processing, shipment, and laboratory analysis. Potential contamination bias, based on frequency and magnitude of detections in field blanks, is used to determine whether or under what conditions environmental data might need to be qualified for the interpretation of results in the context of comparisons with background levels, drinking-water standards, aquatic-life criteria or benchmarks, or human-health benchmarks. Environmental samples for which contamination bias as determined in this report applies are those from historical U.S. Geological Survey water-quality networks or programs that were collected during the same time frame and according to the same protocols and that were analyzed in the same laboratory as field blanks described in this report. Results from field blanks for ammonia, nitrite, nitrite plus nitrate, orthophosphate, and total phosphorus were partitioned by analytical method; results from the most commonly used analytical method for total phosphorus were further partitioned by date. Depending on the analytical method, 3.8, 9.2, or 26.9 percent of environmental samples, the last of these percentages pertaining to all results from 2007 through 2012, were potentially affected by ammonia contamination. Nitrite contamination potentially affected up to 2.6 percent of environmental samples collected between 2002 and 2006 and

  11. Modeling Root Length Density of Field Grown Potatoes under Different Irrigation Strategies and Soil Textures Using Artificial Neural Networks

    DEFF Research Database (Denmark)

    Ahmadi, Seyed Hamid; Sepaskhah, A R; Andersen, Mathias Neumann

    2014-01-01

    Root length density (RLD) is a highly desirable parameter for use in crop growth modeling but difficult to measure under field conditions. Therefore, artificial neural networks (ANNs) were implemented to predict the RLD of field grown potatoes that were subject to three irrigation strategies and three soil textures with different soil water status and soil densities. The objectives of the study were to test whether soil textural information, soil water status, and soil density might be used by ANN to simulate RLD at harvest. In the study 63 data pairs were divided into data sets of training (80% of the data) and testing (20% of the data). A feed forward three-layer perceptron network and the sigmoid, hyperbolic tangent, and linear transfer functions were used for the ANN modeling. The RLDs (target variable) in different soil layers were predicted by nine ANNs representing combinations (models

  12. Sensitive enantioanalysis of β-blockers via field-amplified sample injection combined with water removal in microemulsion electrokinetic chromatography.

    Science.gov (United States)

    Ma, Yanhua; Zhang, Huige; Rahman, Zia Ur; Wang, Weifeng; Li, Xi; Chen, Hongli; Chen, Xingguo

    2014-10-01

    In this study, an on-line sample preconcentration technique, field-amplified sample injection combined with water removal by an electroosmotic flow (EOF) pump, was applied to realize a highly sensitive chiral analysis of β-blocker enantiomers by MEEKC. The introduction of a water plug into the capillary before the electrokinetic injection provided effective preconcentration of the chiral compounds. The water was then moved out of the column from the injection end under the effect of the EOF, which avoided dilution of the stacked β-blocker enantiomers by the water present in the separation buffer. Moreover, the addition of H3PO4 and methanol to the sample solution further improved the enhancement efficiency. Under optimized conditions, more than 2700-fold enhancement in sensitivity was obtained for each enantiomer of bupranolol (BU), alprenolol (AL), and propranolol (PRO) via electrokinetic injection. LODs were 0.10, 0.10, 0.12, 0.11, 0.02, and 0.02 ng/mL for S-BU, R-BU, S-AL, R-AL, S-PRO, and R-PRO, respectively. Eventually, the proposed method was successfully applied to the determination of BU, AL, and PRO in serum samples with good recoveries ranging from 93.4 to 98.2%. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Fuzzy logic scheme for tip-sample distance control for a low cost near field optical microscope

    Directory of Open Access Journals (Sweden)

    J.A. Márquez

    2013-12-01

    Full Text Available The control of the distance between the surface and the tip of a Scanning Near Field Optical Microscope (SNOM) is essential for reliable surface mapping. The control algorithm should be able to maintain the system at a constant distance between the tip and the surface. In this system, nanometric adjustments must be made in order to sense topographies at the same scale with an appropriate resolution. These kinds of devices vary in their properties over short periods of time, so a control algorithm capable of handling these changes is required. In this work a fuzzy logic control scheme is proposed in order to manage the changes the device might undergo over time, and to counter the effects of the non-linearity as well. Two inputs are used to program the rules inside the fuzzy logic controller: the difference between the reference signal and the sample signal (the error), and the speed at which it decreases or increases. A lock-in amplifier is used as data acquisition hardware to sample the high frequency signals used to produce the tuning fork oscillations. Once these variables are read, the control algorithm calculates a voltage output to move the piezoelectric device, approaching or withdrawing the tip-probe from the sample analyzed.
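The two-input fuzzy scheme described — fuzzify the error and its rate, fire a small rule table, and defuzzify to a piezo correction — can be sketched as follows (the membership breakpoints, rule outputs, and the min/weighted-average operators are illustrative assumptions, not the paper's controller tuning):

```python
def tri(x, a, b, c):
    """Triangular membership on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for normalized error / error-rate in [-1, 1]
def fuzzify(x):
    return {"neg": tri(x, -1.5, -1.0, 0.0),
            "zero": tri(x, -1.0, 0.0, 1.0),
            "pos": tri(x, 0.0, 1.0, 1.5)}

# Rule table: (error term, rate term) -> crisp piezo-voltage increment
RULES = {("neg", "neg"): -1.0, ("neg", "zero"): -0.6, ("neg", "pos"): 0.0,
         ("zero", "neg"): -0.4, ("zero", "zero"): 0.0, ("zero", "pos"): 0.4,
         ("pos", "neg"): 0.0, ("pos", "zero"): 0.6, ("pos", "pos"): 1.0}

def control(error, rate):
    """Min-AND rule firing with weighted-average defuzzification."""
    e, r = fuzzify(error), fuzzify(rate)
    num = den = 0.0
    for (te, tr), out in RULES.items():
        w = min(e[te], r[tr])          # rule firing strength
        num += w * out
        den += w
    return num / den if den else 0.0

u = control(0.5, 0.0)   # tip too far with no drift -> positive correction
```

Because the rule table, not a plant model, encodes the response, re-tuning for drifting tuning-fork properties only means adjusting memberships or outputs.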

  14. Influence of high-conductivity buffer composition on field-enhanced sample injection coupled to sweeping in CE.

    Science.gov (United States)

    Anres, Philippe; Delaunay, Nathalie; Vial, Jérôme; Thormann, Wolfgang; Gareil, Pierre

    2013-02-01

    The aim of this work was to clarify the mechanism taking place in field-enhanced sample injection coupled to sweeping and micellar EKC (FESI-Sweep-MEKC), with the utilization of two acidic high-conductivity buffers (HCBs), phosphoric acid or sodium phosphate buffer, in view of maximizing sensitivity enhancements. Using cationic model compounds in acidic media, a chemometric approach and simulations with SIMUL5 were implemented. Experimental design first enabled identification of the significant factors and their potential interactions. Simulation demonstrates the formation of moving boundaries during sample injection, which originate at the initial sample/HCB and HCB/buffer discontinuities and gradually change the compositions of HCB and BGE. With sodium phosphate buffer, the HCB conductivity increased during the injection, leading to a more efficient preconcentration by stacking (about 1.6 times) than with phosphoric acid alone, for which conductivity decreased during injection. For the same injection time at constant voltage, however, a lower amount of analytes was injected with sodium phosphate buffer than with phosphoric acid. Consequently sensitivity enhancements were lower for the whole FESI-Sweep-MEKC process. This is why, in order to maximize sensitivity enhancements, it is proposed to work with sodium phosphate buffer as HCB and to use constant current during sample injection. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. High Field In Vivo 13C Magnetic Resonance Spectroscopy of Brain by Random Radiofrequency Heteronuclear Decoupling and Data Sampling

    Science.gov (United States)

    Li, Ningzhi; Li, Shizhe; Shen, Jun

    2017-06-01

    In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under inhomogeneous B1 fields. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields in order to keep RF power deposition within the safety limits established by the US Food and Drug Administration (FDA). Alternatively, carboxylic/amide carbons are coupled to protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low-power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low-power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and carboxylic/amide 13C-MRS with the low-power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments of the human brain at very high magnetic fields (such as 11.7T), where the signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than at lower fields.
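The quadratic dependence of decoupling RF power on field strength noted above is easy to make concrete; a toy helper, with 3T as an arbitrary reference field:

```python
def relative_decoupling_power(b0_tesla, ref_tesla=3.0):
    """Decoupling RF power deposition relative to a reference field,
    using the quadratic scaling with B0 described in the abstract."""
    return (b0_tesla / ref_tesla) ** 2
```

By this scaling, decoupling at 11.7T deposits roughly 15 times the power needed at 3T, which is why low-power stochastic decoupling of the weakly coupled carboxylic/amide carbons becomes attractive at very high field.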

  16. Urban exposure to ELF magnetic field due to high-, medium- and low-voltage electricity supply networks.

    Science.gov (United States)

    Bottura, V; Cappio Borlino, M; Carta, N; Cerise, L; Imperial, E

    2009-12-01

    The regional environment protection agency (ARPA) of the Aosta Valley region in northern Italy performed a survey of the magnetic field generated by the high-, medium- and low-voltage power supply network over the entire area of the town of Aosta. The in-house electrical distribution system was, however, not taken into account. The aim of the survey was to evaluate the global population exposure, not simply to assess compliance with legal exposure limits.

  17. COMPARISON OF VIRTUAL FIELDS METHOD, PARALLEL NETWORK MATERIAL MODEL AND FINITE ELEMENT UPDATING FOR MATERIAL PARAMETER DETERMINATION

    Directory of Open Access Journals (Sweden)

    Florian Dirisamer

    2016-12-01

    Full Text Available Extracting material parameters from test specimens is very costly and time-consuming, especially for viscoelastic material models, where the parameters depend on time (frequency, temperature and environmental conditions. Therefore, three different methods for extracting these parameters were tested: firstly, digital image correlation combined with the virtual fields method; secondly, a parallel network material model; and thirdly, finite element updating. These three methods are presented and their results are compared in terms of accuracy and experimental effort.

  18. Comparison of CFBP, FFBP, and RBF Networks in the Field of Crack Detection

    Directory of Open Access Journals (Sweden)

    Dhirendranath Thatoi

    2014-01-01

    Full Text Available The issue of crack detection and diagnosis has gained widespread industrial interest, since cracks and damage affect industrial economic growth; early crack detection is therefore important. In this paper the design tool ANSYS is used to monitor various changes in the vibrational characteristics of thin transverse cracks on a cantilever beam, detecting the crack position and depth, and the results are compared using artificial intelligence techniques. The usage of neural networks is the key point of development in this paper. The three neural networks used are the cascade forward back propagation (CFBP network, the feed forward back propagation (FFBP network, and the radial basis function (RBF network. In the first phase of this paper a theoretical analysis is made, and then finite element analysis is carried out using the commercial software ANSYS. In the second phase the neural networks are trained using the values obtained from a simulated model of the actual cantilever beam in ANSYS. In the last phase a comparative study is made between the data obtained from the neural network techniques and the finite element analysis.
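Of the three network types compared, the radial basis function network is the simplest to sketch: Gaussian basis functions fixed at chosen centers, with output weights fit by linear least squares. The (frequency, crack-depth) data below are synthetic stand-ins for the ANSYS-derived training values, not the paper's data.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    # Gaussian basis: phi[i, j] = exp(-||x_i - c_j||^2 / (2 sigma^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbf_fit(X, y, centers, sigma):
    # Output weights by linear least squares on the fixed basis.
    w, *_ = np.linalg.lstsq(rbf_design(X, centers, sigma), y, rcond=None)
    return w

def rbf_predict(X, centers, sigma, w):
    return rbf_design(X, centers, sigma) @ w

# Synthetic stand-in data: normalized first natural frequency vs. crack depth.
X = np.linspace(0.0, 1.0, 20)[:, None]
y = 1.0 - 0.5 * X[:, 0] ** 2            # toy frequency-drop curve
centers = np.linspace(0.0, 1.0, 5)[:, None]
w = rbf_fit(X, y, centers, sigma=0.3)
pred = rbf_predict(X, centers, 0.3, w)
```

Unlike the back-propagation networks (CFBP, FFBP), the RBF network trains in one linear-algebra step once the centers are fixed, which is part of why the three behave differently on the same crack-detection data.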

  19. Effects of chlorinated drinking water on the xenobiotic metabolism in Cyprinus carpio treated with samples from two Italian municipal networks.

    Science.gov (United States)

    Cirillo, Silvia; Canistro, Donatella; Vivarelli, Fabio; Paolini, Moreno

    2016-09-01

    Drinking water (DW) disinfection represents a milestone of the past century, thanks to its efficacy in reducing the risk of waterborne epidemics. Nevertheless, the process generates disinfection by-products (DBPs), some of which are genotoxic in both animals and humans and carcinogenic in animals. At present, chlorination is one of the most widely employed strategies, but the toxicological effects of several classes of DBPs are unknown. In this investigation, a multidisciplinary approach was employed, comprising the chemical analysis of chlorinated DW samples and the study of their effects on the mixed function oxidases (MFOs) belonging to the superfamily of cytochrome P450-linked monooxygenases in Cyprinus carpio hepatopancreas. The experimental samples derived from the aquifers of two Italian towns (plant 1, river water, and plant 2, spring water) were obtained immediately after disinfection (A) and along the network (R1). Animals treated with plant 1 DW-processed fractions showed a general CYP-associated MFO induction. By contrast, in plant 2 a complex modulation pattern was observed, with a general up-regulation at point A and a marked MFO inactivation in the R1 group, particularly for testosterone metabolism. Together, the toxicity and co-carcinogenicity (i.e. unremitting over-generation of free radicals and increased bioactivation capability) of DW linked to the recorded metabolic changes suggest that prolonged exposure to chlorine-derived disinfectants may produce adverse health effects.

  20. Partial Least Squares and Neural Networks for Quantitative Calibration of Laser-induced Breakdown Spectroscopy (LIBs) of Geologic Samples

    Science.gov (United States)

    Anderson, R. B.; Morris, Richard V.; Clegg, S. M.; Humphries, S. D.; Wiens, R. C.; Bell, J. F., III; Mertzman, S. A.

    2010-01-01

    The ChemCam instrument [1] on the Mars Science Laboratory (MSL) rover will be used to obtain the chemical composition of surface targets within 7 m of the rover using Laser Induced Breakdown Spectroscopy (LIBS). ChemCam analyzes atomic emission spectra (240-800 nm) from a plasma created by a pulsed Nd:KGW 1067 nm laser. The LIBS spectra can be used in a semiquantitative way to rapidly classify targets (e.g., basalt, andesite, carbonate, sulfate, etc.) and in a quantitative way to estimate their major and minor element chemical compositions. Quantitative chemical analysis from LIBS spectra is complicated by a number of factors, including chemical matrix effects [2]. Recent work has shown promising results using multivariate techniques such as partial least squares (PLS) regression and artificial neural networks (ANN) to predict elemental abundances in samples [e.g. 2-6]. To develop, refine, and evaluate analysis schemes for LIBS spectra of geologic materials, we collected spectra of a diverse set of well-characterized natural geologic samples and are comparing the predictive abilities of PLS, cascade correlation ANN (CC-ANN) and multilayer perceptron ANN (MLP-ANN) analysis procedures.
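Partial least squares itself is compact enough to sketch. Below is a minimal NIPALS-style PLS1 (single response) implementation of the general kind used for such calibrations, exercised on synthetic "spectra" rather than real LIBS data; it is an illustration of the technique, not the authors' pipeline.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: returns regression vector B plus centering terms."""
    xmean, ymean = X.mean(0), y.mean()
    Xr, yr = X - xmean, y - ymean
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr                      # weight = X/y covariance direction
        nw = np.linalg.norm(w)
        if nw < 1e-12:                     # y residual exhausted
            break
        w /= nw
        t = Xr @ w                         # scores
        tt = t @ t
        p = Xr.T @ t / tt                  # X loadings
        q = yr @ t / tt                    # y loading
        Xr = Xr - np.outer(t, p)           # deflate
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return B, xmean, ymean

def pls1_predict(X, B, xmean, ymean):
    return (X - xmean) @ B + ymean

# Synthetic "spectra": 6 channels, elemental abundance an exact linear mixture.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 6))
y = X @ np.arange(1.0, 7.0) + 2.0
B, xm, ym = pls1_fit(X, y, n_comp=6)
pred = pls1_predict(X, B, xm, ym)
```

With as many components as channels PLS reduces to ordinary least squares; on real LIBS spectra far fewer components are retained, which is where PLS's robustness to collinear emission channels matters.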

  1. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer.

    Science.gov (United States)

    Douglass, John K; Wehling, Martin F

    2016-12-01

    A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods. Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution.

  2. Okefenokee National Wildlife Refuge Nightjar Survey Network Survey Field Procedures and Completed Data Sheets

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Raw data and survey instructions from the Nightjar Survey Network's nightjar survey on Okefenokee NWR. Nightjar Surveys are standardized population counts conducted...

  3. Low and High-Frequency Field Potentials of Cortical Networks Exhibit Distinct Responses to Chemicals

    Science.gov (United States)

    Neural networks grown on microelectrode arrays (MEAs) have become an important, high-content in vitro assay for assessing neuronal function. MEA experiments typically examine high-frequency (HF, >200 Hz) spikes and bursts, which can be used to discriminate between differ...

  4. First Transmitted Hyperspectral Light Measurements and Cloud Properties from Recent Field Campaign Sampling Clouds Under Biomass Burning Aerosol

    Science.gov (United States)

    Leblanc, S.; Redemann, Jens; Shinozuka, Yohei; Flynn, Connor J.; Segal Rozenhaimer, Michal; Kacenelenbogen, Meloe Shenandoah; Pistone, Kristina Marie Myers; Schmidt, Sebastian; Cochrane, Sabrina

    2016-01-01

    We present a first view of data collected during a recent field campaign aimed at measuring biomass burning aerosol above clouds from airborne platforms. The NASA ObseRvations of CLouds above Aerosols and their intEractionS (ORACLES) field campaign recently concluded its first deployment sampling clouds and the overlying aerosol layer from the airborne platform NASA P3. We present results from the Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research (4STAR), in conjunction with the Solar Spectral Flux Radiometers (SSFR). During this deployment, 4STAR sampled transmitted solar light via either direct solar beam measurements or scattered light measurements, enabling the measurement of aerosol optical thickness and the retrieval of information on aerosol particles in addition to overlying cloud properties. We focus on the zenith-viewing scattered light measurements, which are used to retrieve cloud optical thickness, effective radius, and thermodynamic phase of clouds under a biomass burning layer. The biomass burning aerosol layer present above the clouds is the cause of potential bias in retrieved cloud optical depth and effective radius from satellites. We contrast the typical reflection-based approach used by satellites to the transmission-based approach used by 4STAR during ORACLES for retrieving cloud properties. It is suspected that these differing approaches will yield a change in retrieved properties since light transmitted through clouds is sensitive to a different cloud volume than reflected light at cloud top. We offer a preliminary view of the implications of these differences in sampling volumes for the calculation of cloud radiative effects (CRE).

  5. A FLUX-LIMITED SAMPLE OF z ≈ 1 Lyα EMITTING GALAXIES IN THE CHANDRA DEEP FIELD SOUTH

    Energy Technology Data Exchange (ETDEWEB)

    Barger, A. J.; Wold, I. G. B. [Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706 (United States); Cowie, L. L. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States)

    2012-04-20

    We describe a method for obtaining a flux-limited sample of Lyα emitters from Galaxy Evolution Explorer (GALEX) grism data. We show that the multiple GALEX grism images can be converted into a three-dimensional (two spatial axes and one wavelength axis) data cube. The wavelength slices may then be treated as narrowband images and searched for emission-line galaxies. For the GALEX NUV grism data, the method provides a Lyα flux-limited sample over the redshift range z = 0.67-1.16. We test the method on the Chandra Deep Field South field, where we find 28 Lyα emitters with faint continuum magnitudes (NUV > 22) that are not present in the GALEX pipeline sample. We measure the completeness by adding artificial emitters and measuring the fraction recovered. We find that we have an 80% completeness above a Lyα flux of 10⁻¹⁵ erg cm⁻² s⁻¹. We use the UV spectra and the available X-ray data and optical spectra to estimate the fraction of active galactic nuclei in the selection. We report the first detection of a giant Lyα blob at z < 1, though we find that these objects are much less common at z = 1 than at z = 3. Finally, we compute limits on the z ≈ 1 Lyα luminosity function and confirm that there is a dramatic evolution in the luminosity function over the redshift range z = 0-1.
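The completeness measurement described above, injecting artificial emitters and counting the fraction recovered, can be caricatured in a few lines; here "detection" is simply a noisy measured flux clearing the survey threshold, and all numbers are illustrative rather than the paper's selection function.

```python
import random

def completeness(flux, threshold, noise, trials=10000, seed=1):
    """Fraction of injected artificial emitters recovered above threshold,
    modeling the measured flux as true flux plus Gaussian noise."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if flux + rng.gauss(0.0, noise) >= threshold)
    return hits / trials
```

Sweeping the injected flux over a grid and tabulating this fraction per flux bin yields the kind of completeness curve from which an "80% complete above a given flux" statement is read off.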

  6. SU-F-E-09: Respiratory Signal Prediction Based On Multi-Layer Perceptron Neural Network Using Adjustable Training Samples

    Energy Technology Data Exchange (ETDEWEB)

    Sun, W; Jiang, M; Yin, F [Duke University Medical Center, Durham, NC (United States)

    2016-06-15

    Purpose: Dynamic tracking of moving organs, such as lung and liver tumors, under radiation therapy requires prediction of organ motion prior to delivery. The displacement of a moving organ can change considerably because the respiratory pattern varies across different periods. This study aims to reduce the influence of those changes using adjustable training signals and a multi-layer perceptron neural network (ASMLP). Methods: Respiratory signals obtained using a Real-time Position Management (RPM) device were used for this study. The ASMLP uses two multi-layer perceptron neural networks (MLPs) to infer respiration position alternately, and the training samples are updated over time. Firstly, a Savitzky-Golay finite impulse response smoothing filter was established to smooth the respiratory signal. Secondly, two identical MLPs were developed to estimate respiratory position from its previous positions separately. Weights and thresholds were updated to minimize network errors according to the Levenberg-Marquardt optimization algorithm through the backward propagation method. Finally, MLP 1 was used to predict the 120∼150 s respiration positions using the 0∼120 s training signals. At the same time, MLP 2 was trained using the 30∼150 s training signals and then used to predict the 150∼180 s respiration positions. Prediction proceeded in this alternating fashion until the end of the signal. Results: In this experiment, the two methods were used to predict 2.5 minutes of respiratory signals. For predicting 1 s ahead of response time, the correlation coefficient was improved from 0.8250 (MLP method) to 0.8856 (ASMLP method). Besides, a 30% improvement in mean absolute error between MLP (0.1798 on average) and ASMLP (0.1267 on average) was achieved. For predicting 2 s ahead of response time, the correlation coefficient was improved from 0.61415 to 0.7098. The mean absolute error of the MLP method (0.3111 on average) was reduced by 35% using the ASMLP method (0.2020 on average). Conclusion: The preliminary results
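The core idea, retraining a predictor on a sliding window of the most recent respiratory samples before each prediction block, can be sketched with a simple least-squares autoregressive model standing in for the MLPs (the alternating two-network bookkeeping and Levenberg-Marquardt training are omitted for brevity); the idealized sinusoidal "breathing" trace is illustrative only.

```python
import numpy as np

def ar_fit(signal, order):
    """Least-squares autoregressive fit: x[t] ~ w . x[t-order:t]."""
    X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    w, *_ = np.linalg.lstsq(X, signal[order:], rcond=None)
    return w

def predict_ahead(history, w, steps):
    """Recursively roll the fitted model forward from the end of history."""
    buf = list(history[-len(w):])
    out = []
    for _ in range(steps):
        nxt = float(np.dot(w, buf))
        out.append(nxt)
        buf = buf[1:] + [nxt]
    return out

# Idealized 4 s breathing cycle sampled at 10 Hz. As in the sliding
# scheme above, the model is refit on the most recent training window
# before each prediction block.
t = np.arange(0.0, 30.0, 0.1)
sig = np.sin(2.0 * np.pi * t / 4.0)
w = ar_fit(sig[:200], order=10)          # train on the latest 20 s window
pred = predict_ahead(sig[:200], w, steps=20)   # predict the next 2 s
```

On a noiseless sinusoid the autoregressive extrapolation is essentially exact; the value of the adjustable-window idea shows up on real traces, where the breathing pattern drifts between windows.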

  7. Preliminary Evaluation of the Field and Laboratory Emission Cell (FLEC) for Sampling Attribution Signatures from Building Materials

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, Scott D.; He, Lijian; Wahl, Jon H.

    2012-08-30

    This study provides a preliminary evaluation of the Field and Laboratory Emission Cell (FLEC) for its suitability for sampling building materials for toxic compounds and their associated impurities and residues that might remain after a terrorist chemical attack. Chemical warfare (CW) agents and toxic industrial chemicals were represented by a range of test probes that included CW surrogates. The test probes encompassed the acid-base properties, volatilities, and polarities of the expected chemical agents and residual compounds. Results indicated that dissipation of the test probes depended heavily on the underlying material. Near complete dissipation of almost all test probes occurred from galvanized stainless steel within 3.0 hrs, whereas far stronger retention with concomitant slower release was observed for vinyl composition floor tiles. The test probes displayed intermediate permanence on Teflon. FLEC sampling was further evaluated by profiling residues remaining after the evaporation of 2-chloroethyl ethyl sulfide, a sulfur mustard simulant. This study lays the groundwork for the eventual goal of applying this sampling approach for collection of forensic attribution signatures that remain after a terrorist chemical attack.

  8. The psychometric properties of the personality inventory for DSM-5 in an APA DSM-5 field trial sample.

    Science.gov (United States)

    Quilty, Lena C; Ayearst, Lindsay; Chmielewski, Michael; Pollock, Bruce G; Bagby, R Michael

    2013-06-01

    Section III of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) includes a hybrid model of personality pathology, in which dimensional personality traits are used to derive one of seven categorical personality disorder diagnoses. The Personality Inventory for DSM-5 (PID-5) was developed by the DSM-5 Personality and Personality Disorders workgroup and their consultants to produce a freely available instrument to assess the personality traits within this new system. To date, the psychometric properties of the PID-5 have been evaluated primarily in undergraduate student and community adult samples. In the current investigation, we extend this line of research to a sample of psychiatric patients who participated in the APA DSM-5 Field Trial (Centre for Addiction and Mental Health site). A total of 201 psychiatric patients (102 men, 99 women) completed the PID-5 and the Revised NEO Personality Inventory (NEO PI-R). The internal consistencies of the PID-5 domain and facet trait scales were acceptable. Results supported the unidimensional structure of all trait scales but one, and the convergence between the PID-5 and analogous NEO PI-R scales. Evidence for discriminant validity was mixed. Overall, the current investigation provides support for the psychometric properties of this diagnostic instrument in psychiatric samples.

  9. Network Approach to Understanding Emotion Dynamics in Relation to Childhood Trauma and Genetic Liability to Psychopathology: Replication of a Prospective Experience Sampling Analysis

    Science.gov (United States)

    Hasmi, Laila; Drukker, Marjan; Guloksuz, Sinan; Menne-Lothmann, Claudia; Decoster, Jeroen; van Winkel, Ruud; Collip, Dina; Delespaul, Philippe; De Hert, Marc; Derom, Catherine; Thiery, Evert; Jacobs, Nele; Rutten, Bart P. F.; Wichers, Marieke; van Os, Jim

    2017-01-01

    Background: The network analysis of intensive time series data collected using the Experience Sampling Method (ESM) may provide vital information in gaining insight into the link between emotion regulation and vulnerability to psychopathology. The aim of this study was to apply the network approach to investigate whether genetic liability (GL) to psychopathology and childhood trauma (CT) are associated with the network structure of the emotions “cheerful,” “insecure,” “relaxed,” “anxious,” “irritated,” and “down”—collected using the ESM method. Methods: Using data from a population-based sample of twin pairs and siblings (704 individuals), we examined whether momentary emotion network structures differed across strata of CT and GL. GL was determined empirically using the level of psychopathology in monozygotic and dizygotic co-twins. Network models were generated using multilevel time-lagged regression analysis and were compared across three strata (low, medium, and high) of CT and GL, respectively. Permutations were utilized to calculate p values and compare regression coefficients, density, and centrality indices. Regression coefficients were presented as connections, while variables represented the nodes in the network. Results: In comparison to the low GL stratum, the high GL stratum had significantly denser overall (p = 0.018) and negative affect networks. The present finding partially replicates an earlier analysis, suggesting it may be instructive to model negative emotional dynamics as a function of genetic influence. PMID:29163289
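A single-level caricature of the time-lagged regression networks described above: each emotion at beep t is regressed on all emotions at beep t-1, and the coefficient matrix is read as a directed network. The paper fits multilevel models across twins; this least-squares version on simulated data is for illustration only.

```python
import numpy as np

def lagged_network(data):
    """Regress each variable at t on all variables at t-1; the coefficient
    matrix is read as a directed network (row = outcome, column = predictor)."""
    X = np.c_[data[:-1], np.ones(len(data) - 1)]   # lag-1 values + intercept
    rows = []
    for j in range(data.shape[1]):
        b, *_ = np.linalg.lstsq(X, data[1:, j], rcond=None)
        rows.append(b[:-1])                         # drop the intercept
    return np.array(rows)

# Simulate two coupled "emotions" with a known lag-1 influence matrix A.
rng = np.random.default_rng(42)
A = np.array([[0.5, 0.2],
              [0.0, 0.3]])
data = np.zeros((2000, 2))
for t in range(1, 2000):
    data[t] = A @ data[t - 1] + rng.normal(scale=0.1, size=2)

est = lagged_network(data)   # should approximately recover A
```

Network "density" in this framing is simply a summary of the magnitudes of the recovered connections, which is what the strata comparison in the abstract is testing.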

  10. Assessment of foetal exposure to the homogeneous magnetic field harmonic spectrum generated by electricity transmission and distribution networks.

    Science.gov (United States)

    Fiocchi, Serena; Liorni, Ilaria; Parazzini, Marta; Ravazzani, Paolo

    2015-04-01

    During the last decades studies addressing the effects of exposure to Extremely Low Frequency Electromagnetic Fields (ELF-EMF) have pointed out a possible link between those fields emitted by power lines and childhood leukaemia. They have also stressed the importance of also including in the assessment the contribution of frequency components, namely harmonics, other than the fundamental one. Based on the spectrum of supply voltage networks allowed by the European standard for electricity quality assessment, in this study the exposure of high-resolution three-dimensional models of foetuses to the whole harmonic content of a uniform magnetic field with a fundamental frequency of 50 Hz, was assessed. The results show that the main contribution in terms of induced electric fields to the foetal exposure is given by the fundamental frequency component. The harmonic components add some contributions to the overall level of electric fields, however, due to the extremely low permitted amplitude of the harmonic components with respect to the fundamental, their amplitudes are low. The level of the induced electric field is also much lower than the limits suggested by the guidelines for general public exposure, when the amplitude of the incident magnetic field is set at the maximum permitted level.

  11. Assessment of Foetal Exposure to the Homogeneous Magnetic Field Harmonic Spectrum Generated by Electricity Transmission and Distribution Networks

    Directory of Open Access Journals (Sweden)

    Serena Fiocchi

    2015-04-01

    Full Text Available During the last decades studies addressing the effects of exposure to Extremely Low Frequency Electromagnetic Fields (ELF-EMF) have pointed out a possible link between those fields emitted by power lines and childhood leukaemia. They have also stressed the importance of also including in the assessment the contribution of frequency components, namely harmonics, other than the fundamental one. Based on the spectrum of supply voltage networks allowed by the European standard for electricity quality assessment, in this study the exposure of high-resolution three-dimensional models of foetuses to the whole harmonic content of a uniform magnetic field with a fundamental frequency of 50 Hz, was assessed. The results show that the main contribution in terms of induced electric fields to the foetal exposure is given by the fundamental frequency component. The harmonic components add some contributions to the overall level of electric fields, however, due to the extremely low permitted amplitude of the harmonic components with respect to the fundamental, their amplitudes are low. The level of the induced electric field is also much lower than the limits suggested by the guidelines for general public exposure, when the amplitude of the incident magnetic field is set at the maximum permitted level.

  12. The modeling of attraction characteristics regarding passenger flow in urban rail transit network based on field theory.

    Science.gov (United States)

    Li, Man; Wang, Yanhui; Jia, Limin

    2017-01-01

    Aimed at the complicated problems of attraction characteristics regarding passenger flow in urban rail transit networks, the concept of a gravity field of passenger flow is proposed in this paper. We establish computation methods for field strength and potential energy to reveal the potential attraction relationships among stations from the perspective of the collection and distribution of passenger flow and the topology of the network. For the computation of field strength, an optimum path concept is proposed to define the betweenness centrality parameter. For the computation of potential energy, the Composite Simpson's Rule formula is applied to obtain a solution to the function. Taking Line 10 of the Beijing Subway as a practical example, a simulation and verification analysis is conducted, and the results show the following. Firstly, the greater the field strength between two stations, the stronger the passenger flow attraction and the greater the probability that the largest sectional passenger flow forms there. Secondly, the greatest passenger flow volume and circulation capacity occur between two zones of high potential energy.
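The Composite Simpson's Rule used for the potential-energy integral has the standard form below; this is a generic implementation, not the paper's specific integrand.

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals on [a, b]."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))  # odd nodes
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))  # interior even nodes
    return s * h / 3.0
```

Simpson's rule is exact for polynomials up to degree three, so for instance it recovers the integral of x² over [0, 3] (which is 9) exactly even with only four subintervals.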

  13. The modeling of attraction characteristics regarding passenger flow in urban rail transit network based on field theory.

    Directory of Open Access Journals (Sweden)

    Man Li

    Full Text Available Aimed at the complicated problems of attraction characteristics regarding passenger flow in urban rail transit networks, the concept of a gravity field of passenger flow is proposed in this paper. We establish computation methods for field strength and potential energy to reveal the potential attraction relationships among stations from the perspective of the collection and distribution of passenger flow and the topology of the network. For the computation of field strength, an optimum path concept is proposed to define the betweenness centrality parameter. For the computation of potential energy, the Composite Simpson's Rule formula is applied to obtain a solution to the function. Taking Line 10 of the Beijing Subway as a practical example, a simulation and verification analysis is conducted, and the results show the following. Firstly, the greater the field strength between two stations, the stronger the passenger flow attraction and the greater the probability that the largest sectional passenger flow forms there. Secondly, the greatest passenger flow volume and circulation capacity occur between two zones of high potential energy.

  14. Application of a series of artificial neural networks to on-site quantitative analysis of lead into real soil samples by laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    El Haddad, J. [Univ. Bordeaux, LOMA, CNRS UMR 5798, F-33400 Talence (France); Bruyère, D. [BRGM, Service Métrologie, Monitoring et Analyse, 3 av. C. Guillemin, B.P 36009, 45060 Orléans Cedex (France); Ismaël, A.; Gallou, G. [IVEA Solution, Centre Scientifique d' Orsay, Bât 503, 91400 Orsay (France); Laperche, V.; Michel, K. [BRGM, Service Métrologie, Monitoring et Analyse, 3 av. C. Guillemin, B.P 36009, 45060 Orléans Cedex (France); Canioni, L. [Univ. Bordeaux, LOMA, CNRS UMR 5798, F-33400 Talence (France); Bousquet, B., E-mail: bruno.bousquet@u-bordeaux.fr [Univ. Bordeaux, LOMA, CNRS UMR 5798, F-33400 Talence (France)

    2014-07-01

    Artificial neural networks were applied to process data from on-site LIBS analysis of soil samples. A first artificial neural network made it possible to retrieve the relative amounts of silicate, calcareous and ore matrices in soils. As a consequence, each soil sample was correctly located inside the ternary diagram characterized by these three matrices, as verified by ICP-AES. Then a series of artificial neural networks were applied to quantify lead in the soil samples. More precisely, two models were designed for classification purposes according to both the type of matrix and the range of lead concentrations. Then, three quantitative models were applied locally to three data subsets. This complete approach reached a relative error of prediction close to 20%, considered satisfactory in the case of on-site analysis. - Highlights: • Application of a series of artificial neural networks (ANN) to quantitative LIBS • Matrix-based classification of the soil samples by ANN • Concentration-based classification of the soil samples by ANN • Series of quantitative ANN models dedicated to the analysis of data subsets • Relative error of prediction lower than 20% for LIBS analysis of soil samples.
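The cascade described above, a classifier routing each spectrum to a locally trained quantification model, can be sketched as follows. A nearest-centroid classifier and per-class linear fits stand in for the paper's neural networks, and the one-dimensional data are synthetic.

```python
import numpy as np

def fit_cascade(X, labels, y):
    """Per-class linear models keyed by class; the centroid is kept for routing."""
    models = {}
    for c in set(labels.tolist()):
        m = labels == c
        A = np.c_[X[m], np.ones(m.sum())]           # add intercept column
        coef, *_ = np.linalg.lstsq(A, y[m], rcond=None)
        models[c] = (X[m].mean(0), coef)
    return models

def predict_cascade(x, models):
    """Route to the nearest class centroid, then apply that class's model."""
    c = min(models, key=lambda k: np.linalg.norm(x - models[k][0]))
    return float(np.r_[x, 1.0] @ models[c][1])

# Two synthetic "matrix types" with different linear concentration laws.
X = np.r_[np.linspace(-1.0, 1.0, 10), np.linspace(9.0, 11.0, 10)][:, None]
labels = np.array([0] * 10 + [1] * 10)
y = np.r_[2.0 * np.linspace(-1.0, 1.0, 10) + 1.0,
          -np.linspace(9.0, 11.0, 10) + 30.0]
models = fit_cascade(X, labels, y)
```

The design point is the same as in the abstract: a single global model must absorb matrix effects, while classification followed by local models lets each sub-model fit a narrower, better-behaved regime.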

  15. Dynamic solid phase microextraction for sampling of airborne sarin with gas chromatography-mass spectrometry for rapid field detection and quantification.

    Science.gov (United States)

    Hook, Gary L; Jackson Lepage, Carmela; Miller, Stephen I; Smith, Philip A

    2004-08-01

    A portable dynamic air sampler and solid phase microextraction were used to simultaneously detect, identify, and quantify airborne sarin, with immediate analysis of samples using a field-portable gas chromatography-mass spectrometry system. A mathematical model combining the mass of sarin trapped, the linear air velocity past the exposed sampling fiber, and the sample duration allowed concentration estimates to be calculated. For organizations with suitable field-portable instrumentation, these methods are potentially useful for rapid onsite detection and quantification of high-concern analytes, either through direct environmental sampling or through sampling of air collected in bags.
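The concentration estimate from mass trapped, linear air velocity, and sample duration presumably follows a mass balance of the form sketched below. The published model likely includes fiber uptake kinetics, so treat this as a simplified illustration with a hypothetical effective sampling cross-section.

```python
def concentration_estimate(mass_ng, velocity_cm_s, area_cm2, duration_s):
    """Concentration (ng/cm^3), assuming every analyte molecule crossing a
    hypothetical effective sampling cross-section is trapped on the fiber."""
    swept_volume = velocity_cm_s * area_cm2 * duration_s   # cm^3 of air sampled
    return mass_ng / swept_volume
```

For example, 50 ng trapped at 10 cm/s through a 0.1 cm² cross-section over 100 s corresponds to 0.5 ng/cm³.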

  16. Development and field validation of a community-engaged particulate matter air quality monitoring network in Imperial, California, USA.

    Science.gov (United States)

    Carvlin, Graeme N; Lugo, Humberto; Olmedo, Luis; Bejarano, Ester; Wilkie, Alexa; Meltzer, Dan; Wong, Michelle; King, Galatea; Northcross, Amanda; Jerrett, Michael; English, Paul B; Hammond, Donald; Seto, Edmund

    2017-12-01

    The Imperial County Community Air Monitoring Network was developed as part of a community-engaged research study to provide real-time particulate matter (PM) air quality information at a high spatial resolution in Imperial County, California. The network augmented the few existing regulatory monitors and increased monitoring near susceptible populations. Monitors were both calibrated and field validated, a key component of evaluating the quality of the data produced by the community monitoring network. This paper examines the performance of a customized version of the low-cost Dylos optical particle counter used in the community air monitors compared with both PM 2.5 and PM 10 (particulate matter with aerodynamic diameters monitors (BAMs) and federal reference method (FRM) gravimetric filters at a collocation site in the study area. A conversion equation was developed that estimates particle mass concentrations from the native Dylos particle counts, taking into account relative humidity. The R 2 for converted hourly averaged Dylos mass measurements versus a PM 2.5 BAM was 0.79 and that versus a PM 10 BAM was 0.78. The performance of the conversion equation was evaluated at six other sites with collocated PM 2.5 environmental beta-attenuation monitors (EBAMs) located throughout Imperial County. The agreement of the Dylos with the EBAMs was moderate to high (R 2 = 0.35-0.81). The performance of low-cost air quality sensors in community networks is currently not well documented. This paper provides a methodology for quantifying the performance of a next-generation Dylos PM sensor used in the Imperial County Community Air Monitoring Network. This air quality network provides data at a much finer spatial and temporal resolution than has previously been possible with government monitoring efforts. 
Once calibrated and validated, these high-resolution data may provide more information on susceptible populations, assist in the identification of air pollution hotspots, and
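A count-to-mass conversion of the kind described above can be sketched as follows. This is a minimal illustration, not the paper's fitted equation: the hygroscopic-growth form, the kappa value, and the slope/intercept are all assumed placeholders that would in practice be fitted against collocated BAM data.

```python
def rh_growth_factor(rh_percent, kappa=0.4):
    """Hypothetical hygroscopic growth correction: particles swell at high
    relative humidity, inflating optical counts relative to dry mass.
    kappa is an assumed growth parameter, not the paper's fitted value."""
    rh = min(rh_percent, 95.0) / 100.0  # cap to avoid divergence near 100% RH
    return 1.0 + kappa * rh / (1.0 - rh)

def counts_to_pm25(counts_per_ft3, rh_percent, slope=0.00024, intercept=1.5):
    """Convert Dylos small-particle counts to an approximate PM2.5 mass
    concentration (ug/m^3). slope and intercept are illustrative placeholders
    for coefficients that would come from collocation with a BAM."""
    dry_counts = counts_per_ft3 / rh_growth_factor(rh_percent)
    return slope * dry_counts + intercept
```

With this form, a reading at 90% RH is deflated relative to the same raw count at 10% RH, which mirrors the qualitative role of the relative-humidity term in the paper's conversion.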

  17. Integrating silicon nanowire field effect transistor, microfluidics and air sampling techniques for real-time monitoring biological aerosols.

    Science.gov (United States)

    Shen, Fangxia; Tan, Miaomiao; Wang, Zhenxing; Yao, Maosheng; Xu, Zhenqiang; Wu, Yan; Wang, Jindong; Guo, Xuefeng; Zhu, Tong

    2011-09-01

    Numerous threats from biological aerosol exposures, such as those from H1N1 influenza, SARS, bird flu, and bioterrorism activities, necessitate the development of a real-time bioaerosol sensing system, which remains a long-standing challenge in the field. Here, we developed a real-time monitoring system for airborne influenza H3N2 viruses by integrating electronically addressable silicon nanowire (SiNW) sensor devices, microfluidics and bioaerosol-to-hydrosol air sampling techniques. When airborne influenza H3N2 virus samples were collected and delivered to antibody-modified SiNW devices, discrete nanowire conductance changes were observed within seconds. In contrast, the conductance levels remained relatively unchanged when indoor air or clean air samples were delivered. A 10-fold increase in virus concentration was found to give rise to about a 20-30% increase in the sensor response. The selectivity of the sensing device was successfully demonstrated using H1N1 viruses and house dust allergens. From the simulated aerosol release to the detection, we observed a time scale of 1-2 min. Quantitative polymerase chain reaction (qPCR) tests revealed that higher virus concentrations in the air samples generally corresponded to higher conductance levels in the SiNW devices. In addition, the display of detection data on remote platforms such as cell phones and computers was also successfully demonstrated with a wireless module. The work here is expected to lead to innovative methods for biological aerosol monitoring, and further improvements in each of the integrated elements could extend the system to real-world applications.

  18. Lateral flow immunoassay for on-site detection of Xanthomonas arboricola pv. pruni in symptomatic field samples

    Science.gov (United States)

    López-Soriano, Pablo; Noguera, Patricia; Gorris, María Teresa; Puchades, Rosa; Maquieira, Ángel; Marco-Noales, Ester; López, María M.

    2017-01-01

    Xanthomonas arboricola pv. pruni is a quarantine pathogen and the causal agent of the bacterial spot disease of stone fruits and almond, a major threat to Prunus species. Rapid and specific detection methods are essential to improve disease management, and therefore a prototype of a lateral flow immunoassay (LFIA) was designed for the detection of X. arboricola pv. pruni in symptomatic field samples. It was developed by producing polyclonal antibodies which were then combined with carbon nanoparticles and assembled on nitrocellulose strips. The specificity of the LFIA was tested against 87 X. arboricola pv. pruni strains from different countries worldwide, 47 strains of other Xanthomonas species and 14 strains representing other bacterial genera. All X. arboricola pv. pruni strains were detected and cross-reactions were observed only with four strains of X. arboricola pv. corylina, a hazelnut pathogen that does not share habitat with X. arboricola pv. pruni. The sensitivity of the LFIA was assessed with suspensions from pure cultures of three X. arboricola pv. pruni strains and with spiked leaf extracts prepared from four hosts inoculated with this pathogen (almond, apricot, Japanese plum and peach). The limit of detection observed with both pure cultures and spiked samples was 10⁴ CFU mL⁻¹. To demonstrate the accuracy of the test, 205 samples naturally infected with X. arboricola pv. pruni and 113 samples collected from healthy plants of several different Prunus species were analyzed with the LFIA. Results were compared with those obtained by plate isolation and real-time PCR and a high correlation was found among techniques. Therefore, we propose this LFIA as a screening tool that allows a rapid and reliable diagnosis of X. arboricola pv. pruni in symptomatic plants. PMID:28448536

  19. Lateral flow immunoassay for on-site detection of Xanthomonas arboricola pv. pruni in symptomatic field samples.

    Science.gov (United States)

    López-Soriano, Pablo; Noguera, Patricia; Gorris, María Teresa; Puchades, Rosa; Maquieira, Ángel; Marco-Noales, Ester; López, María M

    2017-01-01

    Xanthomonas arboricola pv. pruni is a quarantine pathogen and the causal agent of the bacterial spot disease of stone fruits and almond, a major threat to Prunus species. Rapid and specific detection methods are essential to improve disease management, and therefore a prototype of a lateral flow immunoassay (LFIA) was designed for the detection of X. arboricola pv. pruni in symptomatic field samples. It was developed by producing polyclonal antibodies which were then combined with carbon nanoparticles and assembled on nitrocellulose strips. The specificity of the LFIA was tested against 87 X. arboricola pv. pruni strains from different countries worldwide, 47 strains of other Xanthomonas species and 14 strains representing other bacterial genera. All X. arboricola pv. pruni strains were detected and cross-reactions were observed only with four strains of X. arboricola pv. corylina, a hazelnut pathogen that does not share habitat with X. arboricola pv. pruni. The sensitivity of the LFIA was assessed with suspensions from pure cultures of three X. arboricola pv. pruni strains and with spiked leaf extracts prepared from four hosts inoculated with this pathogen (almond, apricot, Japanese plum and peach). The limit of detection observed with both pure cultures and spiked samples was 10⁴ CFU mL⁻¹. To demonstrate the accuracy of the test, 205 samples naturally infected with X. arboricola pv. pruni and 113 samples collected from healthy plants of several different Prunus species were analyzed with the LFIA. Results were compared with those obtained by plate isolation and real-time PCR and a high correlation was found among techniques. Therefore, we propose this LFIA as a screening tool that allows a rapid and reliable diagnosis of X. arboricola pv. pruni in symptomatic plants.

  20. Lateral flow immunoassay for on-site detection of Xanthomonas arboricola pv. pruni in symptomatic field samples.

    Directory of Open Access Journals (Sweden)

    Pablo López-Soriano

    Full Text Available Xanthomonas arboricola pv. pruni is a quarantine pathogen and the causal agent of the bacterial spot disease of stone fruits and almond, a major threat to Prunus species. Rapid and specific detection methods are essential to improve disease management, and therefore a prototype of a lateral flow immunoassay (LFIA) was designed for the detection of X. arboricola pv. pruni in symptomatic field samples. It was developed by producing polyclonal antibodies which were then combined with carbon nanoparticles and assembled on nitrocellulose strips. The specificity of the LFIA was tested against 87 X. arboricola pv. pruni strains from different countries worldwide, 47 strains of other Xanthomonas species and 14 strains representing other bacterial genera. All X. arboricola pv. pruni strains were detected and cross-reactions were observed only with four strains of X. arboricola pv. corylina, a hazelnut pathogen that does not share habitat with X. arboricola pv. pruni. The sensitivity of the LFIA was assessed with suspensions from pure cultures of three X. arboricola pv. pruni strains and with spiked leaf extracts prepared from four hosts inoculated with this pathogen (almond, apricot, Japanese plum and peach). The limit of detection observed with both pure cultures and spiked samples was 10⁴ CFU mL⁻¹. To demonstrate the accuracy of the test, 205 samples naturally infected with X. arboricola pv. pruni and 113 samples collected from healthy plants of several different Prunus species were analyzed with the LFIA. Results were compared with those obtained by plate isolation and real-time PCR and a high correlation was found among techniques. Therefore, we propose this LFIA as a screening tool that allows a rapid and reliable diagnosis of X. arboricola pv. pruni in symptomatic plants.

  1. Mean-field description and propagation of chaos in networks of Hodgkin-Huxley and FitzHugh-Nagumo neurons

    Science.gov (United States)

    2012-01-01

    We derive the mean-field equations arising as the limit of a network of interacting spiking neurons, as the number of neurons goes to infinity. The neurons belong to a fixed number of populations and are represented either by the Hodgkin-Huxley model or by one of its simplified versions, the FitzHugh-Nagumo model. The synapses between neurons are either electrical or chemical. The network is assumed to be fully connected. The maximum conductances vary randomly. Under the condition that all neurons' initial conditions are drawn independently from the same law that depends only on the population they belong to, we prove that a propagation of chaos phenomenon takes place, namely that in the mean-field limit, any finite number of neurons become independent and, within each population, have the same probability distribution. This probability distribution is a solution of a set of implicit equations, either nonlinear stochastic differential equations resembling the McKean-Vlasov equations or non-local partial differential equations resembling the McKean-Vlasov-Fokker-Planck equations. We prove the well-posedness of the McKean-Vlasov equations, i.e. the existence and uniqueness of a solution. We also show the results of some numerical experiments that indicate that the mean-field equations are a good representation of the mean activity of a finite-size network, even for modest sizes. These experiments also indicate that the McKean-Vlasov-Fokker-Planck equations may be a good way to understand the mean-field dynamics through, e.g. a bifurcation analysis. Mathematics Subject Classification (2000): 60F99, 60B10, 92B20, 82C32, 82C80, 35Q80. PMID:22657695
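The finite-size experiments described above can be sketched with a minimal Euler-Maruyama integration of all-to-all electrically coupled FitzHugh-Nagumo neurons, where the coupling acts only through the empirical mean potential — the structure that makes the mean-field (McKean-Vlasov) limit plausible. Parameter values and the noise model are illustrative assumptions, not those of the paper.

```python
import random

def simulate_fhn_network(n=200, steps=2000, dt=0.05, J=0.5, sigma=0.1, seed=1):
    """Euler-Maruyama simulation of n all-to-all electrically coupled
    FitzHugh-Nagumo neurons. The coupling term J*(v_i - vbar) depends on the
    other neurons only through the empirical mean vbar, so as n grows the
    network approaches its mean-field description. Returns the trajectory
    of vbar. Parameters are illustrative, not the paper's."""
    rng = random.Random(seed)
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5  # standard FitzHugh-Nagumo constants
    v = [rng.gauss(0.0, 0.5) for _ in range(n)]  # i.i.d. initial conditions
    w = [rng.gauss(0.0, 0.5) for _ in range(n)]
    means = []
    for _ in range(steps):
        vbar = sum(v) / n  # empirical mean drives the coupling
        for i in range(n):
            dv = v[i] - v[i] ** 3 / 3.0 - w[i] + I - J * (v[i] - vbar)
            dw = eps * (v[i] + a - b * w[i])
            v[i] += dt * dv + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            w[i] += dt * dw
        means.append(vbar)
    return means
```

For increasing n, individual trajectories decorrelate (propagation of chaos) and the fluctuations of vbar around the deterministic mean-field solution shrink, which is what the paper's numerical experiments probe.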

  2. Rainfall as a landslides triggering factor in NE of Algeria and hydrological responses: Field monitoring in sample site (East of Constantine).

    Science.gov (United States)

    Nabil, Manchar; Chaouki, Benabbas

    2017-04-01

    Field monitoring is an important tool to evaluate, identify and characterise landslide events. Northeast Algeria is characterised by widespread landslides, in particular in the region of Constantine. This work presents results from one sample site, representative of the study area and characterised by a particular geological structure, where field monitoring has been carried out over adequate time intervals. Rainfall is considered the most common trigger of landslides (Crozier, 1986; Corominas, 2000). Geologically, the Tafrent zone is an area whose outcrops form a sort of "melange structure" made up of blocks and fragments of sandstones, clays, shales and marls in a prevalently clayey matrix. The morphology shows elevations ranging from 850 m to 1100 m, with moderately steep gradients. In the study area, a piezometer monitoring network and a rain gauge give indications about the hydrological response of the slope in the very area where a large infrastructure (an E/W highway segment) has recently been constructed. The measured piezometric levels and rainfall allow some relationships between them (cumulative rainfall and piezometric levels) to be identified. Piezometric levels increase especially during long pluviometric periods (the winter season), showing a relationship with changes in cumulative rainfall. These represent necessary, but not sufficient, conditions for critical stability in the considered area, in relation to possible scenarios of widespread landslide events. The results obtained from this study can be useful in many ways, such as helping local authorities to plan future development activities. Keywords: Rainfall, Widespread Landslides, Piezometric levels, Tafrent.

  3. Information Potential Fields Navigation in Wireless Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yong Qi

    2011-05-01

    Full Text Available As wireless sensor networks (WSNs) are increasingly being deployed in important applications, it becomes imperative that we consider application requirements in in-network processing. We intend to use a WSN to aid information querying and navigation within a dynamic and real-time environment. We propose a novel method that relies on the heat diffusion equation to carry out the navigation process conveniently and easily. From the perspective of theoretical analysis, our proposed method holds under a weaker constraint condition. We use multiple scales to reach the goal of accurate navigation, and present a multi-scale gradient descent method to satisfy users' requirements in WSNs. Formula derivations and simulations show that the method accurately and efficiently solves typical sensor network information navigation problems. At the same time, the structure of the heat diffusion equation allows more flexibility and adaptability in search algorithm design.

  4. Bidirectional Long Short-Term Memory Network with a Conditional Random Field Layer for Uyghur Part-Of-Speech Tagging

    Directory of Open Access Journals (Sweden)

    Maihemuti Maimaiti

    2017-11-01

    Full Text Available Uyghur is an agglutinative and morphologically rich language, so natural language processing tasks in Uyghur can be a challenge. Word morphology is important in Uyghur part-of-speech (POS) tagging. However, POS tagging performance suffers from error propagation of morphological analyzers. To address this problem, we propose several models for POS tagging: conditional random fields (CRF), long short-term memory (LSTM), bidirectional LSTM networks (BI-LSTM), LSTM networks with a CRF layer, and BI-LSTM networks with a CRF layer. These models do not depend on stemming and word disambiguation for Uyghur and combine hand-crafted features with neural network models. State-of-the-art performance on Uyghur POS tagging is achieved on test data sets using the proposed approach: 98.41% accuracy on 15 labels and 95.74% accuracy on 64 labels, which are 2.71% and 4% improvements, respectively, over the CRF model results. Using engineered features, our model achieves further improvements of 0.2% (15 labels) and 0.48% (64 labels). The results indicate that the proposed method could be an effective approach for POS tagging in other morphologically rich languages.
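The decoding step shared by the LSTM-CRF and BI-LSTM-CRF models can be sketched as standard Viterbi search over per-position emission scores and tag-to-tag transition scores. The scores below are hypothetical stand-ins for what a trained BI-LSTM and learned transition matrix would produce; this is a generic linear-chain CRF decoder, not the paper's exact model.

```python
def viterbi_decode(emissions, transitions):
    """Viterbi decoding for a linear-chain CRF layer. emissions[t][y] is the
    score (e.g., BiLSTM output) of tag y at position t; transitions[y][y2] is
    the score of moving from tag y to tag y2. Returns the highest-scoring
    tag sequence as a list of tag indices."""
    n_tags = len(emissions[0])
    score = list(emissions[0])  # best score of any path ending in each tag
    back = []                   # backpointers for path reconstruction
    for t in range(1, len(emissions)):
        new_score, ptrs = [], []
        for y2 in range(n_tags):
            best_y = max(range(n_tags),
                         key=lambda y: score[y] + transitions[y][y2])
            ptrs.append(best_y)
            new_score.append(score[best_y] + transitions[best_y][y2]
                             + emissions[t][y2])
        score, back = new_score, back + [ptrs]
    # Trace the best path backwards from the best final tag
    last = max(range(n_tags), key=lambda y: score[y])
    path = [last]
    for ptrs in reversed(back):
        path.append(ptrs[path[-1]])
    return list(reversed(path))
```

The transition matrix is what lets the CRF layer veto locally attractive but globally implausible tag sequences — the main benefit over independent per-token softmax predictions.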

  5. Thin, Soft, Skin-Mounted Microfluidic Networks with Capillary Bursting Valves for Chrono-Sampling of Sweat.

    Science.gov (United States)

    Choi, Jungil; Kang, Daeshik; Han, Seungyong; Kim, Sung Bong; Rogers, John A

    2017-03-01

    Systems for time sequential capture of microliter volumes of sweat released from targeted regions of the skin offer the potential to enable analysis of temporal variations in electrolyte balance and biomarker concentration throughout a period of interest. Current methods that rely on absorbent pads taped to the skin do not offer the ease of use in sweat capture needed for quantitative tracking; emerging classes of electronic wearable sweat analysis systems do not directly manage sweat-induced fluid flows for sample isolation. Here, a thin, soft, "skin-like" microfluidic platform is introduced that bonds to the skin to allow for collection and storage of sweat in an interconnected set of microreservoirs. Pressure induced by the sweat glands drives flow through a network of microchannels that incorporates capillary bursting valves designed to open at different pressures, for the purpose of passively guiding sweat through the system in sequential fashion. A representative device recovers 1.8 µL volumes of sweat each from 0.8 min of sweating into a set of separate microreservoirs, collected from a 0.03 cm² area of skin with approximately five glands, corresponding to a sweat rate of 0.60 µL min⁻¹ per gland. Human studies demonstrate applications in the accurate chemical analysis of lactate, sodium, and potassium concentrations and their temporal variations. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Sample Entropy Analysis of EEG Signals via Artificial Neural Networks to Model Patients’ Consciousness Level Based on Anesthesiologists Experience

    Directory of Open Access Journals (Sweden)

    George J. A. Jiang

    2015-01-01

    Full Text Available Electroencephalogram (EEG) signals, which express the human brain's activity and reflect awareness, have been widely used in research and in medical equipment to build a noninvasive monitoring index of the depth of anesthesia (DOA). The bispectral (BIS) index monitor is one of the best-known and most important indicators, based primarily on EEG signals, used by anesthesiologists when assessing the DOA. In this study, an attempt is made to build a new indicator using EEG signals to provide a more valuable reference for the DOA for clinical researchers. The EEG signals, collected from patients under anesthetic surgery, are filtered using the multivariate empirical mode decomposition (MEMD) method and analyzed using sample entropy (SampEn) analysis. The signals calculated by SampEn are used to train an artificial neural network (ANN) model, using the expert assessment of consciousness level (EACL), assessed by experienced anesthesiologists, as the target to train, validate, and test the ANN. The results achieved with the proposed system are compared to the BIS index. They show that the proposed index not only has characteristics similar to the BIS index but is also closer to the assessments of experienced anesthesiologists, illustrating the consciousness level and reflecting the DOA successfully.
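The SampEn computation at the core of the proposed index can be sketched as follows. This is the generic textbook formulation, not necessarily the authors' exact parameter choices; m = 2 and r = 0.2 × standard deviation are conventional defaults.

```python
import math

def sample_entropy(x, m=2, r=None):
    """Sample entropy (SampEn) of a 1-D signal: the negative log of the
    conditional probability that two subsequences matching for m points
    (within tolerance r, Chebyshev distance) also match for m+1 points.
    Self-matches are excluded. Lower values indicate a more regular signal."""
    n = len(x)
    if r is None:
        mean = sum(x) / n
        r = 0.2 * (sum((v - mean) ** 2 for v in x) / n) ** 0.5  # 0.2 * std

    def count_matches(length):
        # Count template pairs whose Chebyshev distance is within r
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b, a = count_matches(m), count_matches(m + 1)
    if a == 0 or b == 0:
        return float('inf')  # no matches: entropy effectively maximal
    return -math.log(a / b)
```

A deeply anesthetized, regular EEG segment yields a lower SampEn than an awake, irregular one, which is why the SampEn trajectory is a usable input feature for the ANN consciousness model.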

  7. Dynamic Mobile Robot Navigation Using Potential Field Based Immune Network

    Directory of Open Access Journals (Sweden)

    Guan-Chun Luh

    2007-04-01

    Full Text Available This paper proposes a potential field immune network (PFIN) for dynamic navigation of mobile robots in an unknown environment with moving obstacles and fixed/moving targets. The Velocity Obstacle method is utilized to determine imminent obstacle collisions of a robot moving in the time-varying environment. The response of the overall immune network is derived with the aid of a fuzzy system. Simulation results are presented to verify the effectiveness of the proposed methodology in unknown environments with single and multiple moving obstacles.
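The potential-field component underlying such navigators can be sketched with the classical attractive/repulsive gradient formulation. The immune-network arbitration, fuzzy blending, and Velocity Obstacle test of the paper are not reproduced here, and all gains and radii are illustrative assumptions.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """One step of classical potential-field navigation: a force command from
    the negative gradient of an attractive potential toward the goal, plus
    repulsive terms from obstacles closer than the influence radius d0.
    Gains k_att, k_rep and radius d0 are illustrative placeholders."""
    px, py = pos
    gx, gy = goal
    # Attractive term: gradient of 0.5 * k_att * ||goal - pos||^2
    fx, fy = k_att * (gx - px), k_att * (gy - py)
    for ox, oy in obstacles:
        dx, dy = px - ox, py - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            # Repulsive magnitude grows rapidly as the obstacle gets close
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy
```

Iterating `pos += dt * force` yields a path that bends around obstacles while converging on the goal; the known weakness (local minima where attraction and repulsion cancel) is exactly what higher-level schemes such as the paper's immune network are added to escape.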

  8. Dark Matter Profiles in Dwarf Galaxies: A Statistical Sample Using High-Resolution Hα Velocity Fields from PCWI

    Science.gov (United States)

    Relatores, Nicole C.; Newman, Andrew B.; Simon, Joshua D.; Ellis, Richard; Truong, Phuongmai N.; Blitz, Leo

    2018-01-01

    We present high quality Hα velocity fields for a sample of nearby dwarf galaxies (log M/M⊙ = 8.4-9.8) obtained as part of the Dark Matter in Dwarf Galaxies survey. The purpose of the survey is to investigate the cusp-core discrepancy by quantifying the variation of the inner slope of the dark matter distributions of 26 dwarf galaxies, which were selected as likely to have regular kinematics. The data were obtained with the Palomar Cosmic Web Imager, located on the Hale 5m telescope. We extract rotation curves from the velocity fields and use optical and infrared photometry to model the stellar mass distribution. We model the total mass distribution as the sum of a generalized Navarro-Frenk-White dark matter halo along with the stellar and gaseous components. We present the distribution of inner dark matter density profile slopes derived from this analysis. For a subset of galaxies, we compare our results to an independent analysis based on CO observations. In future work, we will compare the scatter in inner density slopes, as well as their correlations with galaxy properties, to theoretical predictions for dark matter core creation via supernovae feedback.
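The generalized NFW halo used in mass modelling of this kind can be sketched as follows; normalisation and scale radius are in illustrative units, and the numerical check simply confirms that the inner logarithmic slope tends to -γ (γ = 1 is a cuspy NFW halo, γ ≈ 0 a core — the quantity at stake in the cusp-core discrepancy).

```python
import math

def gnfw_density(r, rho_s=1.0, r_s=1.0, gamma=1.0):
    """Generalized NFW dark-matter density profile
    rho(r) = rho_s / [(r/r_s)^gamma * (1 + r/r_s)^(3 - gamma)].
    gamma sets the inner slope; rho_s and r_s are illustrative units."""
    x = r / r_s
    return rho_s / (x ** gamma * (1.0 + x) ** (3.0 - gamma))

def inner_log_slope(gamma, r=1e-4, r_s=1.0):
    """Numerical log-slope d(ln rho)/d(ln r) at small radius, which should
    approach -gamma for the gNFW profile."""
    r2 = r * 1.001
    return (math.log(gnfw_density(r2, r_s=r_s, gamma=gamma))
            - math.log(gnfw_density(r, r_s=r_s, gamma=gamma))) \
        / math.log(r2 / r)
```

In a rotation-curve fit, this density (integrated to an enclosed mass) is combined with the stellar and gaseous components so that the posterior on γ measures how cuspy each dwarf's inner halo is.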

  9. Field-Amplified Sample Injection-Micellar Electrokinetic Chromatography for the Determination of Benzophenones in Food Simulants

    Directory of Open Access Journals (Sweden)

    Cristina Félez

    2015-07-01

    Full Text Available A field-amplified sample injection-micellar electrokinetic chromatography (FASI-MEKC) method for the determination of 14 benzophenones (BPs) in a food simulant used in migration studies of food packaging materials was developed, allowing almost baseline separation in less than 21 min. The use of a 10 mM sodium dodecyl sulfate (SDS) solution as sample matrix was mandatory to achieve FASI enhancement of the analyzed BPs. A 21- to 784-fold sensitivity enhancement was achieved with FASI-MEKC, giving limits of detection down to 5.1-68.4 µg/L, with acceptable run-to-run precision (RSD values lower than 22.3%) and accuracy (relative errors lower than 21.0%). Method performance was evaluated by quantifying BPs in the food simulant spiked at 500 µg/L (below the specific migration limit of 600 µg/L established for BP by EU legislation). For a 95% confidence level, no statistical differences were observed between found and spiked concentrations (p value of 0.55), showing that the proposed FASI-MEKC method is suitable for the analysis of BPs in food packaging migration studies at the levels established by EU legislation.

  10. Field-amplified on-line sample stacking for separation and determination of cimaterol, clenbuterol and salbutamol using capillary electrophoresis.

    Science.gov (United States)

    Shi, Yanfang; Huang, Ying; Duan, Jianping; Chen, Hongqing; Chen, Guonan

    2006-08-25

    A capillary electrophoresis method, using field-amplified sample injection (FASI), was developed for the separation and determination of the beta 2-agonists cimaterol, clenbuterol and salbutamol. The optimum conditions for this system were investigated in detail. The precision of the migration time and peak height, and the accuracy, were determined in both intra-day (n = 5) and inter-day (n = 15) assays. Under the optimum conditions, the detection limits (defined as S/N = 3) of this method were found to be lower than 2.0 ng/mL for all three beta 2-agonists, much lower than those of the conventional electro-migration injection method; the enhancement factors were greatly improved, reaching 30-40-fold. Such low detection limits make this method suitable for the determination of the above-mentioned beta 2-agonists in urine samples. The mean recoveries in urine were higher than 96.2%, 95.6% and 95.3% for cimaterol, clenbuterol and salbutamol, respectively, with relative standard deviations lower than 3.5%.

  11. Versatile pulsed laser setup for depth profiling analysis of multilayered samples in the field of cultural heritage

    Science.gov (United States)

    Mendes, N. F. C.; Osticioli, I.; Striova, J.; Sansonetti, A.; Becucci, M.; Castellucci, E.

    2009-04-01

    The present study considers the use of a nanosecond pulsed laser setup capable of performing laser-induced breakdown spectroscopy (LIBS) and pulsed Raman spectroscopy for the study of multilayered objects in the field of cultural heritage. Controlled etching using the 4th harmonic 266 nm emission of a Nd:YAG laser source with an 8 ns pulse duration was performed on organic films and mineral strata meant to simulate different sequences of layers usually found in art objects such as easel and mural paintings. The process of micro-ablation, coupled with powerful spectroscopic techniques operating with the same laser source, constitutes an interesting alternative to mechanical sampling, especially when dealing with artworks such as ceramics and metalwork, which are problematic due to their hardness and brittleness. Another case is that of valuable pieces where sampling is not an option and the materials to analyse lie beneath the surface. The capabilities and limitations of the instrumentation were assessed through several tests in order to characterize the trend of laser ablation on different materials. Monitored ablation was performed on commercial sheets of polyethylene terephthalate (PET), a standard material of known thickness and mechanical stability, and on rabbit glue, an adhesive often used in works of art. Measurements were finally carried out on a specimen with a stratigraphy similar to those found in real mural paintings.

  12. Increased DNA amplification success of non-invasive genetic samples by successful removal of inhibitors from faecal samples collected in the field

    DEFF Research Database (Denmark)

    Hebert, Louise; Darden, Safi K.; Pedersen, Bo Vest

    2011-01-01

    The use of non-invasive genetic sampling (NGS) is becoming increasingly important in the study of wild animal populations. Obtaining DNA from faecal samples is of particular interest because faeces can be collected without deploying sample capture devices. However, PCR amplification of DNA extracted from faeces is problematic because of high concentrations of inhibitors. Here we present a method for increasing the successful application of donor DNA extracted from faecal samples through inhibitor reduction. After standard extraction with a DNA stool kit we used a 'Concentrated Chelex…

  13. The effects of composition, temperature and sample size on the sintering of chem-prep high field varistors.

    Energy Technology Data Exchange (ETDEWEB)

    Garino, Terry J.

    2007-09-01

    The sintering behavior of Sandia chem-prep high-field varistor materials was studied using techniques including in situ shrinkage measurements, optical and scanning electron microscopy, and X-ray diffraction. A thorough literature review of phase behavior, sintering and microstructure in Bi₂O₃-ZnO varistor systems is included. The effects of Bi₂O₃ content (from 0.25 to 0.56 mol%) and of sodium doping level (0 to 600 ppm) on the isothermal densification kinetics were determined between 650 and 825 °C. At ≥750 °C, samples with ≥0.41 mol% Bi₂O₃ have very similar densification kinetics, whereas samples with ≤0.33 mol% begin to densify only after a period of hours at low temperatures. The effect of the sodium content was greatest at ~700 °C for standard 0.56 mol% Bi₂O₃ and was greater in samples with 0.30 mol% Bi₂O₃ than for those with 0.56 mol%. Sintering experiments on samples of differing size and shape found that densification decreases and mass loss increases with increasing surface-area-to-volume ratio. However, these two effects have different causes: the enhancement in densification as samples increase in size appears to be caused by a low-oxygen internal atmosphere that develops, whereas the mass loss is due to the evaporation of bismuth oxide. In situ XRD experiments showed that the bismuth is initially present as an oxycarbonate that transforms to metastable β-Bi₂O₃ by 400 °C. At ~650 °C, coincident with the onset of densification, the cubic binary phase Bi₃₈ZnO₅₈ forms and remains stable to >800 °C, indicating that a eutectic liquid does not form during normal varistor sintering (~730 °C). Finally, the formation and morphology of the bismuth oxide phase regions that form on the varistor surfaces during slow cooling were studied.

  14. Optimized sampling strategy of Wireless sensor network for validation of remote sensing products over heterogeneous coarse-resolution pixel

    Science.gov (United States)

    Peng, J.; Liu, Q.; Wen, J.; Fan, W.; Dou, B.

    2015-12-01

    Coarse-resolution satellite albedo products are increasingly applied in geographical research because of their capability to characterize the spatio-temporal patterns of land surface parameters. In the long-term validation of coarse-resolution satellite products with ground measurements, the scale effect, i.e., the mismatch between point measurements and pixel observations, becomes the main challenge, particularly over heterogeneous land surfaces. Recent advances in Wireless Sensor Network (WSN) technologies offer an opportunity