WorldWideScience

Sample records for networks requires quantification

  1. Network-Based Isoform Quantification with RNA-Seq Data for Cancer Transcriptome Analysis.

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2015-12-01

    High-throughput mRNA sequencing (RNA-Seq) is widely used for transcript quantification of gene isoforms. Since RNA-Seq data alone are often not sufficient to accurately identify the isoform origin of each read for quantification, we propose to explore protein domain-domain interactions as prior knowledge for integrative analysis with RNA-Seq data. We introduce a Network-based method for RNA-Seq-based Transcript Quantification (Net-RSTQ) that integrates the protein domain-domain interaction network with short-read alignments for transcript abundance estimation. Based on our observation that the abundances of isoforms linked by domain-domain interactions in the network are positively correlated, Net-RSTQ models the expression of the neighboring transcripts as Dirichlet priors on the likelihood of the observed read alignments against the transcripts of one gene. The transcript abundances of all genes are then jointly estimated by alternating optimization of multiple EM problems. In simulation, Net-RSTQ effectively improved isoform quantification when isoform co-expression correlated with the interactions. qRT-PCR results on 25 multi-isoform genes in a stem cell line, an ovarian cancer cell line, and a breast cancer cell line also showed that Net-RSTQ estimated isoform proportions more consistent with the RNA-Seq data. In experiments on RNA-Seq data from The Cancer Genome Atlas (TCGA), the transcript abundances estimated by Net-RSTQ were more informative for patient sample classification in ovarian, breast, and lung cancer. All experimental results collectively support Net-RSTQ as a promising approach for isoform quantification. The Net-RSTQ toolbox is available at http://compbio.cs.umn.edu/Net-RSTQ/.
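
    The core estimation idea described above can be illustrated with a short sketch (not the authors' code): a MAP-EM update of isoform proportions for a single gene, where a Dirichlet prior stands in for the network-derived expression of neighboring transcripts. The read likelihoods and prior pseudo-counts below are toy values.

```python
import numpy as np

def map_em_isoform_proportions(read_likelihoods, dirichlet_alpha, n_iter=100):
    """read_likelihoods: (n_reads, n_isoforms) alignment likelihoods for one gene.
    dirichlet_alpha: (n_isoforms,) prior pseudo-counts (>= 1), e.g. scaled
    expression of isoforms linked through domain-domain interactions."""
    n_reads, n_iso = read_likelihoods.shape
    theta = np.full(n_iso, 1.0 / n_iso)              # current isoform proportions
    for _ in range(n_iter):
        # E-step: posterior probability that each read originated from each isoform
        resp = read_likelihoods * theta
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step (MAP): expected read counts plus Dirichlet pseudo-counts
        counts = resp.sum(axis=0) + dirichlet_alpha - 1.0
        theta = counts / counts.sum()
    return theta

# Toy example: 3 isoforms, prior nudged toward isoform 2 by its network neighbors
rng = np.random.default_rng(0)
likelihoods = rng.random((200, 3))
print(map_em_isoform_proportions(likelihoods, dirichlet_alpha=np.array([1.0, 1.0, 5.0])))
```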

  2. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    Science.gov (United States)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a global warming potential about 25 times that of carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions requires cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we use two-stream deep convolutional networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We built large leak datasets for training and evaluation by collecting about 20 videos (roughly 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas wellheads, separators, and tanks. All frames were labeled with a true leak size, spanning eight levels from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, reaches an accuracy of 58.3%. Integrating the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways to test the generalization of the algorithm across leak sources. Several analytic tools, including confusion matrices and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help find and fix super-emitters and improve the cost-effectiveness of leak detection and repair.
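
    As a rough illustration of the two-stream idea (not the authors' architecture), the sketch below builds a small spatial CNN for still plume frames and a small temporal CNN for stacked optical-flow fields, then averages their class scores over eight leak-size bins. The channel counts, image size, and equal fusion weights are assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, in_channels, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

spatial_stream = SmallCNN(in_channels=1)        # one infrared plume frame (assumed single channel)
temporal_stream = SmallCNN(in_channels=2 * 10)  # 10 stacked (dx, dy) optical-flow fields (assumption)

frames = torch.randn(4, 1, 112, 112)            # batch of still frames
flows = torch.randn(4, 20, 112, 112)            # batch of stacked flow fields
# Late fusion: average the per-stream class probabilities over the 8 leak-size bins
probs = 0.5 * (spatial_stream(frames).softmax(dim=1) + temporal_stream(flows).softmax(dim=1))
print(probs.argmax(dim=1))
```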

  3. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  4. NP Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Rotman, Lauren [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Tierney, Brian [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)

    2011-08-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. To support SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2011, ESnet and the Office of Nuclear Physics (NP), of the DOE SC, organized a workshop to characterize the networking requirements of the programs funded by NP. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  5. LHCb Online Networking Requirements

    CERN Document Server

    Jost, B

    2003-01-01

    This document describes the networking requirements of the LHCb online installation. It lists quantitative aspects, such as the number of required switch ports, as well as qualitative features of the equipment, such as minimum buffer sizes in switches. The document covers both the data acquisition network and the controls/general-purpose network. While the numbers represent our best current knowledge and are intended to give (in particular) network equipment manufacturers an overview of our needs, this document should not be confused with a market survey questionnaire or a formal tendering document. However, the information contained in this document will be the input to any such document. A preliminary schedule for procurement and installation is also given.

  6. Fusion Energy Sciences Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [ESNet, Berkeley, CA (United States); Tierney, Brian [ESNet, Berkeley, CA (United States)

    2012-09-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In December 2011, ESnet and the Office of Fusion Energy Sciences (FES), of the DOE Office of Science (SC), organized a workshop to characterize the networking requirements of the programs funded by FES. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  7. Future Home Network Requirements

    DEFF Research Database (Denmark)

    Charbonnier, Benoit; Wessing, Henrik; Lannoo, Bart

    This paper presents the requirements for future Home Area Networks (HAN). Firstly, we discuss the applications and services as well as their requirements. Then, usage scenarios are devised to establish a first specification for the HAN. The main requirements are an increased bandwidth (towards 1...

  8. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. This network connectivity exposes the smart grid to potential security threats. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme that uses a network vulnerability score and an end-to-end security score, determined by the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
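
    This record does not give the AVQS formulas, so the sketch below only illustrates the general shape of an attack-route score: hypothetical per-hop network-vulnerability scores are aggregated and combined with an end-to-end security score for the route. All weights and values are assumptions.

```python
def route_vulnerability(hop_scores, end_to_end_security, w_net=0.6, w_e2e=0.4):
    """hop_scores: hypothetical 0-10 vulnerability scores for each device on the route.
    end_to_end_security: hypothetical 0-10 score for the route's end-to-end protections."""
    network_component = sum(hop_scores) / len(hop_scores)     # average hop vulnerability
    # Stronger end-to-end security lowers the overall route vulnerability.
    return w_net * network_component + w_e2e * (10.0 - end_to_end_security)

# Example attack route in an AMI domain: smart meter -> data concentrator -> head-end system
print(route_vulnerability([6.5, 4.0, 7.2], end_to_end_security=5.0))
```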

  9. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    Science.gov (United States)

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. This network connectivity exposes the smart grid to potential security threats. To address this problem, we develop and apply a novel scheme to measure vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it helps prioritize security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme that uses a network vulnerability score and an end-to-end security score, determined by the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  10. BER Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Alapaty, Kiran; Allen, Ben; Bell, Greg; Benton, David; Brettin, Tom; Canon, Shane; Dart, Eli; Cotter, Steve; Crivelli, Silvia; Carlson, Rich; Dattoria, Vince; Desai, Narayan; Egan, Richard; Tierney, Brian; Goodwin, Ken; Gregurick, Susan; Hicks, Susan; Johnston, Bill; de Jong, Bert; Kleese van Dam, Kerstin; Livny, Miron; Markowitz, Victor; McGraw, Jim; McCord, Raymond; Oehmen, Chris; Regimbal, Kevin; Shipman, Galen; Strand, Gary; Flick, Jeff; Turnbull, Susan; Williams, Dean; Zurawski, Jason

    2010-11-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2010 ESnet and the Office of Biological and Environmental Research, of the DOE Office of Science, organized a workshop to characterize the networking requirements of the science programs funded by BER. The requirements identified at the workshop are summarized and described in more detail in the case studies and the Findings section. A number of common themes emerged from the case studies and workshop discussions. One is that BER science, like many other disciplines, is becoming more and more distributed and collaborative in nature. Another common theme is that data set sizes are exploding. Climate Science in particular is on the verge of needing to manage exabytes of data, and Genomics is on the verge of a huge paradigm shift in the number of sites with sequencers and the amount of sequencer data being generated.

  11. BES Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Biocca, Alan; Carlson, Rich; Chen, Jackie; Cotter, Steve; Tierney, Brian; Dattoria, Vince; Davenport, Jim; Gaenko, Alexander; Kent, Paul; Lamm, Monica; Miller, Stephen; Mundy, Chris; Ndousse, Thomas; Pederson, Mark; Perazzo, Amedeo; Popescu, Razvan; Rouson, Damian; Sekine, Yukiko; Sumpter, Bobby; Dart, Eli; Wang, Cai-Zhuang -Z; Whitelam, Steve; Zurawski, Jason

    2011-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

  12. BES Science Network Requirements

    International Nuclear Information System (INIS)

    Dart, Eli; Tierney, Brian; Biocca, A.; Carlson, R.; Chen, J.; Cotter, S.; Dattoria, V.; Davenport, J.; Gaenko, A.; Kent, P.; Lamm, M.; Miller, S.; Mundy, C.; Ndousse, T.; Pederson, M.; Perazzo, A.; Popescu, R.; Rouson, D.; Sekine, Y.; Sumpter, B.; Wang, C.-Z.; Whitelam, S.; Zurawski, J.

    2011-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

  13. Biological and Environmental Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Balaji, V. [Princeton Univ., NJ (United States). Earth Science Grid Federation (ESGF); Boden, Tom [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cowley, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dart, Eli [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Dattoria, Vince [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Desai, Narayan [Argonne National Lab. (ANL), Argonne, IL (United States); Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Foster, Ian [Argonne National Lab. (ANL), Argonne, IL (United States); Goldstone, Robin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gregurick, Susan [U.S. Dept. of Energy, Washington, DC (United States). Biological Systems Science Division; Houghton, John [U.S. Dept. of Energy, Washington, DC (United States). Biological and Environmental Research (BER) Program; Izaurralde, Cesar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnston, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Joseph, Renu [U.S. Dept. of Energy, Washington, DC (United States). Climate and Environmental Sciences Division; Kleese-van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lipton, Mary [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Monga, Inder [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Pritchard, Matt [British Atmospheric Data Centre (BADC), Oxon (United Kingdom); Rotman, Lauren [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Strand, Gary [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Stuart, Cory [Argonne National Lab. (ANL), Argonne, IL (United States); Tatusova, Tatiana [National Inst. of Health (NIH), Bethesda, MD (United States); Tierney, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Thomas, Brian [Univ. of California, Berkeley, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zurawski, Jason [Internet2, Washington, DC (United States)

    2013-09-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

  14. Pore network quantification of sandstones under experimental CO2 injection using image analysis

    Science.gov (United States)

    Berrezueta, Edgar; González-Menéndez, Luís; Ordóñez-Casado, Berta; Olaya, Peter

    2015-04-01

    Automated image identification and quantification of minerals, pores, and textures, together with petrographic analysis, can be applied to improve pore-system characterization in sedimentary rocks. Our case study focuses on applying these techniques to study the evolution of the rock pore network when subjected to supercritical CO2 injection. We propose a Digital Image Analysis (DIA) protocol that guarantees measurement reproducibility and reliability. It can be summarized in the following stages: (i) detailed description of mineralogy and texture (before and after CO2 injection) by optical and scanning electron microscopy (SEM) on thin sections; (ii) adjustment and calibration of the DIA tools; (iii) a data acquisition protocol based on image capture under different polarization conditions (synchronized movement of the polarizers); and (iv) DIA study and quantification that allows (a) identification and isolation of pixels belonging to the same category (minerals vs. pores) in each sample and (b) measurement of changes in the pore network after the samples have been exposed to new conditions (in our case, supercritical CO2 injection). Finally, the petrography and the measured data were interpreted with an automated approach. In our applied study, the DIA results highlight the changes observed by SEM and optical microscopy, which consisted of a porosity increase after CO2 treatment. Other changes were minor: variations in the roughness and roundness of pore edges and in pore aspect ratio, seen in the larger pore population. Additionally, statistical tests on the measured pore parameters were applied to verify that the differences observed between samples before and after CO2 injection were significant.
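
    A minimal sketch of stage (iv), under the assumption that pore pixels can be separated from mineral pixels by a simple gray-level threshold: the pore fraction is measured before and after CO2 injection and compared. The threshold and images are synthetic placeholders.

```python
import numpy as np

def pore_fraction(gray_image, pore_threshold=60):
    """Count pixels darker than the threshold as pore space (hypothetical threshold)."""
    return (gray_image < pore_threshold).sum() / gray_image.size

rng = np.random.default_rng(1)
before = rng.integers(0, 256, size=(512, 512))               # synthetic stand-in for a thin-section image
after = np.where(rng.random((512, 512)) < 0.02, 10, before)  # add ~2% new dark (pore) pixels
print(f"pore fraction before: {pore_fraction(before):.3%}  after: {pore_fraction(after):.3%}")
```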

  15. HEP Science Network Requirements--Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

    2010-04-27

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data-intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program-related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans

  16. HEP Science Network Requirements. Final Report

    International Nuclear Information System (INIS)

    Dart, Eli; Tierney, Brian

    2010-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009 ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data-intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program-related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity

  17. ASCR Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Tierney, Brian

    2009-08-24

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2009 ESnet and the Office of Advanced Scientific Computing Research (ASCR), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by ASCR. The ASCR facilities anticipate significant increases in wide area bandwidth utilization, driven largely by the increased capabilities of computational resources and the wide scope of collaboration that is a hallmark of modern science. Many scientists move data sets between facilities for analysis, and in some cases (for example the Earth System Grid and the Open Science Grid), data distribution is an essential component of the use of ASCR facilities by scientists. Due to the projected growth in wide area data transfer needs, the ASCR supercomputer centers all expect to deploy and use 100 Gigabit per second networking technology for wide area connectivity as soon as that deployment is financially feasible. In addition to the network connectivity that ESnet provides, the ESnet Collaboration Services (ECS) are critical to several science communities. ESnet identity and trust services, such as the DOEGrids certificate authority, are widely used both by the supercomputer centers and by collaborations such as Open Science Grid (OSG) and the Earth System Grid (ESG). Ease of use is a key determinant of the scientific utility of network-based services. Therefore, a key enabling aspect for scientists' beneficial use of high

  18. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data, and finally the accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and supports importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because all event tree and fault tree models must be combined, and it requires an efficient computer code because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method of using KIRAP's cut set generator, and the method of performing accident sequence quantification with KIRAP. (author). 6 refs.
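
    The basic arithmetic of accident sequence quantification can be sketched as follows (a simplification, not KIRAP itself): an initiating-event frequency is multiplied by the sum of minimal cut set probabilities under the rare-event approximation. The frequencies and probabilities are illustrative.

```python
import math

def sequence_frequency(initiator_per_year, minimal_cut_sets):
    """minimal_cut_sets: list of cut sets, each a list of basic-event failure probabilities."""
    # Rare-event approximation: sum the probabilities of the minimal cut sets,
    # where each cut set requires all of its basic events to fail.
    return initiator_per_year * sum(math.prod(cut_set) for cut_set in minimal_cut_sets)

# Illustrative sequence: a 0.05/yr initiating event with two dominant cut sets
print(sequence_frequency(0.05, [[1e-3, 2e-2], [5e-4]]))   # -> 2.6e-05 per year
```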

  19. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data, and finally the accident sequence quantification. In a PSA, accident sequence quantification calculates the core damage frequency and supports importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because all event tree and fault tree models must be combined, and it requires an efficient computer code because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method of using KIRAP's cut set generator, and the method of performing accident sequence quantification with KIRAP. (author). 6 refs

  20. FES Science Network Requirements - Report of the Fusion Energy Sciences Network Requirements Workshop Conducted March 13 and 14, 2008

    International Nuclear Information System (INIS)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    2008-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea. In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote

  21. FES Science Network Requirements - Report of the Fusion Energy Sciences Network Requirements Workshop Conducted March 13 and 14, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    2008-07-10

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea. In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote

  22. In vivo MRS metabolite quantification using genetic optimization

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    The in vivo quantification of metabolite concentrations revealed in magnetic resonance spectroscopy (MRS) spectra is the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but they show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that treats the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, and two GA configurations are applied to artificial data. Moreover, the introduced quantification technique deals with the overlapping of metabolite peaks, a considerably difficult situation that occurs under real conditions. Appropriate experiments have demonstrated the efficiency of the introduced methodology on artificial MRS data, establishing it as a generic metabolite quantification procedure.
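
    A toy version of the proposed framework is sketched below: metabolite amplitudes for overlapping Lorentzian lineshapes are estimated by a simple evolutionary search (selection plus Gaussian mutation, without crossover) that maximizes the fit to a synthetic spectrum. The lineshapes, noise level, and GA settings are assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 500)

def lorentz(centre, width):
    return 1.0 / (1.0 + ((x - centre) / width) ** 2)

basis = np.stack([lorentz(3.0, 0.2), lorentz(3.2, 0.2), lorentz(7.0, 0.3)])  # overlapping peaks
true_amps = np.array([1.0, 0.6, 1.5])
spectrum = true_amps @ basis + rng.normal(0, 0.02, x.size)                    # synthetic "observed" spectrum

def fitness(amps):
    return -np.mean((amps @ basis - spectrum) ** 2)        # negative misfit

population = rng.uniform(0, 3, size=(60, 3))               # candidate amplitude triplets
for _ in range(200):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-20:]]         # keep the 20 fittest
    children = parents[rng.integers(0, 20, size=40)]       # clone parents at random
    children = children + rng.normal(0, 0.05, children.shape)   # Gaussian mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print("true amplitudes:     ", true_amps)
print("estimated amplitudes:", best.round(2))
```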

  23. In vivo MRS metabolite quantification using genetic optimization

    International Nuclear Information System (INIS)

    Papakostas, G A; Mertzios, B G; Karras, D A; Van Ormondt, D; Graveron-Demilly, D

    2011-01-01

    The in vivo quantification of metabolite concentrations revealed in magnetic resonance spectroscopy (MRS) spectra is the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but they show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that treats the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, and two GA configurations are applied to artificial data. Moreover, the introduced quantification technique deals with the overlapping of metabolite peaks, a considerably difficult situation that occurs under real conditions. Appropriate experiments have demonstrated the efficiency of the introduced methodology on artificial MRS data, establishing it as a generic metabolite quantification procedure

  24. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    Science.gov (United States)

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (hundreds to over 1000 neurons). However, a fully integrated, automated workflow for the analysis and visualization of neural microcircuits from high-speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, the Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software package developed in MATLAB for automated quantification of numerous biologically relevant features of both single-cell calcium dynamics and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and a significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software to facilitate comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.
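
    The template-based detection mentioned above can be sketched as follows (a simplification, not the FluoroSNNAP implementation): a canonical calcium-transient template is slid along a fluorescence trace, and windows whose correlation with the template exceeds a threshold are flagged as transient onsets. The template shape, trace, and threshold are illustrative.

```python
import numpy as np

def detect_transients(trace, template, corr_threshold=0.85):
    """Return onset indices where the trace correlates strongly with the template."""
    n = len(template)
    onsets = []
    for i in range(len(trace) - n):
        r = np.corrcoef(trace[i:i + n], template)[0, 1]
        # Require a strong match and skip indices inside an already-detected transient.
        if r > corr_threshold and (not onsets or i - onsets[-1] > n):
            onsets.append(i)
    return onsets

t = np.arange(40)
template = (1 - np.exp(-t / 2.0)) * np.exp(-t / 15.0)      # fast rise, slow decay
trace = np.random.default_rng(3).normal(0, 0.02, 1000)     # baseline noise (synthetic dF/F)
for true_onset in (200, 600):
    trace[true_onset:true_onset + 40] += template           # embed two transients
print(detect_transients(trace, template))                   # expected onsets near 200 and 600
```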

  25. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Onofrejova, Lucia; Farkova, Marta [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Preisler, Jan, E-mail: preisler@chemi.muni.cz [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic)

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as ¹⁸O, ¹⁵N, or ¹³C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerization of 2-7. The tetraoligosaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches used for data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard, together with some simple numerical calculations. On the other hand, ANN was found to compensate for a non-linear MALDI response and variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out. A common MALDI matrix and sample preparation were used and only basic parameters, such as sampling and laser energy, were optimised prior to quantification.

  26. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements are the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  27. Application of Fuzzy Comprehensive Evaluation Method in Trust Quantification

    Directory of Open Access Journals (Sweden)

    Shunan Ma

    2011-10-01

    Trust can play an important role in the sharing of resources and information in open network environments. Trust quantification is thus an important issue in dynamic trust management. Considering the fuzziness and uncertainty of trust, in this paper we propose a fuzzy comprehensive evaluation method to quantify trust, along with a trust quantification algorithm. Simulation results show that the proposed trust quantification algorithm can effectively quantify trust and that the quantified value of an entity's trust is consistent with the entity's behavior.
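
    A minimal numeric example of one fuzzy comprehensive evaluation step (the factors, grades, weights, and membership values are invented for illustration): a factor weight vector is combined with a membership matrix over trust grades, and the resulting fuzzy vector is defuzzified into a single trust value.

```python
import numpy as np

weights = np.array([0.4, 0.35, 0.25])        # factor weights: service quality, honesty, recommendations
membership = np.array([                      # membership of each factor in the grades (high, medium, low)
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.2, 0.5, 0.3],
])
evaluation = weights @ membership            # fuzzy evaluation vector over the trust grades
evaluation /= evaluation.sum()               # normalize (already near 1 here)
grade_values = np.array([0.9, 0.5, 0.1])     # representative numeric value of each grade
print("trust value:", round(float(evaluation @ grade_values), 3))
```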

  28. Reducing Wind Tunnel Data Requirements Using Neural Networks

    Science.gov (United States)

    Ross, James C.; Jorgenson, Charles C.; Norgaard, Magnus

    1997-01-01

    The use of neural networks to minimize the amount of data required to completely define the aerodynamic performance of a wind tunnel model is examined. The accuracy requirements for commercial wind tunnel test data are very severe and are difficult to reproduce using neural networks. For the current work, multiple-input, single-output networks were trained using a Levenberg-Marquardt algorithm for each of the aerodynamic coefficients. When applied to the aerodynamics of a 55% scale model of a U.S. Air Force/NASA generic fighter configuration, this scheme provided accurate models of the lift, drag, and pitching-moment coefficients. Using only 50% of the data acquired during the wind tunnel test, the trained neural network had a predictive accuracy equal to or better than the accuracy of the experimental measurements.

  29. A microsensor array for quantification of lubricant contaminants using a back propagation artificial neural network

    International Nuclear Information System (INIS)

    Zhu, Xiaoliang; Du, Li; Zhe, Jiang; Liu, Bendong

    2016-01-01

    We present a method based on an electrochemical sensor array and a back propagation artificial neural network for detection and quantification of four properties of lubrication oil, namely water (0, 500 ppm, 1000 ppm), total acid number (TAN) (13.1, 13.7, 14.4, 15.6 mg KOH g⁻¹), soot (0, 1%, 2%, 3%), and sulfur content (1.3%, 1.37%, 1.44%, 1.51%). The sensor array, consisting of four micromachined electrochemical sensors, detects the four properties with overlapping sensitivities. A total of 36 oil samples containing mixtures of water, soot, and sulfuric acid at different concentrations were prepared for testing. The sensor array's responses were then divided into three sets: training (80% of the data), validation (10%), and testing (10%). Several back propagation artificial neural network architectures were trained with the training and validation sets; one architecture with four input neurons, 50 and 5 neurons in the first and second hidden layers, and four neurons in the output layer was selected. The selected neural network was then tested using the four sets of testing data (10%). Test results demonstrated that the developed artificial neural network is able to quantitatively determine the four lubrication properties (water, TAN, soot, and sulfur content) with maximum prediction errors of 18.8%, 6.0%, 6.7%, and 5.4%, respectively, indicating a good match between the target and predicted values. With the developed network, the sensor array could potentially be used for online lubricant oil condition monitoring. (paper)
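
    The reported 4-50-5-4 network shape can be sketched as a back-propagation regressor as follows; the training data, activation choices, and hyperparameters here are placeholders rather than the study's settings.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 50), nn.Sigmoid(),
    nn.Linear(50, 5), nn.Sigmoid(),
    nn.Linear(5, 4),                  # outputs: water, TAN, soot, sulfur content
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.rand(36, 4)                 # 36 sensor-array response vectors (placeholder data)
y = torch.rand(36, 4)                 # corresponding oil-property targets (placeholder data)
for _ in range(2000):                 # back-propagation training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.4f}")
```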

  30. Modeling Irrigation Networks for the Quantification of Potential Energy Recovering: A Case Study

    Directory of Open Access Journals (Sweden)

    Modesto Pérez-Sánchez

    2016-06-01

    Water irrigation systems are required to provide adequate pressure levels in any sort of network. Quite frequently, this requirement is met by using pressure reducing valves (PRVs). Nevertheless, using hydraulic machines to recover energy instead of PRVs could reduce the energy footprint of the whole system. In this research, a new methodology is proposed to help water managers quantify the potential for energy recovery in irrigation water networks with suitable topographic conditions. EPANET has been used to create a model based on probabilities of irrigation and flow distribution in real networks. Knowledge of the flows and pressures in the network is necessary to perform an analysis of economic viability. Using the proposed methodology, a case study in a typical Mediterranean region has been analyzed and the potential available energy has been estimated. The study quantifies the theoretical energy recoverable if hydraulic machines were installed in the network. In particular, the maximum energy potentially recovered in the system has been estimated at up to 188.23 MWh/year, with a potential saving of non-renewable energy resources (coal and gas) and an associated reduction in CO2 emissions of 137.4 t/year.
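
    The quantity being estimated above reduces, at each candidate recovery point, to the familiar hydraulic power relation P = ρ g Q H η, integrated over the operating hours. The sketch below works through one such point with illustrative flow, excess head, efficiency, and operating time.

```python
RHO_WATER, G = 1000.0, 9.81            # water density (kg/m3), gravitational acceleration (m/s2)

def recoverable_energy_mwh(flow_m3_s, excess_head_m, efficiency, hours_per_year):
    """Energy recoverable at one point where excess pressure would otherwise be dissipated in a PRV."""
    power_w = RHO_WATER * G * flow_m3_s * excess_head_m * efficiency
    return power_w * hours_per_year / 1e6      # W * h = Wh, then Wh -> MWh

# Illustrative hydrant point: 50 L/s, 30 m of excess head, 55% turbine efficiency, 1500 h/year
print(f"{recoverable_energy_mwh(0.05, 30.0, 0.55, 1500):.1f} MWh/year")
```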

  31. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K⁺ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K⁺ in the presence of Na⁺, Li⁺, and Mg²⁺ ions. Successful addition of a suspended lipophilic phase to a wax-printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K⁺ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM sample K⁺. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K⁺ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out with the naked eye.

  32. Markets in real electric networks require reactive prices

    International Nuclear Information System (INIS)

    Hogan, W.W.

    1996-01-01

    Extending earlier seminal work, the author finds that locational spot price differences in an electric network provide the natural measure of the appropriate internodal transport charge. However, the problem of loop flow requires different economic intuition for interpreting the implications of spot pricing. The Direct Current model, which is the usual approximation for estimating spot prices, ignores reactive power effects; this approximation is best when thermal constraints create network congestion. However, when voltage constraints are problematic, the DC Load model is insufficient; a full AC Model is required to determine both real and reactive spot prices. 16 figs., 3 tabs., 22 refs

  33. Belle-II Experiment Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Asner, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Bell, Greg [ESnet; Carlson, Tim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Cowley, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Dart, Eli [ESnet; Erwin, Brock [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Godang, Romulus [Univ. of South Alabama, Mobile, AL (United States); Hara, Takanori [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Johnson, Jerry [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Johnson, Ron [Univ. of Washington, Seattle, WA (United States); Johnston, Bill [ESnet; Dam, Kerstin Kleese-van [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Kaneko, Toshiaki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Kubota, Yoshihiro [NII; Kuhr, Thomas [Karlsruhe Inst. of Technology (KIT) (Germany); McCoy, John [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Miyake, Hideki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Monga, Inder [ESnet; Nakamura, Motonori [NII; Piilonen, Leo [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Pordes, Ruth [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ray, Douglas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Russell, Richard [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schram, Malachi [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schroeder, Jim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Sevior, Martin [Univ. of Melbourne (Australia); Singh, Surya [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Suzuki, Soh [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Sasaki, Takashi [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Williams, Jim [Indiana Univ., Bloomington, IN (United States)

    2013-05-28

    The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999 to 2010. The detector accumulated more than 1 ab⁻¹ of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade, to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB are expected in 2015. In October 2012, senior members of the Belle II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration's work.

  34. What would dense atmospheric observation networks bring to the quantification of city CO2 emissions?

    Science.gov (United States)

    Wu, Lin; Broquet, Grégoire; Ciais, Philippe; Bellassen, Valentin; Vogel, Felix; Chevallier, Frédéric; Xueref-Remy, Irène; Wang, Yilong

    2016-06-01

    Cities, though they currently cover only a very small portion of the global land surface, directly release to the atmosphere about 44 % of global energy-related CO2, but they are associated with 71-76 % of CO2 emissions from global final energy use. Although many cities have set voluntary climate plans, their CO2 emissions are not evaluated by the monitoring, reporting, and verification (MRV) procedures that play a key role for market- or policy-based mitigation actions. Here we analyze the potential of a monitoring tool that could support the development of such procedures at the city scale. It is based on an atmospheric inversion method that exploits inventory data and continuous atmospheric CO2 concentration measurements from a network of stations within and around cities to estimate city CO2 emissions. This monitoring tool is configured for the quantification of the total and sectoral CO2 emissions in the Paris metropolitan area (~12 million inhabitants and 11.4 TgC emitted in 2010) during the month of January 2011. Its performance is evaluated in terms of uncertainty reduction based on observing system simulation experiments (OSSEs), analyzed as a function of the number of sampling sites (measuring at 25 m a.g.l.) and of the network design. The instruments presently used to measure CO2 concentrations at research stations are expensive (typically ~EUR 50k per sensor), which has limited the few current pilot city networks to around 10 sites. Larger theoretical networks are studied here to assess the potential benefit of hypothetical operational lower-cost sensors. The setup of our inversion system is based on a number of diagnostics and assumptions from previous city-scale inversion experiences with real data. We find that, given our assumptions underlying the configuration of the OSSEs, with only 10 stations the uncertainty in the total city CO2 emission over 1 month is significantly reduced by the inversion, by ~42 %. It can be further reduced by extending the
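
    For a linear Gaussian inversion of the kind used in these OSSEs, the uncertainty reduction can be computed directly from the posterior error covariance A = (HᵀR⁻¹H + B⁻¹)⁻¹. The sketch below does this with toy values for the observation operator H, the prior (inventory) error covariance B, and the observation error covariance R.

```python
import numpy as np

n_terms, n_obs = 3, 10                       # e.g. total emission plus two sectors; 10 observation samples
rng = np.random.default_rng(4)
H = rng.random((n_obs, n_terms))             # toy sensitivity of station CO2 to the emission terms
B = np.diag([0.5, 0.7, 0.6]) ** 2            # prior (inventory) error covariance
R = np.eye(n_obs) * 0.3 ** 2                 # observation + transport-model error covariance

A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))   # posterior error covariance
uncertainty_reduction = 1.0 - np.sqrt(np.diag(A) / np.diag(B))
print("fractional uncertainty reduction per term:", uncertainty_reduction.round(2))
```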

  15. Requirements for data integration platforms in biomedical research networks: a reference model.

    Science.gov (United States)

    Ganzinger, Matthias; Knaup, Petra

    2015-01-01

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.

  16. Nuclear Physics Science Network Requirements Workshop, May 2008 - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Ed., Brian L; Dart, Ed., Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-11-10

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In May 2008, ESnet and the Nuclear Physics (NP) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the NP Program Office. Most of the key DOE sites for NP related work will require significant increases in network bandwidth in the 5 year time frame. This includes roughly 40 Gbps for BNL, and 20 Gbps for NERSC. Total transatlantic requirements are on the order of 40 Gbps, and transpacific requirements are on the order of 30 Gbps. Other key sites are Vanderbilt University and MIT, which will need on the order of 20 Gbps bandwidth to support data transfers for the CMS Heavy Ion program. In addition to bandwidth requirements, the workshop emphasized several points in regard to science process and collaboration. One key point is the heavy reliance on Grid tools and infrastructure (both PKI and tools such as GridFTP) by the NP community. The reliance on Grid software is expected to increase in the future. Therefore, continued development and support of Grid software is very important to the NP science community. Another key finding is that scientific productivity is greatly enhanced by easy researcher-local access to instrument data. This is driving the creation of distributed repositories for instrument data at collaborating institutions, along with a corresponding increase in demand for network-based data transfers and the tools

  17. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  18. High Energy Physics and Nuclear Physics Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  19. Optimal Information Processing in Biochemical Networks

    Science.gov (United States)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the 'intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
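
    As a concrete illustration of the information-theoretic quantities involved, the snippet below computes the mutual information between two network nodes from a small joint copy-number distribution; the 3x3 joint distribution is a made-up toy example, not data from the talk.

```python
import numpy as np

# Toy joint distribution p(x, y) over small integer copy numbers of two species.
p_xy = np.array([[0.10, 0.05, 0.02],
                 [0.08, 0.20, 0.05],
                 [0.02, 0.08, 0.40]])
p_xy /= p_xy.sum()                       # normalise to a proper distribution

p_x = p_xy.sum(axis=1, keepdims=True)    # marginal of x (rows)
p_y = p_xy.sum(axis=0, keepdims=True)    # marginal of y (columns)

mask = p_xy > 0
mi = np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask]))
print(f"I(X;Y) = {mi:.3f} bits")
```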

  20. Generation of structural MR images from amyloid PET: Application to MR-less quantification.

    Science.gov (United States)

    Choi, Hongyoon; Lee, Dong Soo

    2017-12-07

    Structural magnetic resonance (MR) images concomitantly acquired with PET images can provide crucial anatomical information for precise quantitative analysis. However, in the clinical setting, not all the subjects have corresponding MR. Here, we developed a model to generate structural MR images from amyloid PET using deep generative networks. We applied our model to quantification of cortical amyloid load without structural MR. Methods: We used florbetapir PET and structural MR data of Alzheimer's Disease Neuroimaging Initiative database. The generative network was trained to generate realistic structural MR images from florbetapir PET images. After the training, the model was applied to the quantification of cortical amyloid load. PET images were spatially normalized to the template space using the generated MR and then standardized uptake value ratio (SUVR) of the target regions was measured by predefined regions-of-interests. A real MR-based quantification was used as the gold standard to measure the accuracy of our approach. Other MR-less methods, a normal PET template-based, multi-atlas PET template-based and PET segmentation-based normalization/quantification methods, were also tested. We compared performance of quantification methods using generated MR with that of MR-based and MR-less quantification methods. Results: Generated MR images from florbetapir PET showed visually similar signal patterns to the real MR. The structural similarity index between real and generated MR was 0.91 ± 0.04. Mean absolute error of SUVR of cortical composite regions estimated by the generated MR-based method was 0.04±0.03, which was significantly smaller than other MR-less methods (0.29±0.12 for the normal PET-template, 0.12±0.07 for multiatlas PET-template and 0.08±0.06 for PET segmentation-based methods). Bland-Altman plots revealed that the generated MR-based SUVR quantification was the closest to the SUVR values estimated by the real MR-based method. Conclusion

  1. Fast quantification of proton magnetic resonance spectroscopic imaging with artificial neural networks

    Science.gov (United States)

    Bhat, Himanshu; Sajja, Balasrinivasa Rao; Narayana, Ponnada A.

    2006-11-01

    Accurate quantification of the MRSI-observed regional distribution of metabolites involves relatively long processing times. This is particularly true in dealing with the large amount of data that is typically acquired in multi-center clinical studies. To significantly shorten the processing time, an artificial neural network (ANN)-based approach was explored for quantifying the phase-corrected (as opposed to magnitude) spectra. Specifically, in these studies a radial basis function neural network (RBFNN) was used. This method was tested on simulated and normal human brain data acquired at 3T. The N-acetyl aspartate (NAA)/creatine (Cr), choline (Cho)/Cr, glutamate + glutamine (Glx)/Cr, and myo-inositol (mI)/Cr ratios in normal subjects were compared with the line fitting (LF) technique and jMRUI-AMARES analysis, and published values. The average NAA/Cr, Cho/Cr, Glx/Cr and mI/Cr ratios in normal controls were found to be 1.58 ± 0.13, 0.9 ± 0.08, 0.7 ± 0.17 and 0.42 ± 0.07, respectively. The corresponding ratios using the LF and jMRUI-AMARES methods were 1.6 ± 0.11, 0.95 ± 0.08, 0.78 ± 0.18, 0.49 ± 0.1 and 1.61 ± 0.15, 0.78 ± 0.07, 0.61 ± 0.18, 0.42 ± 0.13, respectively. These results agree with those published in the literature. Bland-Altman analysis indicated an excellent agreement and minimal bias between the results obtained with RBFNN and other methods. The computational time for the current method was 15 s compared to approximately 10 min for the LF-based analysis.
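
    The core idea of an RBFNN quantifier, mapping a spectrum directly to a metabolite ratio through Gaussian basis units and a linear output layer, can be sketched in a few lines of numpy. The synthetic two-peak "spectra", the number of centres and the width heuristic below are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 128)

def spectrum(a1, a2):
    """Toy two-peak 'spectrum' with additive noise."""
    s = a1 * np.exp(-(x - 0.3) ** 2 / 0.002) + a2 * np.exp(-(x - 0.7) ** 2 / 0.002)
    return s + 0.01 * rng.standard_normal(x.size)

# Training set: spectra with known peak amplitudes; the target is their ratio.
amps = rng.uniform(0.5, 2.0, size=(200, 2))
X = np.array([spectrum(a1, a2) for a1, a2 in amps])
y = amps[:, 0] / amps[:, 1]

# RBF network: Gaussian units centred on a subset of training spectra,
# followed by a linear output layer fitted by least squares.
centres = X[rng.choice(len(X), 30, replace=False)]
width = np.median(np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2))

def design(Xm):
    d2 = ((Xm[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
test = np.array([spectrum(1.5, 1.0)])
print("predicted ratio:", float(design(test) @ w))   # should be close to 1.5
```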

  2. Network-Based Material Requirements Planning (NBMRP) in ...

    African Journals Online (AJOL)

    Network-Based Material Requirements Planning (NBMRP) in Product Development Project. ... International Journal of Development and Management Review ... To address the problems, this study evaluated the existing material planning practice, and formulated a NBMRP model out of the variables of the existing MRP and ...

  3. On the Development of Methodology for Planning and Cost-Modeling of a Wide Area Network

    OpenAIRE

    Ahmedi, Basri; Mitrevski, Pece

    2014-01-01

    The most important stages in designing a computer network in a wider geographical area include: definition of requirements, topological description, identification and calculation of relevant parameters (i.e. traffic matrix), determining the shortest path between nodes, quantification of the effect of various levels of technical and technological development of urban areas involved, the cost of technology, and the cost of services. These parameters differ for WAN networks in different regions...
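
    For the "shortest path between nodes" stage, a standard Dijkstra computation over a weighted adjacency list is the usual building block; a small self-contained sketch is shown below, with hypothetical node names and link costs (e.g. leased-line cost or latency).

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path costs from src; graph = {node: [(neighbour, cost), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical WAN topology: four sites with link costs.
wan = {"A": [("B", 4), ("C", 9)],
       "B": [("D", 3), ("C", 7)],
       "D": [("C", 2)],
       "C": []}
print(dijkstra(wan, "A"))   # {'A': 0.0, 'B': 4.0, 'C': 9.0, 'D': 7.0}
```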

  4. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    Science.gov (United States)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation
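
    A limit-state evaluation of a frequency requirement on a simple lumped spring-mass system, of the kind mentioned above, can be sketched with plain Monte Carlo sampling as below; the dispersions and the 9.5 Hz requirement are invented for illustration and are not SLS values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical dispersions on a 1-DOF spring-mass model (illustrative only).
k = rng.normal(1.0e6, 5.0e4, n)          # stiffness [N/m]
m = rng.normal(250.0, 10.0, n)           # mass [kg]
f = np.sqrt(k / m) / (2.0 * np.pi)       # natural frequency [Hz]

f_req = 9.5                              # assumed requirement: first mode above 9.5 Hz
g = f - f_req                            # limit-state function; failure when g < 0
pf = np.mean(g < 0.0)
print(f"estimated probability of violating the {f_req} Hz requirement: {pf:.4f}")
```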

  5. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease that remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab, which creates a 3D representation of the lungs with the compromised, dilated regions inside. The quantification of lung lesions was also performed for the same patients using CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit for the patient and cost-benefit ratio for the institution. (author)
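
    The Bland-Altman agreement check between the radiograph-based and CT-based quantifications reduces to the bias and the 95 % limits of agreement of the paired differences; a short sketch with hypothetical paired lesion volumes is given below.

```python
import numpy as np

# Hypothetical paired lesion-volume estimates (cm^3): radiograph-based vs CT-based.
radiograph = np.array([120.0, 95.0, 230.0, 60.0, 310.0, 150.0, 80.0, 200.0])
ct         = np.array([132.0, 90.0, 250.0, 70.0, 280.0, 165.0, 74.0, 215.0])

diff = radiograph - ct
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)            # half-width of the 95 % limits of agreement
print(f"bias = {bias:.1f} cm^3, "
      f"limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}] cm^3")
```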

  6. IX : An OS for datacenter applications with aggressive networking requirements

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The conventional wisdom is that aggressive networking requirements, such as high packet rates for small messages and microsecond-scale tail latency, are best addressed outside the kernel, in a user-level networking stack. We present IX, a dataplane operating system designed to support low-latency, high-throughput and high-connection-count applications.  Like classic operating systems such as Linux, IX provides strong protection guarantees to the networking stack.  However, and unlike classic operating systems, IX is designed from the ground up to support applications with aggressive networking requirements on dense multi-core platforms with 10GbE and 40GbE Ethernet NICs.  IX outperforms Linux by an order of magnitude on micro benchmarks, and by up to 3.6x when running an unmodified memcached, a popular key-value store. The presentation is based on the joint work with Adam Belay, George Prekas, Ana Klimovic, Sam Grossman and Christos Kozyrakis, published at OSDI 2014; Best P...

  7. BER Science Network Requirements Workshop -- July 26-27,2007

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L.; Dart, Eli

    2008-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In July 2007, ESnet and the Biological and Environmental Research (BER) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the BER Program Office. These included several large programs and facilities, including Atmospheric Radiation Measurement (ARM) Program and the ARM Climate Research Facility (ACRF), Bioinformatics and Life Sciences Programs, Climate Sciences Programs, the Environmental Molecular Sciences Laboratory at PNNL, the Joint Genome Institute (JGI). National Center for Atmospheric Research (NCAR) also participated in the workshop and contributed a section to this report due to the fact that a large distributed data repository for climate data will be established at NERSC, ORNL and NCAR, and this will have an effect on ESnet. Workshop participants were asked to codify their requirements in a 'case study' format, which summarizes the instruments and facilities necessary for the science and the process by which the science is done, with emphasis on the network services needed and the way in which the network is used. Participants were asked to consider three time scales in their case studies--the near term (immediately and up to 12 months in the future), the medium term (3-5 years in the future), and the long term (greater than 5 years in the future). In addition to achieving its goal of collecting and

  8. GMO quantification: valuable experience and insights for the future.

    Science.gov (United States)

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.

  9. Requirements for advanced decision support tools in future distribution network planning

    NARCIS (Netherlands)

    Grond, M.O.W.; Morren, J.; Slootweg, J.G.

    2013-01-01

    This paper describes the need and requirements for advanced decision support tools in future network planning from a distribution network operator perspective. The existing tools will no longer be satisfactory for future application due to present developments in the electricity sector that increase

  10. A survey on social networks to determine requirements for Learning Networks for professional development of university staff

    NARCIS (Netherlands)

    Brouns, Francis; Berlanga, Adriana; Fetter, Sibren; Bitter-Rijpkema, Marlies; Van Bruggen, Jan; Sloep, Peter

    2009-01-01

    Brouns, F., Berlanga, A. J., Fetter, S., Bitter-Rijpkema, M. E., Van Bruggen, J. M., & Sloep, P. B. (2011). A survey on social networks to determine requirements for Learning Networks for professional development of university staff. International Journal of Web Based Communities, 7(3), 298-311.

  11. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R.

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general. PMID:26536227

  12. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.
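
    A minimal version of the centrality-guided recovery idea can be sketched with networkx: disrupt a toy network, restore failed nodes in order of betweenness centrality, and track the relative size of the largest connected component as a partial-recovery metric. The random graph and the 30 % disruption below are illustrative stand-ins for the Indian Railways data, not the paper's simulation setup.

```python
import random
import networkx as nx

random.seed(0)
# Toy lifeline network (hypothetical topology).
G = nx.erdos_renyi_graph(200, 0.03, seed=42)
baseline = max(len(c) for c in nx.connected_components(G))

failed = random.sample(sorted(G.nodes), 60)          # simulated disruption (30 % of nodes)
bc = nx.betweenness_centrality(G)                    # centrality on the intact network
order = sorted(failed, key=bc.get, reverse=True)     # recovery sequence: most central first

H = G.copy()
H.remove_nodes_from(failed)
recovery = []
for node in order:
    H.add_node(node)
    H.add_edges_from((node, v) for v in G.neighbors(node) if v in H)
    gcc = max(len(c) for c in nx.connected_components(H))
    recovery.append(gcc / baseline)                  # partial-recovery metric per step
print([round(r, 2) for r in recovery[:5]], "...", round(recovery[-1], 2))
```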

  13. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Directory of Open Access Journals (Sweden)

    Udit Bhatia

    Full Text Available The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  14. Iron overload in the liver diagnostic and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)

    2007-03-15

    Hereditary hemochromatosis is the most frequent form of iron overload. Since 1996, genetic tests have significantly facilitated the non-invasive diagnosis of the disease. However, there are many cases of negative genetic tests that require confirmation by hepatic iron quantification, which is traditionally performed by hepatic biopsy. Many studies have demonstrated the possibility of performing hepatic iron quantification with magnetic resonance. However, a consensus has not yet been reached regarding the technique or the reproducibility of the same calculation method on different machines. This article reviews the state of the art on this question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  15. Iron overload in the liver diagnostic and quantification

    International Nuclear Information System (INIS)

    Alustiza, Jose M.; Castiella, Agustin; Juan, Maria D. de; Emparanza, Jose I.; Artetxe, Jose; Uranga, Maite

    2007-01-01

    Hereditary hemochromatosis is the most frequent form of iron overload. Since 1996, genetic tests have significantly facilitated the non-invasive diagnosis of the disease. However, there are many cases of negative genetic tests that require confirmation by hepatic iron quantification, which is traditionally performed by hepatic biopsy. Many studies have demonstrated the possibility of performing hepatic iron quantification with magnetic resonance. However, a consensus has not yet been reached regarding the technique or the reproducibility of the same calculation method on different machines. This article reviews the state of the art on this question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  16. Implementing size-optimal discrete neural networks require analog circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. The paper ends with conclusions and several comments on the required precision.

  17. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ... Simultaneous Quantification of Seven Flavonoids in ...

  18. Key Technologies in the Context of Future Networks: Operational and Management Requirements

    Directory of Open Access Journals (Sweden)

    Lorena Isabel Barona López

    2016-12-01

    Full Text Available The concept of Future Networks is based on the premise that current infrastructures require enhanced control, service customization, self-organization and self-management capabilities to meet the new needs in a connected society, especially of mobile users. In order to provide a high-performance mobile system, three main fields must be improved: radio, network, and operation and management. In particular, operation and management capabilities are intended to enable business agility and operational sustainability, where the addition of new services does not imply an excessive increase in capital or operational expenditures. In this context, a set of key enabling technologies has emerged in order to aid in this field. Concepts such as Software Defined Networking (SDN), Network Function Virtualization (NFV) and Self-Organized Networks (SON) are pushing traditional systems towards the next 5G network generation. This paper presents an overview of the current status of these promising technologies and ongoing work to fulfill the operational and management requirements of mobile infrastructures. This work also details the use cases and the challenges, taking into account not only SDN, NFV, cloud computing and SON but also other paradigms.

  19. Nuclear Physics Science Network Requirements Workshop, May 6 and 7, 2008. Final Report

    International Nuclear Information System (INIS)

    Tierney, Ed. Brian L; Dart, Ed. Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In May 2008, ESnet and the Nuclear Physics (NP) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the NP Program Office. Most of the key DOE sites for NP related work will require significant increases in network bandwidth in the 5 year time frame. This includes roughly 40 Gbps for BNL, and 20 Gbps for NERSC. Total transatlantic requirements are on the order of 40 Gbps, and transpacific requirements are on the order of 30 Gbps. Other key sites are Vanderbilt University and MIT, which will need on the order of 20 Gbps bandwidth to support data transfers for the CMS Heavy Ion program. In addition to bandwidth requirements, the workshop emphasized several points in regard to science process and collaboration. One key point is the heavy reliance on Grid tools and infrastructure (both PKI and tools such as GridFTP) by the NP community. The reliance on Grid software is expected to increase in the future. Therefore, continued development and support of Grid software is very important to the NP science community. Another key finding is that scientific productivity is greatly enhanced by easy researcher-local access to instrument data. This is driving the creation of distributed repositories for instrument data at collaborating institutions, along with a corresponding increase in demand for network-based data transfers and the tools

  20. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to
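
    Since quantification rests on a standard curve and on PCR efficiency, the basic calculation can be sketched as below: fit Cq against log10 copy number, derive the amplification efficiency from the slope, and read off an unknown from its Cq. The dilution series and Cq values are hypothetical.

```python
import numpy as np

# Hypothetical standard curve from a serially diluted reference material.
log10_copies = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
cq           = np.array([17.1, 20.5, 23.9, 27.4, 30.8])

slope, intercept = np.polyfit(log10_copies, cq, 1)
efficiency = 10.0 ** (-1.0 / slope) - 1.0   # ~100 % efficiency corresponds to a slope of about -3.32
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

# Quantify an unknown sample from its measured Cq using the same curve.
cq_unknown = 25.2
copies = 10.0 ** ((cq_unknown - intercept) / slope)
print(f"estimated target copies: {copies:.0f}")
```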

  1. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography.

    Science.gov (United States)

    Loss, Leandro A; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2012-10-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides.
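
    Step (i) of the pipeline, local enhancement of filaments by Hessian filtering, can be illustrated in 2D with Gaussian-derivative filters and a per-pixel eigenvalue analysis. The sketch below is a simplified ridge response on a toy image, under assumed parameters; it is not the authors' 3D implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_response(img, sigma=2.0):
    """Simplified 2D Hessian-based enhancement of bright filaments."""
    # Second-order Gaussian derivatives along the two image axes.
    h_aa = gaussian_filter(img, sigma, order=(2, 0))
    h_bb = gaussian_filter(img, sigma, order=(0, 2))
    h_ab = gaussian_filter(img, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 Hessian at every pixel.
    root = np.sqrt(((h_aa - h_bb) / 2.0) ** 2 + h_ab ** 2)
    lam_small = (h_aa + h_bb) / 2.0 - root      # most negative across a bright ridge
    return np.where(lam_small < 0.0, -lam_small, 0.0)

# Toy image: a noisy diagonal filament.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
idx = np.arange(20, 110)
img[idx, idx] = 1.0
img += 0.1 * rng.standard_normal(img.shape)

resp = ridge_response(img)
print("response on the filament vs background:", resp[60, 60], resp[10, 100])
```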

  2. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...
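
    For context, a common non-GP baseline for this deconvolution problem is a truncated-SVD inversion of the discretised convolution with the arterial input function; the sketch below uses that simpler approach on synthetic curves and is not the Gaussian-process method of the paper. The curve shapes, noise level and truncation threshold are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                     # sampling interval [s]
t = np.arange(60) * dt

# Synthetic curves: gamma-variate-like AIF and exponential residue function.
aif = (t / 5.0) ** 3 * np.exp(-t / 5.0)
irf_true = np.exp(-t / 8.0)
tissue = dt * np.convolve(aif, irf_true)[: t.size]
tissue += 0.01 * rng.standard_normal(t.size)

# Lower-triangular discretised convolution matrix built from the AIF.
A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(t.size)]
                   for i in range(t.size)])

# Truncated-SVD inverse: discard small singular values to stabilise the deconvolution.
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 0.2 * s.max(), 1.0 / s, 0.0)
irf_est = Vt.T @ (s_inv * (U.T @ tissue))
print("peak of the estimated residue function:", irf_est.max())
```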

  3. Network Security Guideline

    Science.gov (United States)

    1993-06-01

    3.2.15.3 ISDN Services over the Telephone Network. Integrated Services Digital Network (ISDN) services are subject to the same restrictions as router... to be audited: [SYS$SYSTEM]SYS.EXE, LOGINOUT.EXE, STARTUP.COM, RIGHTSLIST.DAT; [SYS$LIBRARY]SECURESHR.EXE; [SYS$ROOT]SYSEXE.DIR, SYSLIB.DIR... quantification) of the encoded value; ASCII is normally used for asynchronous transmission; compare with digital. ASYNCHRONOUS - Data transmission that is

  4. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Contemporary Network Proteomics and Its Requirements

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon; Sng, Judy Chia Ghee

    2013-01-01

    The integration of networks with genomics (network genomics) is a familiar field. Conventional network analysis takes advantage of the larger coverage and relative stability of gene expression measurements. Network proteomics on the other hand has to develop further on two critical factors: (1) expanded data coverage and consistency, and (2) suitable reference network libraries, and data mining from them. Concerning (1) we discuss several contemporary themes that can improve data quality, which in turn will boost the outcome of downstream network analysis. For (2), we focus on network analysis developments, specifically, the need for context-specific networks and essential considerations for localized network analysis. PMID:24833333

  6. PCR amplification of repetitive sequences as a possible approach in relative species quantification

    DEFF Research Database (Denmark)

    Ballin, Nicolai Zederkopff; Vogensen, Finn Kvist; Karlsson, Anders H

    2012-01-01

    Both relative and absolute quantifications are possible in species quantification when single copy genomic DNA is used. However, amplification of single copy genomic DNA does not allow a limit of detection as low as one obtained from amplification of repetitive sequences. Amplification... of repetitive sequences is therefore frequently used in absolute quantification but problems occur in relative quantification as the number of repetitive sequences is unknown. A promising approach was developed where data from amplification of repetitive sequences were used in relative quantification of species... to relatively quantify the amount of chicken DNA in a binary mixture of chicken DNA and pig DNA. However, the designed PCR primers lack the specificity required for regulatory species control.

  7. Soil Pore Network Visualisation and Quantification using ImageJ

    DEFF Research Database (Denmark)

    Garbout, Amin; Pajor, Radoslaw; Otten, Wilfred

    Soil is one of the most complex materials on the earth, within which many biological, physical and chemical processes that support life and affect climate change take place. A much more detailed knowledge of the soil system is required to improve our ability to develop soil management... strategies to preserve this limited resource. Many of those processes occur at micro scales. For a long time our ability to study soils non-destructively at microscopic scales has been limited, but recent developments in the use of X-ray Computed Tomography have offered great opportunities to quantify the 3-D... geometry of soil pores. In this study we look at how networks that summarize the geometry of pores in soil are affected by soil structure. One of the objectives is to develop a robust and reproducible image analysis technique to produce quantitative knowledge on soil architecture from high resolution 3D...

  8. Group Centric Networking: Addressing Information Sharing Requirements at the Tactical Edge

    Science.gov (United States)

    2016-04-10

    Bow-Nan Cheng, Greg Kuperman, Patricia Deutsch, Logan... been a large push in the U.S. Department of Defense to move to an all Internet Protocol (IP) infrastructure, particularly on the tactical edge. IP and... lossy links, and scaling to large numbers of users. Unfortunately, these are the exact conditions military tactical edge networks must operate within

  9. 77 FR 27381 - Financial Crimes Enforcement Network: Customer Due Diligence Requirements for Financial...

    Science.gov (United States)

    2012-05-10

    ...-AB15 Financial Crimes Enforcement Network: Customer Due Diligence Requirements for Financial... concerning customer due diligence requirements for financial institutions. DATES: Written comments on the... customer due diligence requirements for financial institutions.\\1\\ FinCEN received several comments on the...

  10. Real-Time PCR Quantification of Chloroplast DNA Supports DNA Barcoding of Plant Species.

    Science.gov (United States)

    Kikkawa, Hitomi S; Tsuge, Kouichiro; Sugita, Ritsuko

    2016-03-01

    Species identification from extracted DNA is sometimes needed for botanical samples. DNA quantification is required for an accurate and effective examination. If a quantitative assay provides unreliable estimates, a higher quantity of DNA than the estimated amount may be used in additional analyses to avoid failure to analyze samples from which extracting DNA is difficult. Compared with conventional methods, real-time quantitative PCR (qPCR) requires a low amount of DNA and enables accurate quantification of dilute DNA solutions. The aim of this study was to develop a qPCR assay for quantification of chloroplast DNA from taxonomically diverse plant species. An absolute quantification method was developed using primers targeting the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL) gene with SYBR Green I-based qPCR. The calibration curve was generated using the PCR amplicon as the template. The assay was applied to DNA extracts from representatives of 13 plant families common in Japan. This demonstrates that qPCR analysis is an effective method for quantification of DNA from plant samples. The results of qPCR assist in the decision-making that will determine the success or failure of DNA analysis, indicating the possibility of optimization of the procedure for downstream reactions.

  11. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Full Text Available Abstract Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was

  12. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Science.gov (United States)

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary

  13. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community, regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated a good performance but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology in the case of noisy signals. The applicability of this new algorithm is scrutinized through experiments on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown here illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
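
    The flavour of such a constrained GA for peak fitting can be conveyed with a small numpy implementation: bounded genes, tournament selection, blend crossover and Gaussian mutation, with the negative residual as fitness. The single Lorentzian peak model and all GA settings below are illustrative simplifications of the Voigt-model problem, not the authors' adaptive fitness function.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 256)

def peak(p):
    amp, pos, width = p
    return amp * width ** 2 / ((x - pos) ** 2 + width ** 2)   # Lorentzian toy peak

signal = peak((1.3, 4.2, 0.5)) + 0.05 * rng.standard_normal(x.size)

def fitness(p):
    return -np.sum((signal - peak(p)) ** 2)    # higher is better (negative residual)

# Constraint box for (amplitude, position, width); genes stay clipped inside it.
lo = np.array([0.1, 0.0, 0.05])
hi = np.array([3.0, 10.0, 2.0])
pop = rng.uniform(lo, hi, size=(60, 3))

for _ in range(100):
    fit = np.array([fitness(p) for p in pop])
    # Binary tournament selection.
    pairs = rng.integers(0, len(pop), size=(len(pop), 2))
    parents = pop[np.where(fit[pairs[:, 0]] > fit[pairs[:, 1]], pairs[:, 0], pairs[:, 1])]
    # Blend crossover between paired parents, then Gaussian mutation.
    alpha = rng.uniform(size=(len(pop), 1))
    children = alpha * parents + (1.0 - alpha) * parents[::-1]
    children += 0.02 * (hi - lo) * rng.standard_normal(children.shape)
    pop = np.clip(children, lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated (amplitude, position, width):", np.round(best, 2))
```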

  14. ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks.

    Science.gov (United States)

    Kozák, Lajos R; van Graan, Louis André; Chaudhary, Umair J; Szabó, Ádám György; Lemieux, Louis

    2017-12-01

    Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of functional nature has been revealed; in particular these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting state and task-related state connectivity. These networks provide an opportunity for functionally derived description and interpretation of fMRI maps, which may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity e.g. in the case of seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICNs. More specifically, the new methodology was designed to describe fMRI maps in a function-oriented, objective and quantitative way using a set of 15 metrics conceived to quantify the degree of 'engagement' of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of the ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset of a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The proposed ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
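
    One of the simplest conceivable "engagement" metrics, the fraction of suprathreshold voxels of an activation map that fall inside each ICN mask, can be computed as below. This is only a schematic stand-in for the 15 metrics implemented in ICN_Atlas; the flattened volumes, network names and threshold are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 10_000

# Placeholder flattened volumes: a z-scored activation map and two boolean ICN masks.
activation = rng.standard_normal(n_vox)
icn_masks = {"network_A": rng.random(n_vox) > 0.90,
             "network_B": rng.random(n_vox) > 0.92}

active = activation > 3.1                      # suprathreshold voxels of the map
for name, mask in icn_masks.items():
    overlap = np.logical_and(active, mask).sum() / max(active.sum(), 1)
    print(f"{name}: {overlap:.1%} of suprathreshold voxels fall inside this ICN")
```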

  15. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    Science.gov (United States)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.
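    The Lagrangian (particle tracking) framework named in requirement (b) can be sketched on a toy network as follows; note that the simple flux-weighted transfer rule used here is exactly the kind of nodal transfer probability the authors identify as a source of first-order error, so it serves only to illustrate the mechanics. The network topology, fluxes and travel times are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy pore network: node -> list of (downstream neighbor, outgoing flux).
    # Values are illustrative, not taken from the paper's micromodels.
    out_flux = {
        0: [(1, 2.0), (2, 1.0)],
        1: [(3, 2.0)],
        2: [(3, 0.6), (4, 0.4)],
        3: [(4, 2.6)],
        4: [],          # outlet
    }
    throat_time = {(0, 1): 1.0, (0, 2): 2.0, (1, 3): 1.5,
                   (2, 3): 2.5, (2, 4): 3.0, (3, 4): 1.0}

    def track_particle(start=0):
        """Advect one particle pore to pore, transferring with flux-weighted
        probabilities (the simple rule; the paper argues more accurate
        throat-to-throat transfer is needed for predictive PNM)."""
        node, t = start, 0.0
        while out_flux[node]:
            nbrs, fluxes = zip(*out_flux[node])
            p = np.array(fluxes) / sum(fluxes)
            nxt = nbrs[rng.choice(len(nbrs), p=p)]
            t += throat_time[(node, nxt)]
            node = nxt
        return t

    arrival_times = [track_particle() for _ in range(10000)]
    print("mean breakthrough time:", np.mean(arrival_times))
    ```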

  16. Reliability of lifeline networks under seismic hazard

    International Nuclear Information System (INIS)

    Selcuk, A. Sevtap; Yuecemen, M. Semih

    1999-01-01

    Lifelines, such as pipelines, transportation, communication and power transmission systems, are networks which extend spatially over large geographical regions. The quantification of the reliability (survival probability) of a lifeline under seismic threat requires attention, as the proper functioning of these systems during or after a destructive earthquake is vital. In this study, a lifeline is idealized as an equivalent network whose element capacities are random and spatially correlated, and a comprehensive probabilistic model for the assessment of the reliability of lifelines under earthquake loads is developed. The seismic hazard that the network is exposed to is described by a probability distribution derived from past earthquake occurrence data. The seismic hazard analysis is based on the 'classical' seismic hazard analysis model with some modifications. An efficient algorithm developed by Yoo and Deo (Yoo YB, Deo N. A comparison of algorithms for terminal pair reliability. IEEE Transactions on Reliability 1988; 37: 210-215) is utilized for the evaluation of the network reliability. This algorithm eliminates the CPU time and memory capacity problems for large networks. A comprehensive computer program, called LIFEPACK, is coded in Fortran in order to carry out the numerical computations. Two detailed case studies are presented to show the implementation of the proposed model.

  17. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    Science.gov (United States)

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but share the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracy. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
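    The latent class assignment of ambiguous reads to transcripts via an EM algorithm can be illustrated with a generic sketch; the compatibility matrix and effective lengths below are invented, and the code does not reproduce Strawberry's splicing-graph machinery or bias correction.

    ```python
    import numpy as np

    # Toy compatibility matrix: reads x transcripts (1 if the read could have
    # originated from that transcript). Illustrative values only.
    compat = np.array([
        [1, 0],
        [1, 1],
        [1, 1],
        [0, 1],
        [1, 1],
    ], dtype=float)
    eff_len = np.array([1000.0, 600.0])   # assumed effective transcript lengths

    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)     # mixture weights over transcripts

    for _ in range(200):
        # E-step: fractional assignment of each read to each transcript.
        w = compat * theta
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate the mixture weights from expected counts.
        theta = w.sum(axis=0) / n_reads

    # Convert expected read fractions to length-normalized abundances.
    abundance = theta / eff_len
    abundance /= abundance.sum()
    print("estimated transcript proportions:", np.round(abundance, 3))
    ```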

  18. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of a VBS system is associated with a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of VBS systems; however, their focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems. The existing approaches are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide low-level implementation details for SIQ initiation or stakeholder metrics for quantification. Hence, in view of the existing SIQ problems, this research contributes a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and an application procedure, in contrast to the other methods. The proposed framework addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for VBS systems with less judgmental error.

  19. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Directory of Open Access Journals (Sweden)

    Muhammad Imran Babar

    Full Text Available Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of a VBS system is associated with a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of VBS systems; however, their focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems. The existing approaches are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide low-level implementation details for SIQ initiation or stakeholder metrics for quantification. Hence, in view of the existing SIQ problems, this research contributes a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and an application procedure, in contrast to the other methods. The proposed framework addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for VBS systems with less judgmental error.

  20. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of a VBS system is associated with a concrete set of valuable requirements, and valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of VBS systems; however, their focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for VBS systems. The existing approaches are time-consuming, complex and inconsistent, which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide low-level implementation details for SIQ initiation or stakeholder metrics for quantification. Hence, in view of the existing SIQ problems, this research contributes a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and an application procedure, in contrast to the other methods. The proposed framework addresses the issues of stakeholder quantification or prioritization, high time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for VBS systems with less judgmental error. PMID:25799490

  1. Improving mine-mill water network design by reducing water and energy requirements

    Energy Technology Data Exchange (ETDEWEB)

    Gunson, A.J.; Klein, B.; Veiga, M. [British Columbia Univ., Vancouver, BC (Canada). Norman B. Keevil Inst. of Mining Engineering

    2010-07-01

    Mining is an energy-intensive industry, and most processing mills use wet processes to separate minerals from ore. This paper discussed water reduction, reuse and recycling options for a mining and mill operation network. A mine water network design was then proposed in order to identify and reduce water and system energy requirements. This included (1) a description of site water balance, (2) a description of potential water sources, (3) a description of water consumers, (4) the construction of energy requirement matrices, and (5) the use of linear programming to reduce energy requirements. The design was used to determine a site water balance as well as to specify major water consumers during mining and mill processes. Potential water supply combinations, water metering technologies, and recycling options were evaluated in order to identify the most efficient energy and water use combinations. The method was used to highlight potential energy savings from the integration of heating and cooling systems with plant water systems. 43 refs., 4 tabs., 3 figs.
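    The linear-programming step of such a design can be sketched generically: choose how much water each consumer draws from each source so that demands are met, availabilities are respected and total system energy is minimized. The energy matrix, demands and supplies below are illustrative assumptions, not the site data of the study.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Energy requirement matrix (kWh per m3) for supplying each consumer
    # from each source; illustrative numbers only.
    # Sources: fresh water, reclaimed tailings water, pit dewatering.
    energy = np.array([
        [1.2, 0.4, 0.8],   # consumer 1 (grinding circuit)
        [1.5, 0.6, 0.9],   # consumer 2 (flotation)
    ])
    demand = np.array([500.0, 300.0])        # m3/h required by each consumer
    supply = np.array([400.0, 600.0, 300.0])  # m3/h available from each source

    n_c, n_s = energy.shape
    c = energy.flatten()                     # objective: total energy

    # Equality constraints: each consumer's demand is met exactly.
    A_eq = np.zeros((n_c, n_c * n_s))
    for i in range(n_c):
        A_eq[i, i * n_s:(i + 1) * n_s] = 1.0
    # Inequality constraints: each source's availability is not exceeded.
    A_ub = np.zeros((n_s, n_c * n_s))
    for j in range(n_s):
        A_ub[j, j::n_s] = 1.0

    res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand,
                  bounds=(0, None))
    print("allocation (m3/h):\n", res.x.reshape(n_c, n_s))
    print("total energy (kWh/h):", res.fun)
    ```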

  2. Quantification of whey in fluid milk using confocal Raman microscopy and artificial neural network.

    Science.gov (United States)

    Alves da Rocha, Roney; Paiva, Igor Moura; Anjos, Virgílio; Furtado, Marco Antônio Moreira; Bell, Maria José Valenzuela

    2015-06-01

    In this work, we assessed the use of confocal Raman microscopy and an artificial neural network as a practical method to assess and quantify adulteration of fluid milk by the addition of whey. Milk samples with added whey (from 0 to 100%) were prepared, simulating different levels of fraudulent adulteration. All analyses were carried out by direct inspection under the light microscope after depositing drops from each sample on a microscope slide and drying them at room temperature. No pre- or posttreatment (e.g., sample preparation or spectral correction) was required in the analyses. Quantitative determination of adulteration was performed with a feed-forward artificial neural network (ANN). Different ANN configurations were evaluated based on their coefficient of determination (R2) and root mean square error values, which were the criteria for selecting the best predictor model. In the selected model, we observed that data from both the training and validation subsets presented R2>99.99%, indicating that the combination of confocal Raman microscopy and ANN is a rapid, simple, and efficient method to quantify milk adulteration by whey. Because sample preparation and postprocessing of spectra were not required, the method has potential applications in health surveillance and food quality monitoring. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
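    A generic sketch of the quantification step, a feed-forward ANN regressing the whey percentage from spectra, is shown below using synthetic spectra and scikit-learn's MLPRegressor; the network configuration and data are stand-ins, not the confocal Raman measurements or the model selected in the study.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score, mean_squared_error

    rng = np.random.default_rng(2)

    # Synthetic "spectra": two overlapping bands whose relative intensity
    # varies with the whey percentage (illustrative stand-in for Raman data).
    wavenumbers = np.linspace(0.0, 1.0, 200)

    def spectrum(whey_pct):
        milk = np.exp(-((wavenumbers - 0.35) / 0.05) ** 2)
        whey = np.exp(-((wavenumbers - 0.60) / 0.05) ** 2)
        s = (1 - whey_pct / 100) * milk + (whey_pct / 100) * whey
        return s + rng.normal(0.0, 0.01, wavenumbers.size)

    y = rng.uniform(0, 100, 400)                 # % whey in each sample
    X = np.array([spectrum(p) for p in y])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
    ann.fit(X_tr, y_tr)
    pred = ann.predict(X_te)
    print("R2:", r2_score(y_te, pred))
    print("RMSE (% whey):", mean_squared_error(y_te, pred) ** 0.5)
    ```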

  3. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  4. Double-layer Tablets of Lornoxicam: Validation of Quantification ...

    African Journals Online (AJOL)

    Double-layer Tablets of Lornoxicam: Validation of Quantification Method, In vitro Dissolution and Kinetic Modelling. ... Satisfactory results were obtained, and all the tablet formulations met compendial requirements. The slowest drug release rate was obtained with tablet cores based on PVP K90 (1.21 mg%·h⁻¹).

  5. Quantification of trace-level DNA by real-time whole genome amplification.

    Science.gov (United States)

    Kang, Min-Jung; Yu, Hannah; Kim, Sook-Kyung; Park, Sang-Ryoul; Yang, Inchul

    2011-01-01

    Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of samples are available for analysis. PCR-based methods including real-time PCR are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for a sub-genomic amount of DNA. We suggest a real-time whole genome amplification method adopting the degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to human placental DNA, the amount of which had been accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), an accurate and stable quantification capability was obtained for DNA samples ranging from 80 fg to 8 ng. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9% with analytical precisions around 15% were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.
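    The arithmetic behind this kind of real-time quantification is the usual standard-curve inversion: fit the threshold cycle against log10 of the known DNA amount and invert the fit for unknowns. The calibration values below are invented for illustration.

    ```python
    import numpy as np

    # Standard curve: known DNA amounts (pg) and measured threshold cycles.
    # Values are illustrative, not the paper's DOP-PCR calibration data.
    amount_pg = np.array([8000.0, 800.0, 80.0, 8.0, 0.8, 0.08])
    ct = np.array([12.1, 15.5, 18.9, 22.3, 25.8, 29.2])

    # Linear fit of Ct against log10(amount): Ct = slope*log10(m) + intercept.
    slope, intercept = np.polyfit(np.log10(amount_pg), ct, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency

    def quantify(ct_unknown):
        """Invert the standard curve to estimate the DNA amount (pg)."""
        return 10 ** ((ct_unknown - intercept) / slope)

    print(f"slope {slope:.2f}, efficiency {efficiency:.1%}")
    print("unknown with Ct 20.0 ->", round(quantify(20.0), 2), "pg")
    ```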

  6. Corporate Data Network (CDN) data requirements task. Enterprise Model. Volume 1

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network

  7. Progress of the COST Action TU1402 on the Quantification of the Value of Structural Health Monitoring

    DEFF Research Database (Denmark)

    Thöns, Sebastian; Limongelli, Maria Pina; Ivankovic, Ana Mandic

    2017-01-01

    This paper summarizes the development of Value of Structural Health Monitoring (SHM) Information analyses and introduces the development, objectives and approaches of the COST Action TU1402 on this topic. SHM research and engineering has been focused on the extraction of loading, degradation...... for its quantification. This challenge can be met with Value of SHM Information analyses, facilitating that the SHM contribution to substantial benefits for life safety, economy and beyond may be quantified, demonstrated and utilized. However, Value of SHM Information analyses involve complex models...... encompassing the infrastructure and the SHM systems, their functionality and thus require the interaction of several research disciplines. For progressing on these points, a scientific networking and dissemination project, namely the COST Action TU1402, has been initiated....
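    The Value of Information idea at the core of the Action can be sketched with a minimal pre-posterior analysis: the expected cost of the best decision taken with the prior alone is compared with the expected cost when the decision follows an imperfect SHM indication. The damage states, costs and likelihoods below are illustrative assumptions.

    ```python
    import numpy as np

    # Two structural states and their prior probabilities (assumptions).
    p_state = np.array([0.9, 0.1])            # [intact, damaged]

    # Expected cost of each action in each state (assumptions):
    # rows = actions (do nothing, repair), columns = states.
    cost = np.array([[0.0, 100.0],
                     [10.0, 10.0]])

    # Expected cost of the optimal decision without monitoring.
    ec_without = (cost @ p_state).min()

    # Imperfect SHM system: P(indication | state), columns = states.
    likelihood = np.array([[0.95, 0.20],      # indication "no damage"
                           [0.05, 0.80]])     # indication "damage"

    # Pre-posterior analysis: expected cost when the decision is made
    # after observing each possible SHM indication.
    ec_with = 0.0
    for z in range(2):
        p_z = likelihood[z] @ p_state                 # P(indication z)
        posterior = likelihood[z] * p_state / p_z     # Bayes update
        ec_with += p_z * (cost @ posterior).min()

    print("Value of SHM information:", ec_without - ec_with)
    ```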

  8. Damage Localization and Quantification of Earthquake Excited RC-Frames

    DEFF Research Database (Denmark)

    Skjærbæk, P.S.; Nielsen, Søren R.K.; Kirkegaard, Poul Henning

    In the paper a recently proposed method for damage localization and quantification of RC-structures from response measurements is tested on experimental data. The method investigated requires at least one response measurement along the structure and the ground surface acceleration. Further, the t...

  9. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....

  10. Impact of Distributed Generation Grid Code Requirements on Islanding Detection in LV Networks

    Directory of Open Access Journals (Sweden)

    Fabio Bignucolo

    2017-01-01

    Full Text Available The recent growing diffusion of dispersed generation in low voltage (LV) distribution networks is entailing new rules to make local generators participate in network stability. Consequently, national and international grid codes, which define the connection rules for stability and safety of electrical power systems, have been updated, requiring distributed generators and electrical storage systems to supply stabilizing contributions. In this scenario, specific attention has to be paid to the uncontrolled islanding issue, since the currently required anti-islanding protection systems, based on relays locally measuring voltage and frequency, could no longer be suitable. In this paper, the effects of different LV generators' stabilizing functions on the interface protection performance are analysed. The study takes into account existing requirements, such as the generators' active power regulation (according to the measured frequency) and reactive power regulation (depending on the locally measured voltage). In addition, the paper focuses on other stabilizing features under discussion, derived from the medium voltage (MV) distribution network grid codes or proposed in the literature, such as fast voltage support (FVS) and inertia emulation. The stabilizing functions have been reproduced in the DIgSILENT PowerFactory 2016 software environment, making use of its native programming language. They are then tested both alone and together, aiming to obtain a comprehensive analysis of their impact on the anti-islanding protection effectiveness. Through dynamic simulations in several network scenarios, the paper demonstrates the detrimental impact that such stabilizing regulations may have on loss-of-main protection effectiveness, leading to an increased risk of unintentional islanding.
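    The frequency-dependent active power regulation mentioned above can be sketched as a simple over-frequency droop; the threshold and slope below are illustrative assumptions rather than the values of any specific grid code.

    ```python
    def active_power_setpoint(freq_hz, p_available_kw,
                              f_start=50.2, droop_per_hz=0.40):
        """Over-frequency curtailment: above f_start, reduce output by
        droop_per_hz (fraction of available power per Hz of deviation).
        Threshold and slope are illustrative assumptions."""
        if freq_hz <= f_start:
            return p_available_kw
        reduction = droop_per_hz * (freq_hz - f_start) * p_available_kw
        return max(p_available_kw - reduction, 0.0)

    # Example: a 10 kW generator measuring 50.7 Hz is curtailed to 8 kW.
    print(active_power_setpoint(50.7, 10.0))
    ```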

  11. Towards tributyltin quantification in natural water at the Environmental Quality Standard level required by the Water Framework Directive.

    Science.gov (United States)

    Alasonati, Enrica; Fettig, Ina; Richter, Janine; Philipp, Rosemarie; Milačič, Radmila; Sčančar, Janez; Zuliani, Tea; Tunç, Murat; Bilsel, Mine; Gören, Ahmet Ceyhan; Fisicaro, Paola

    2016-11-01

    The European Union (EU) has included tributyltin (TBT) and its compounds in the list of priority water pollutants. Quality standards demanded by the EU Water Framework Directive (WFD) require determination of TBT at such a low concentration level that chemical analysis is still difficult, and further research is needed to improve the sensitivity, accuracy and precision of existing methodologies. Within the framework of the joint research project "Traceable measurements for monitoring critical pollutants under the European Water Framework Directive" in the European Metrology Research Programme (EMRP), four metrological and designated institutes have developed a primary method to quantify TBT in natural water using liquid-liquid extraction (LLE) and species-specific isotope dilution mass spectrometry (SSIDMS). The procedure has been validated at the Environmental Quality Standard (EQS) level (0.2 ng L⁻¹ as cation) and at the WFD-required limit of quantification (LOQ) (0.06 ng L⁻¹ as cation). The LOQ of the methodology was 0.06 ng L⁻¹ and the average measurement uncertainty at the LOQ was 36%, which agreed with WFD requirements. The analytical difficulties of the method, namely the presence of TBT in blanks and the sources of measurement uncertainty, as well as the interlaboratory comparison results, are discussed in detail. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Real-time ligation chain reaction for DNA quantification and identification on the FO-SPR.

    Science.gov (United States)

    Knez, Karel; Spasic, Dragana; Delport, Filip; Lammertyn, Jeroen

    2015-05-15

    Different assays have been developed in recent years to meet point-of-care diagnostic test requirements for fast and sensitive quantification and identification of targets. In this paper, we developed a ligation chain reaction (LCR) assay on the Fiber Optic Surface Plasmon Resonance (FO-SPR) platform, which enabled simultaneous quantification and cycle-to-cycle identification of DNA during amplification. The newly developed assay incorporated the FO-SPR DNA melting assay previously developed by our group. This required the establishment of several assay parameters, including buffer ionic strength and thermal ramping speed, as these parameters influence both the ligation enzyme performance and the hybridization yield of the gold nanoparticles (Au NPs) on the FO-SPR sensor. Quantification and identification of DNA targets was achieved over a wide concentration range, with a calibration curve spanning 7 orders of magnitude and an LOD of 13.75 fM. Moreover, the FO-SPR LCR assay could discriminate single nucleotide polymorphisms (SNPs) without any post-reaction analysis, thus featuring all the essential requirements of POC tests. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Direct quantification of lipopeptide biosurfactants in biological samples via HPLC and UPLC-MS requires sample modification with an organic solvent.

    Science.gov (United States)

    Biniarz, Piotr; Łukaszewicz, Marcin

    2017-06-01

    The rapid and accurate quantification of biosurfactants in biological samples is challenging. In contrast to the orcinol method for rhamnolipids, no simple biochemical method is available for the rapid quantification of lipopeptides. Various liquid chromatography (LC) methods are promising tools for relatively fast and exact quantification of lipopeptides. Here, we report strategies for the quantification of the lipopeptides pseudofactin and surfactin in bacterial cultures using different high- (HPLC) and ultra-performance liquid chromatography (UPLC) systems. We tested three strategies for sample pretreatment prior to LC analysis. In direct analysis (DA), bacterial cultures were injected directly and analyzed via LC. As a modification, we diluted the samples with methanol and detected an increase in lipopeptide recovery in the presence of methanol. Therefore, we suggest this simple modification as a tool for increasing the accuracy of LC methods. We also tested freeze-drying followed by solvent extraction (FDSE) as an alternative for the analysis of "heavy" samples. In FDSE, the bacterial cultures were freeze-dried, and the resulting powder was extracted with different solvents. Then, the organic extracts were analyzed via LC. Here, we determined the influence of the extracting solvent on lipopeptide recovery. HPLC methods allowed us to quantify pseudofactin and surfactin with run times of 15 and 20 min per sample, respectively, whereas UPLC quantification was as fast as 4 and 5.5 min per sample, respectively. Our methods provide highly accurate measurements and high recovery levels for lipopeptides. At the same time, UPLC-MS provides the possibility to identify lipopeptides and their structural isoforms.

  14. Techniques for quantification of liver fat in risk stratification of diabetics

    International Nuclear Information System (INIS)

    Kuehn, J.P.; Spoerl, M.C.; Mahlke, C.; Hegenscheid, K.

    2015-01-01

    Fatty liver disease plays an important role in the development of type 2 diabetes. Accurate techniques for detection and quantification of liver fat are essential for clinical diagnostics. Chemical shift-encoded magnetic resonance imaging (MRI) is a simple approach to quantify liver fat content. Liver fat quantification using chemical shift-encoded MRI is influenced by several bias factors, such as T2* decay, T1 recovery and the multispectral complexity of fat. The confounder-corrected proton density fat fraction is a simple approach to quantify liver fat, with comparable results independent of the software and hardware used. The proton density fat fraction is an accurate biomarker for the assessment of liver fat. Accurate and reproducible quantification of liver fat using chemical shift-encoded MRI requires calculation of the proton density fat fraction. (orig.) [de]
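    Given fat and water signal estimates that have already been corrected for the confounders listed above, the proton density fat fraction itself is a simple ratio; a minimal sketch:

    ```python
    import numpy as np

    def proton_density_fat_fraction(fat_signal, water_signal):
        """PDFF = F / (F + W), expressed in percent. Assumes the fat and
        water signals have already been corrected for T2* decay, T1 bias
        and the multispectral fat model, as described above."""
        fat = np.asarray(fat_signal, dtype=float)
        water = np.asarray(water_signal, dtype=float)
        return 100.0 * fat / (fat + water)

    # Example voxel values (arbitrary units): a PDFF of 25%.
    print(proton_density_fat_fraction([50.0], [150.0]))
    ```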

  15. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    Science.gov (United States)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and to represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.
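    A minimal sketch of non-intrusive Polynomial Chaos Expansion for one standardized Gaussian input is given below; the cheap analytic "response" stands in for an expensive aeroelastic solver, and the sampling and polynomial degree are arbitrary choices, not those of the AVT-191 studies.

    ```python
    import math
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(3)

    def response(xi):
        # Stand-in for an expensive aeroelastic solver output (e.g. a flutter
        # metric) as a function of one standardized uncertain input.
        return 1.0 + 0.3 * xi + 0.05 * xi**2

    # Sample the standardized Gaussian input and evaluate the "solver".
    xi = rng.standard_normal(200)
    y = response(xi)

    # Least-squares fit of probabilists' Hermite polynomials up to degree 3.
    degree = 3
    Psi = hermevander(xi, degree)              # basis evaluated at the samples
    coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # Moments follow from the coefficients: for a standard normal input the
    # basis is orthogonal with E[He_n^2] = n!.
    norms = np.array([math.factorial(n) for n in range(degree + 1)])
    mean = coeffs[0]
    variance = float(np.sum(coeffs[1:] ** 2 * norms[1:]))
    print("PCE mean:", mean, " PCE variance:", variance)
    ```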

  16. The intermediate filament network protein, vimentin, is required for parvoviral infection

    Energy Technology Data Exchange (ETDEWEB)

    Fay, Nikta; Panté, Nelly, E-mail: pante@zoology.ubc.ca

    2013-09-15

    Intermediate filaments (IFs) have recently been shown to serve novel roles during infection by many viruses. Here we have begun to study the role of IFs during the early steps of infection by the parvovirus minute virus of mice (MVM). We found that during early infection with MVM, after endosomal escape, the vimentin IF network was considerably altered, yielding collapsed immunofluorescence staining near the nuclear periphery. Furthermore, we found that vimentin plays an important role in the life cycle of MVM. The number of cells, which successfully replicated MVM, was reduced in infected cells in which the vimentin network was genetically or pharmacologically modified; viral endocytosis, however, remained unaltered. Perinuclear accumulation of MVM-containing vesicles was reduced in cells lacking vimentin. Our data suggests that vimentin is required for the MVM life cycle, presenting possibly a dual role: (1) following MVM escape from endosomes and (2) during endosomal trafficking of MVM. - Highlights: • MVM infection changes the distribution of the vimentin network to perinuclear regions. • Disrupting the vimentin network with acrylamide decreases MVM replication. • MVM replication is significantly reduced in vimentin-null cells. • Distribution of MVM-containing vesicles is affected in MVM infected vimentin-null cells.

  17. The intermediate filament network protein, vimentin, is required for parvoviral infection

    International Nuclear Information System (INIS)

    Fay, Nikta; Panté, Nelly

    2013-01-01

    Intermediate filaments (IFs) have recently been shown to serve novel roles during infection by many viruses. Here we have begun to study the role of IFs during the early steps of infection by the parvovirus minute virus of mice (MVM). We found that during early infection with MVM, after endosomal escape, the vimentin IF network was considerably altered, yielding collapsed immunofluorescence staining near the nuclear periphery. Furthermore, we found that vimentin plays an important role in the life cycle of MVM. The number of cells, which successfully replicated MVM, was reduced in infected cells in which the vimentin network was genetically or pharmacologically modified; viral endocytosis, however, remained unaltered. Perinuclear accumulation of MVM-containing vesicles was reduced in cells lacking vimentin. Our data suggests that vimentin is required for the MVM life cycle, presenting possibly a dual role: (1) following MVM escape from endosomes and (2) during endosomal trafficking of MVM. - Highlights: • MVM infection changes the distribution of the vimentin network to perinuclear regions. • Disrupting the vimentin network with acrylamide decreases MVM replication. • MVM replication is significantly reduced in vimentin-null cells. • Distribution of MVM-containing vesicles is affected in MVM infected vimentin-null cells

  18. Advanced communication and network requirements in Europe

    DEFF Research Database (Denmark)

    Falch, Morten; Enemark, Rasmus

    The report addresses the diffusion of new tele-applications, focusing on their potential use and the potential tele-traffic generated as a consequence. The applications investigated are: teleworking, distance learning, research and university networks, applications aimed at SMEs, health networks, a trans-European public administration network, a city information highway, road-traffic management, air traffic control and electronic quotation.

  19. Real-time quantitative PCR for retrovirus-like particle quantification in CHO cell culture.

    Science.gov (United States)

    de Wit, C; Fautz, C; Xu, Y

    2000-09-01

    Chinese hamster ovary (CHO) cells have been widely used to manufacture recombinant proteins intended for human therapeutic uses. Retrovirus-like particles, which are apparently defective and non-infectious, have been detected in all CHO cells by electron microscopy (EM). To assure the viral safety of CHO cell-derived biologicals, quantification of retrovirus-like particles in the production cell culture and demonstration of sufficient elimination of such retrovirus-like particles by the downstream purification process are required for product market registration worldwide. EM, with a detection limit of 1×10⁶ particles/ml, is the standard retrovirus-like particle quantification method. The whole process, which requires a large amount of sample (3-6 litres), is labour intensive, time consuming, expensive, and subject to significant assay variability. In this paper, a novel real-time quantitative PCR assay (TaqMan assay) has been developed for the quantification of retrovirus-like particles. Each retrovirus particle contains two copies of the viral genomic particle RNA (pRNA) molecule. Therefore, quantification of retrovirus particles can be achieved by quantifying the pRNA copy number, i.e. every two copies of retroviral pRNA are equivalent to one retrovirus-like particle. The TaqMan assay takes advantage of the 5'→3' exonuclease activity of Taq DNA polymerase and utilizes the PRISM 7700 Sequence Detection System of PE Applied Biosystems (Foster City, CA, U.S.A.) for automated pRNA quantification through a dual-labelled fluorogenic probe. The TaqMan quantification technique is highly comparable to EM analysis. In addition, it offers significant advantages over EM analysis, such as a higher sensitivity of less than 600 particles/ml, greater accuracy and reliability, higher sample throughput, more flexibility and lower cost. Therefore, the TaqMan assay should be used as a substitute for EM analysis for retrovirus-like particle quantification in CHO cell
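    The conversion from measured pRNA copy number to particle concentration follows directly from the two-copies-per-particle relationship stated above; a minimal sketch:

    ```python
    def particles_per_ml(prna_copies_per_ml):
        """Each retrovirus-like particle carries two copies of genomic pRNA,
        so the particle concentration is half the measured pRNA copy number."""
        return prna_copies_per_ml / 2.0

    # Example: 2.4e6 pRNA copies/ml corresponds to 1.2e6 particles/ml.
    print(particles_per_ml(2.4e6))
    ```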

  20. Techniques of biomolecular quantification through AMS detection of radiocarbon

    International Nuclear Information System (INIS)

    Vogel, S.J.; Turteltaub, K.W.; Frantz, C.; Felton, J.S.; Gledhill, B.L.

    1992-01-01

    Accelerator mass spectrometry offers a large gain over scintillation counting in sensitivity for detecting radiocarbon in biomolecular tracing. Application of this sensitivity requires new considerations of procedures to extract or isolate the carbon fraction to be quantified, to inventory all carbon in the sample, to prepare graphite from the sample for use in the spectrometer, and to derive a meaningful quantification from the measured isotope ratio. These procedures need to be accomplished without contaminating the sample with radiocarbon, which may be ubiquitous in laboratories and on equipment previously used for higher-dose scintillation experiments. Disposable equipment, materials and surfaces are used to control these contaminations. Quantification of attomole amounts of labeled substances is possible through these techniques

  1. Protocol for Quantification of Defects in Natural Fibres for Composites

    DEFF Research Database (Denmark)

    Mortensen, Ulrich Andreas; Madsen, Bo

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for use in structural composite applications, where high-quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based

  2. A network model for characterizing brine channels in sea ice

    Science.gov (United States)

    Lieblappen, Ross M.; Kumar, Deip D.; Pauls, Scott D.; Obbard, Rachel W.

    2018-03-01

    The brine pore space in sea ice can form complex connected structures whose geometry is critical in the governance of important physical transport processes between the ocean, sea ice, and surface. Recent advances in three-dimensional imaging using X-ray micro-computed tomography have enabled the visualization and quantification of the brine network morphology and variability. Using imaging of first-year sea ice samples at in situ temperatures, we create a new mathematical network model to characterize the topology and connectivity of the brine channels. This model provides a statistical framework where we can characterize the pore networks via two parameters, depth and temperature, for use in dynamical sea ice models. Our approach advances the quantification of brine connectivity in sea ice, which can help investigations of bulk physical properties, such as fluid permeability, that are key in both global and regional sea ice models.
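    The graph representation underlying such a network model can be sketched with a general-purpose graph library: junctions become nodes, channel segments become edges carrying measured attributes, and topology statistics follow directly. The depths and radii below are invented rather than derived from micro-CT data.

    ```python
    import networkx as nx

    # Toy brine-channel network: nodes are junctions (with depth in mm),
    # edges are channel segments (with an effective radius in mm).
    G = nx.Graph()
    G.add_nodes_from([
        (0, {"depth": 0.0}), (1, {"depth": 1.5}), (2, {"depth": 1.6}),
        (3, {"depth": 3.0}), (4, {"depth": 3.2}), (5, {"depth": 4.5}),
    ])
    G.add_edges_from([
        (0, 1, {"radius": 0.10}), (0, 2, {"radius": 0.07}),
        (1, 3, {"radius": 0.05}), (2, 4, {"radius": 0.06}),
        (3, 5, {"radius": 0.04}), (4, 5, {"radius": 0.05}),
        (1, 2, {"radius": 0.03}),
    ])

    # Simple topology / connectivity descriptors.
    print("mean coordination number:",
          sum(dict(G.degree()).values()) / G.number_of_nodes())
    print("connected top-to-bottom:", nx.has_path(G, 0, 5))
    print("independent loops:",
          G.number_of_edges() - G.number_of_nodes()
          + nx.number_connected_components(G))
    ```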

  3. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
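    The IGAFC and ANCI definitions quoted above translate directly into code; the sketch below assumes the global average, seed and voxel time courses have already been extracted, and the impact threshold value is an arbitrary placeholder.

    ```python
    import numpy as np

    def igafc(global_signal, voxel_ts, seed_ts):
        """Impact of the Global Average on Functional Connectivity:
        corr(GAS, voxel time course) * corr(GAS, seed time course)."""
        r_voxel = np.corrcoef(global_signal, voxel_ts)[0, 1]
        r_seed = np.corrcoef(global_signal, seed_ts)[0, 1]
        return r_voxel * r_seed

    def anci(igafc_value, impact_threshold):
        """Artificial Negative Correlation Index: absolute difference between
        the IGAFC index and the impact threshold (threshold value assumed)."""
        return abs(igafc_value - impact_threshold)

    # Example with synthetic time courses.
    rng = np.random.default_rng(4)
    gas = rng.standard_normal(200)
    seed = 0.6 * gas + rng.standard_normal(200)
    voxel = 0.4 * gas + rng.standard_normal(200)
    val = igafc(gas, voxel, seed)
    print("IGAFC:", round(val, 3), " ANCI:", round(anci(val, 0.1), 3))
    ```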

  4. A study of the minimum number of slices required for quantification of pulmonary emphysema by computed tomography

    International Nuclear Information System (INIS)

    Hitsuda, Yutaka; Igishi, Tadashi; Kawasaki, Yuji

    2000-01-01

    We attempted to determine the minimum number of slices required for quantification of overall emphysema by computed tomography (CT). Forty-nine patients underwent CT scanning with a 15-mm slice interval, and 13 to 18 slices per patient were obtained. The percentage of low attenuation area (LAA%) per slice was measured with a method that we reported on previously, utilizing a CT program and NIH Image. The average LAA% values for 1, 2, 3, and 6 slices evenly spaced through the lungs [LAA% (1), LAA% (2), LAA% (3), and LAA% (6)] were compared with those for all slices [LAA% (All)]. The correlation coefficients of LAA% (1), LAA% (2), LAA% (3), and LAA% (6) with LAA% (All) were 0.961, 0.981, 0.993, and 0.997, respectively. Mean differences ±SD were -3.20±4.21%, -2.32±3.00%, -0.20±1.84%, and -0.16±1.26%, respectively. From these results, we concluded that overall emphysema can be quantified by using at least three slices: one each from the upper, middle, and lower lung. (author)
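    The comparison reported here amounts to averaging LAA% over evenly spaced slice subsets and relating the result to the all-slice average; the sketch below shows the computation on synthetic per-slice values, so its numbers will not match the study's.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def subset_average(laa_per_slice, n_subset):
        """Average LAA% over n_subset slices evenly spaced through the lungs."""
        idx = np.linspace(0, len(laa_per_slice) - 1, n_subset).round().astype(int)
        return float(np.mean(np.asarray(laa_per_slice)[idx]))

    # Synthetic cohort: each patient has 13-18 slices of LAA% values.
    patients = [rng.uniform(0, 40, rng.integers(13, 19)) for _ in range(49)]
    all_avg = np.array([np.mean(p) for p in patients])

    for n in (1, 2, 3, 6):
        sub = np.array([subset_average(p, n) for p in patients])
        r = np.corrcoef(sub, all_avg)[0, 1]
        diff = sub - all_avg
        print(f"{n} slice(s): r = {r:.3f}, mean diff = {diff.mean():+.2f} "
              f"± {diff.std(ddof=1):.2f} %")
    ```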

  5. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-01

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a

  6. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification-biasing phenomena. 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods and results (attenuation, scattering, partial volume effect, movement, non-stationary spatial resolution in SPECT, random coincidences in PET, standardisation in PET). 3 - Synthesis: accessible efficiency, know-how, precautions, beyond the activity measurement

  7. Towards requirements elicitation in service-oriented business networks using value and goal modelling

    NARCIS (Netherlands)

    Mantovaneli Pessoa, Rodrigo; van Sinderen, Marten J.; Quartel, Dick; Shishkov, Boris; Cordeiro, J.; Ranchordas, A.

    2009-01-01

    Due to the contemporary trends towards increased focus on core competences and outsourcing of non-core activities, enterprises are forming strategic alliances and building business networks. This often requires cross enterprise interoperability and integration of their information systems, leading

  8. Ionic network analysis of tectosilicates: the example of coesite at variable pressure.

    Science.gov (United States)

    Reifenberg, Melina; Thomas, Noel W

    2018-04-01

    The method of ionic network analysis [Thomas (2017). Acta Cryst. B73, 74-86] is extended to tectosilicates through the example of coesite, the high-pressure polymorph of SiO2. The structural refinements of Černok et al. [Z. Kristallogr. (2014), 229, 761-773] are taken as the starting point for applying the method. Its purpose is to predict the unit-cell parameters and atomic coordinates at (p-T-X) values in-between those of diffraction experiments. The essential development step for tectosilicates is to define a pseudocubic parameterization of the O4 cages of the SiO4 tetrahedra. The six parameters a_PC, b_PC, c_PC, α_PC, β_PC and γ_PC allow a full quantification of the tetrahedral structure, i.e. distortion and enclosed volume. Structural predictions for coesite require that two separate quasi-planar networks are defined, one for the silicon ions and the other for the O4 cage midpoints. A set of parametric curves is used to describe the evolution with pressure of these networks and the pseudocubic parameters. These are derived by fitting to the crystallographic data. Application of the method to monoclinic feldspars and to quartz and cristobalite is discussed. Further, a novel two-parameter quantification of the degree of tetrahedral distortion is described. At pressures in excess of ca 20.45 GPa it is not possible to find a self-consistent solution to the parametric curves for coesite, pointing to the likelihood of a phase transition.
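    The six pseudocubic parameters can be computed from the four oxygen positions of an O4 cage by constructing the parallelepiped whose alternate corners the oxygens occupy; the construction below is a generic geometric sketch, and its labelling convention is an assumption rather than necessarily that of the ionic network analysis method.

    ```python
    import numpy as np

    def pseudocubic_parameters(o1, o2, o3, o4):
        """Edge lengths (a, b, c) and angles (alpha, beta, gamma, in degrees)
        of the parallelepiped whose alternate corners are the four oxygen
        positions of an O4 cage. Generic construction; labelling is assumed."""
        o1, o2, o3, o4 = (np.asarray(o, dtype=float) for o in (o1, o2, o3, o4))
        e1 = (o2 + o3 - o1 - o4) / 2.0
        e2 = (o2 + o4 - o1 - o3) / 2.0
        e3 = (o3 + o4 - o1 - o2) / 2.0
        a, b, c = (np.linalg.norm(e) for e in (e1, e2, e3))

        def angle(u, v):
            return np.degrees(np.arccos(np.dot(u, v)
                                        / (np.linalg.norm(u) * np.linalg.norm(v))))

        alpha, beta, gamma = angle(e2, e3), angle(e1, e3), angle(e1, e2)
        return a, b, c, alpha, beta, gamma

    # Regular tetrahedron inscribed in a unit cube -> a perfect pseudocube.
    print(pseudocubic_parameters((0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)))
    ```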

  9. Quantification practices in the nuclear industry

    International Nuclear Information System (INIS)

    1986-01-01

    In this chapter the quantification of risk practices adopted by the nuclear industries in Germany, Britain and France are examined as representative of the practices adopted throughout Europe. From this examination a number of conclusions are drawn about the common features of the practices adopted. In making this survey, the views expressed in the report of the Task Force on Safety Goals/Objectives appointed by the Commission of the European Communities, are taken into account. For each country considered, the legal requirements for presentation of quantified risk assessment as part of the licensing procedure are examined, and the way in which the requirements have been developed for practical application are then examined. (author)

  10. Mapping and Quantification of Vascular Branching in Plants, Animals and Humans by VESGEN Software

    Science.gov (United States)

    Parsons-Wingerter, P. A.; Vickerman, M. B.; Keith, P. A.

    2010-01-01

    Humans face daunting challenges in the successful exploration and colonization of space, including adverse alterations in gravity and radiation. The Earth-determined biology of plants, animals and humans is significantly modified in such extraterrestrial environments. One physiological requirement shared by larger plants and animals with humans is a complex, highly branching vascular system that is dynamically responsive to cellular metabolism, immunological protection and specialized cellular/tissue function. VESsel GENeration (VESGEN) Analysis has been developed as a mature beta version, pre-release research software for mapping and quantification of the fractal-based complexity of vascular branching. Alterations in vascular branching pattern can provide informative read-outs of altered vascular regulation. Originally developed for biomedical applications in angiogenesis, VESGEN 2D has provided novel insights into the cytokine, transgenic and therapeutic regulation of angiogenesis, lymphangiogenesis and other microvascular remodeling phenomena. Vascular trees, networks and tree-network composites are mapped and quantified. Applications include disease progression from clinical ophthalmic images of the human retina; experimental regulation of vascular remodeling in the mouse retina; avian and mouse coronary vasculature; and other experimental models in vivo. We envision that altered branching in the leaves of plants studied on the ISS, such as Arabidopsis thaliana, can also be analyzed.

  11. Requirements of the integration of renewable energy into network charge regulation. Proposals for the further development of the network charge system. Final report

    International Nuclear Information System (INIS)

    Friedrichsen, Nele; Klobasa, Marian; Marwitz, Simon; Hilpert, Johannes; Sailer, Frank

    2016-01-01

    In this project we analyzed options to advance the network tariff system in support of the German energy transition. A power system with high shares of renewables requires more flexibility of supply and demand than the traditional system based on centralized, fossil power plants. Further, the power networks need to be adjusted and expanded. The transformation should aim at system efficiency, i.e. consider both generation and network development. Network tariffs allocate the network cost to network users. They should also provide incentives, e.g. to reduce peak load in periods of network congestion. Inappropriate network tariffs can hinder the provision of flexibility and thereby become a barrier to the system integration of renewables. Against this background, this report presents a systematic review of the German network tariff system and a discussion of several options to adapt it in order to support the energy transition. The following aspects are analyzed: An adjustment of the privileges for industrial users, to increase potential network benefits and reduce barriers to more market-oriented behaviour. The payments for avoided network charges to distributed generation, which no longer reflect the cost reality in distribution networks. Uniform transmission network tariffs as an option for a more appropriate allocation of the costs associated with the energy transition. Increased standing fees in low voltage networks as an option to increase the contribution of users with self-generation to network financing. Generator tariffs, to allocate a share of network costs to generators and provide incentives for network-oriented location choice and/or feed-in.

  12. Lung involvement quantification in chest radiographs; Quantificacao de comprometimento pulmonar em radiografias de torax

    Energy Technology Data Exchange (ETDEWEB)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M., E-mail: giacomini@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2014-12-15

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for the quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans that would be required. The purpose of this work is to develop a methodology for the quantification of lung damage caused by TB from chest radiographs. An algorithm for the computational processing of the examinations was developed in Matlab; it creates a 3D representation of the lungs with the compromised regions dilated inside. Quantification of the lung lesions was also performed for the same patients using CT scans. The measurements from the two methods were compared, resulting in a strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a 95% confidence interval. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the developed method, providing a better risk-benefit ratio for the patient and a better cost-benefit ratio for the institution. (author)
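    The Bland-Altman agreement analysis used to compare the two quantification methods can be sketched as follows; the paired radiograph and CT values are simulated with roughly the reported 13% average offset, purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Paired lung-damage quantifications (e.g. % of lung volume) from the
    # radiograph-based method and from CT; values are illustrative.
    ct = rng.uniform(5, 60, 30)
    radiograph = ct * rng.normal(1.13, 0.08, ct.size)   # ~13% average offset

    diff = radiograph - ct
    mean_diff = diff.mean()
    sd_diff = diff.std(ddof=1)
    loa_low, loa_high = mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

    within = np.mean((diff >= loa_low) & (diff <= loa_high))
    print(f"bias = {mean_diff:.2f}, 95% limits of agreement "
          f"[{loa_low:.2f}, {loa_high:.2f}], {within:.0%} of samples within limits")
    ```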

  13. Quantification of competitive value of documents

    Directory of Open Access Journals (Sweden)

    Pavel Šimek

    2009-01-01

    Full Text Available The majority of Internet users use the global network to search for information using full-text search engines such as Google, Yahoo!, or Seznam. Web site operators try, with the help of different optimization techniques, to reach the top places in the results of full-text search engines. This is where Search Engine Optimization and Search Engine Marketing become very important, because typical users usually follow links only on the first few pages of the full-text search engine results for certain keywords, and in catalogs they primarily use the links placed higher in the hierarchy of each category. The key to success is the application of optimization methods which deal with the issues of keywords, structure and quality of content, domain names, individual pages, and the quantity and reliability of backward links. The process is demanding, long-lasting and without a guaranteed outcome. Without advanced analytical tools, a web site operator cannot identify the contribution of the individual documents of which the entire web site consists. If web site operators want an overview of their documents and of the web site as a whole, it is appropriate to quantify these positions in a specific way, depending on specific keywords. This is the purpose of the quantification of the competitive value of documents, which in turn determines the global competitive value of a web site. The quantification of competitive values is performed on a specific full-text search engine, and the results for different full-text search engines can be, and often are, different. According to published reports of the ClickZ agency and Market Share, the search engine most widely used by English-speaking users, by number of searches, is Google, with a market share of more than 80%. The overall procedure for the quantification of competitive values is common to all engines; however, the initial step, the analysis of keywords, depends on the choice of the full-text search engine.

  14. DETECTION AND QUANTIFICATION OF COW FECAL POLLUTION WITH REAL-TIME PCR

    Science.gov (United States)

    Assessment of health risk and fecal bacteria loads associated with cow fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for enumeration of two recently described cow-specific g...

  15. Automated Quantification of Pneumothorax in CT

    Science.gov (United States)

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091

  16. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Science.gov (United States)

    Čeko, Marta; Gracely, John L; Fitzcharles, Mary-Ann; Seminowicz, David A; Schweinhardt, Petra; Bushnell, M Catherine

    2015-08-19

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or "negative" [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient. We studied the

  17. Islanded operation of distributed networks

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    This report summarises the findings of a study to investigate the regulatory, commercial and technical risks and benefits associated with the operation of distributed generation to power an islanded section of distributed network. A review of published literature was carried out, and UK generators were identified who could operate as part of an island network under the existing technical, regulatory, and safety framework. Agreement on case studies for consideration with distributed network operators (DNOs) is discussed as well as the quantification of the risks, benefits and costs of islanding, and the production of a case implementation plan for each case study. Technical issues associated with operating sections of network in islanded mode are described, and impacts of islanding on trading and settlement, and technical and commercial modelling are explored.

  18. Islanded operation of distributed networks

    International Nuclear Information System (INIS)

    2005-01-01

    This report summarises the findings of a study to investigate the regulatory, commercial and technical risks and benefits associated with the operation of distributed generation to power an islanded section of distributed network. A review of published literature was carried out, and UK generators were identified who could operate as part of an island network under the existing technical, regulatory, and safety framework. Agreement on case studies for consideration with distributed network operators (DNOs) is discussed as well as the quantification of the risks, benefits and costs of islanding, and the production of a case implementation plan for each case study. Technical issues associated with operating sections of network in islanded mode are described, and impacts of islanding on trading and settlement, and technical and commercial modelling are explored

  19. Network and system diagrams revisited: Satisfying CEA requirements for causality analysis

    International Nuclear Information System (INIS)

    Perdicoulis, Anastassios; Piper, Jake

    2008-01-01

    Published guidelines for Cumulative Effects Assessment (CEA) have called for the identification of cause-and-effect relationships, or causality, challenging researchers to identify methods that can possibly meet CEA's specific requirements. Together with an outline of these requirements from CEA key literature, the various definitions of cumulative effects point to the direction of a method for causality analysis that is visually-oriented and qualitative. This article consequently revisits network and system diagrams, resolves their reported shortcomings, and extends their capabilities with causal loop diagramming methodology. The application of the resulting composite causality analysis method to three Environmental Impact Assessment (EIA) case studies appears to satisfy the specific requirements of CEA regarding causality. Three 'moments' are envisaged for the use of the proposed method: during the scoping stage, during the assessment process, and during the stakeholder participation process

  20. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  1. Uncovering the underlying physical mechanisms of biological systems via quantification of landscape and flux

    International Nuclear Information System (INIS)

    Xu Li; Chu Xiakun; Yan Zhiqiang; Zheng Xiliang; Zhang Kun; Zhang Feng; Yan Han; Wu Wei; Wang Jin

    2016-01-01

    In this review, we explore the physical mechanisms of biological processes such as protein folding and recognition, ligand binding, and systems biology, including cell cycle, stem cell, cancer, evolution, ecology, and neural networks. Our approach is based on the landscape and flux theory for nonequilibrium dynamical systems. This theory provides a unifying principle and foundation for investigating the underlying mechanisms and physical quantification of biological systems. (topical review)

  2. Corporate Data Network (CDN). Data Requirements Task. Preliminary Strategic Data Plan. Volume 4

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network

  3. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    Science.gov (United States)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  4. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography

    Science.gov (United States)

    Venhuizen, Freerk G.; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2018-01-01

    We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies. PMID:29675301
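
    The Dice coefficient reported above is a standard overlap metric between an automatic segmentation and a reference annotation. As a minimal illustration (not the authors' pipeline), the NumPy sketch below computes such a score together with a fluid-volume estimate from two binary masks; the mask shapes and the voxel spacing are invented placeholders.

      import numpy as np

      def dice_coefficient(pred, ref):
          """Overlap between two binary segmentation masks (True = IRC voxel)."""
          pred = pred.astype(bool)
          ref = ref.astype(bool)
          intersection = np.logical_and(pred, ref).sum()
          denom = pred.sum() + ref.sum()
          return 2.0 * intersection / denom if denom > 0 else 1.0

      def irc_volume_mm3(mask, voxel_mm=(0.25, 0.012, 0.006)):
          """Fluid volume from a binary mask and an assumed OCT voxel spacing."""
          return mask.sum() * float(np.prod(voxel_mm))

      # Stand-ins for network output and manual grading (random masks)
      rng = np.random.default_rng(0)
      pred = rng.random((32, 64, 64)) > 0.7
      ref = rng.random((32, 64, 64)) > 0.7
      print(dice_coefficient(pred, ref), irc_volume_mm3(pred), "mm^3")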

  5. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. The machine learning algorithms were based on support vector machine classifiers with three different sets of features: (i) voxel intensities; (ii) principal components of image voxel intensities; (iii) striatal binding ratios from the putamen and caudate. The semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (i) minimum of age-matched controls; (ii) mean minus 1/1.5/2 standard deviations from age-matched controls; (iii) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (iv) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 for local data and between 0.95 and 0.97 for PPMI data. Classification
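
    As an illustration of the evaluation scheme described above, the sketch below runs a support vector classifier on SBR-like features under stratified, nested 10-fold cross-validation with scikit-learn; the synthetic feature matrix, labels and hyper-parameter grid are assumptions for demonstration, not the PPMI or clinical data.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

      rng = np.random.default_rng(42)
      X = rng.normal(size=(200, 4))                                 # stand-in for striatal binding ratios
      y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)    # stand-in labels

      inner = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=1)

      # Inner loop tunes the SVM hyper-parameters; outer loop estimates accuracy
      grid = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]},
                          cv=inner)
      scores = cross_val_score(grid, X, y, cv=outer)
      print("mean cross-validated accuracy:", scores.mean())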

  6. Mean precipitation estimation, rain gauge network evaluation and quantification of the hydrologic balance in the River Quito basin in Choco, state of Colombia

    International Nuclear Information System (INIS)

    Cordoba, Samir; Zea, Jorge A; Murillo, W

    2006-01-01

    In this work the calculation of the average precipitation in the Quito River basin, state of Choco, Colombia, is presented through diverse techniques, among which are those suggested by Thiessen and those based on isohyet analysis, in order to select the one most appropriate for quantifying the rainwater available to the basin. Also included is an estimation of the error affecting the measurement of the average precipitation in the studied zone, by means of the methodology proposed by Gandin (1970) and Kagan (WMO, 1966), which at the same time allows evaluation of the representativeness of each of the stations that make up the rain gauge network in the area. The study concludes with a calculation of the hydrologic balance for the Quito river basin based on the pilot procedure suggested in the UNESCO publication on the study of the South American hydrologic balance, from which the great contribution of rainfall to a greatly enhanced run-off may be appreciated

  7. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Science.gov (United States)

    Čeko, Marta; Gracely, John L.; Fitzcharles, Mary-Ann; Seminowicz, David A.; Schweinhardt, Petra

    2015-01-01

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or “negative” [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient

  8. Cytochrome c oxidase subunit 1-based human RNA quantification to enhance mRNA profiling in forensic biology

    Directory of Open Access Journals (Sweden)

    Dong Zhao

    2017-01-01

    Full Text Available RNA analysis offers many potential applications in forensic science, and molecular identification of body fluids by analysis of cell-specific RNA markers represents a new technique for use in forensic cases. However, because forensic materials are often admixed with nonhuman cellular components, human-specific RNA quantification is required for forensic RNA assays. In the present study, a quantification assay for human RNA was developed with respect to body fluid samples in forensic biology. The quantitative assay is based on real-time reverse transcription-polymerase chain reaction of the mitochondrial RNA cytochrome c oxidase subunit I and is capable of RNA quantification with high reproducibility and a wide dynamic range. The human RNA quantification improves the quality of mRNA profiling in the identification of the body fluids saliva and semen, because the quantification assay can exclude the influence of nonhuman components and reduce the adverse effects of degraded RNA fragments.

  9. Mixture quantification using PLS in plastic scintillation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bagan, H.; Tarancon, A.; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F., E-mail: jfgarcia@ub.ed [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2011-06-15

    This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares, PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), none had been made using PS, which has the great advantage of not producing mixed waste after the measurements are performed. With this objective, ternary mixtures of alpha and beta emitters ({sup 241}Am, {sup 137}Cs and {sup 90}Sr/{sup 90}Y) have been quantified. The procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of spectra obtained at different values of the Pulse Shape Analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without any pulse shape discrimination, allows quantification of the activities with relative errors of less than 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and it does not require detectors that include the pulse shape analysis parameter.
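
    A minimal sketch of the multivariate calibration idea, using scikit-learn's PLSRegression in place of the authors' chemometrics software: spectra are simulated as linear mixtures of three component spectra plus noise, and a PLS2 model returns the three activities at once. All component shapes, activities and noise levels are synthetic assumptions.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(1)
      channels = np.arange(256)

      def peak(center, width):
          """Assumed Gaussian-shaped component spectrum (arbitrary, not real data)."""
          return np.exp(-0.5 * ((channels - center) / width) ** 2)

      components = np.vstack([peak(60, 15), peak(120, 25), peak(180, 30)])

      # Training set: random activity mixtures and their summed, noisy spectra
      Y_train = rng.uniform(0, 100, size=(80, 3))      # activities of the three emitters
      X_train = Y_train @ components + rng.normal(0, 0.5, size=(80, 256))

      pls = PLSRegression(n_components=3)              # PLS2: several responses at once
      pls.fit(X_train, Y_train)

      # Quantify an unknown mixture spectrum
      y_true = np.array([[30.0, 55.0, 10.0]])
      x_unknown = y_true @ components + rng.normal(0, 0.5, size=(1, 256))
      print(pls.predict(x_unknown))                    # should be close to y_true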

  10. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    Science.gov (United States)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.
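
    The nonintrusive, black-box character of such an approach can be illustrated with a toy Monte Carlo propagation: uncertain inputs are sampled, the solver is called as an opaque function, and output statistics are computed from the ensemble. The placeholder solver, input distributions and sample size below are assumptions, not the isolator simulation itself.

      import numpy as np

      def flow_solver(mach, wall_temp, turb_const):
          """Placeholder for a black-box CFD run returning a scalar quantity of interest."""
          return 2.5 * mach - 0.001 * wall_temp + 4.0 * turb_const ** 2

      rng = np.random.default_rng(0)
      n = 500
      # Aleatoric inputs drawn from assumed distributions; an epistemic one from an interval
      mach = rng.normal(2.6, 0.05, n)
      wall_temp = rng.normal(600.0, 20.0, n)
      turb_const = rng.uniform(0.07, 0.11, n)          # e.g. an uncertain turbulence model constant

      qoi = np.array([flow_solver(m, t, c) for m, t, c in zip(mach, wall_temp, turb_const)])
      print("mean:", qoi.mean(), "std:", qoi.std(),
            "95% interval:", np.percentile(qoi, [2.5, 97.5]))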

  12. Data-independent MS/MS quantification of neuropeptides for determination of putative feeding-related neurohormones in microdialysate.

    Science.gov (United States)

    Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun

    2015-01-21

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, the precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility of identifying hundreds of compounds and quantifying them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. Using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome was performed, winnowing candidate NPs related to a behavior of interest in a functionally relevant manner and demonstrating the success of such a UPLC-MS(E) quantification method using the open-source software Skyline.

  13. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available Verb aspect, alternations and quantification In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that the members of a verb aspect pair are different lexical units with different (although related) meanings, different argument structure (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes that derive perfective verbs can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of the lexical quantification by means of verbal prefixes is the quantified verb phrase and that the scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  14. Detection and quantification of beef and pork materials in meat products by duplex droplet digital PCR

    OpenAIRE

    Cai, Yicun; He, Yuping; Lv, Rong; Chen, Hongchao; Wang, Qiang; Pan, Liangwen

    2017-01-01

    Meat products often consist of meat from multiple animal species, and inaccurate food product adulteration and mislabeling can negatively affect consumers. Therefore, a cost-effective and reliable method for identification and quantification of animal species in meat products is required. In this study, we developed a duplex droplet digital PCR (dddPCR) detection and quantification system to simultaneously identify and quantify the source of meat in samples containing a mixture of beef (Bos t...

  15. Klasifikasi Paket Jaringan Berbasis Analisis Statistik dan Neural Network

    Directory of Open Access Journals (Sweden)

    Harsono Harsono

    2018-01-01

    Full Text Available Distributed Denial-of-Service (DDoS) is a network attack technique whose intensity and volume have increased every year. DDoS attacks remain one of the major Internet threats worldwide and a central problem of cyber security. The research in this paper aims to establish a new approach to network packet classification, which can serve as a basis for developing a framework for Distributed Denial-of-Service (DDoS) attack detection systems. The proposed approach to network packet classification combines statistical data quantification methods with neural network methods. Based on the tests, the average classification accuracy of the neural network on network data packets is 92.99%.
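
    A minimal sketch of combining statistical quantification of traffic with a neural network classifier: each flow is summarised by simple statistics of packet size and inter-arrival time and fed to a small multilayer perceptron. The feature set, synthetic flows and network size are assumptions for illustration only.

      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(7)

      def flow_features(packet_sizes, inter_arrival):
          """Statistical quantification of one traffic flow."""
          return [packet_sizes.mean(), packet_sizes.std(),
                  inter_arrival.mean(), inter_arrival.std()]

      # Synthetic flows: label 1 = DDoS-like (small, rapid packets), 0 = benign
      X, y = [], []
      for _ in range(400):
          attack = rng.random() < 0.5
          sizes = rng.normal(90 if attack else 700, 30, size=50)
          gaps = rng.exponential(0.001 if attack else 0.05, size=50)
          X.append(flow_features(sizes, gaps))
          y.append(int(attack))

      X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0))
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))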

  16. A nuclear DNA-based species determination and DNA quantification assay for common poultry species.

    Science.gov (United States)

    Ng, J; Satkoski, J; Premasuthan, A; Kanthaswamy, S

    2014-12-01

    DNA testing for food authentication and quality control requires sensitive species-specific quantification of nuclear DNA from complex and unknown biological sources. We have developed a multiplex assay based on TaqMan® real-time quantitative PCR (qPCR) for species-specific detection and quantification of chicken (Gallus gallus), duck (Anas platyrhynchos), and turkey (Meleagris gallopavo) nuclear DNA. The multiplex assay is able to accurately detect very low quantities of species-specific DNA from single or multispecies sample mixtures; its minimum effective quantification range is 5 to 50 pg of starting DNA material. In addition to its use in food fraudulence cases, we have validated the assay using simulated forensic sample conditions to demonstrate its utility in forensic investigations. Despite treatment with potent inhibitors such as hematin and humic acid, and degradation of template DNA by DNase, the assay was still able to robustly detect and quantify DNA from each of the three poultry species in mixed samples. The efficient species determination and accurate DNA quantification will help reduce fraudulent food labeling and facilitate downstream DNA analysis for genetic identification and traceability.
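
    In TaqMan-style qPCR assays like this one, the amount of species-specific DNA is typically read off a standard curve relating the threshold cycle (Ct) to the logarithm of input DNA. The sketch below fits such a curve and inverts it for unknown samples; the Ct values and quantities are invented for illustration and are not taken from the assay described.

      import numpy as np

      # Assumed standard series: input DNA (pg) and measured threshold cycles
      std_dna_pg = np.array([5000.0, 500.0, 50.0, 5.0])
      std_ct = np.array([24.1, 27.5, 30.8, 34.2])

      # Linear fit of Ct against log10(quantity): Ct = slope * log10(q) + intercept
      slope, intercept = np.polyfit(np.log10(std_dna_pg), std_ct, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency from the slope

      def quantify(ct):
          """Invert the standard curve to estimate input DNA in pg."""
          return 10 ** ((ct - intercept) / slope)

      print(f"amplification efficiency ~ {efficiency:.1%}")
      for ct in (26.0, 31.5):
          print(ct, "->", round(quantify(ct), 1), "pg")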

  17. Quantification, challenges and outlook of PV integration in the power system: a review by the European PV Technology Platform

    DEFF Research Database (Denmark)

    Alet, Pierre-Jean; Baccaro, Federica; De Felice, Matteo

    2015-01-01

    Integration in the power system has become a limiting factor to the further development of photovoltaics. Proper quantification is needed to evaluate both issues and solutions; the share of annual electricity demand is widely used but we found that some of the metrics which are related to power...... rather than energy better reflect the impact on networks. Barriers to wider deployment of PV into power grids can be split between local technical issues (voltage levels, harmonics distortion, reverse power flows and transformer loading) and system-wide issues (intermittency, reduction of system...... resilience). Many of the technical solutions to these issues rely on the inverters as actuators (e.g., for control of active and reactive power) or as interfaces (e.g., for local storage). This role requires further technical standardisation and needs to be taken into account in the planning of power...

  18. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    Science.gov (United States)

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.

  19. Effects of humic acid on DNA quantification with Quantifiler® Human DNA Quantification kit and short tandem repeat amplification efficiency.

    Science.gov (United States)

    Seo, Seung Bum; Lee, Hye Young; Zhang, Ai Hua; Kim, Hye Yeon; Shin, Dong Hoon; Lee, Soong Deok

    2012-11-01

    Correct DNA quantification is essential for obtaining reliable STR typing results. Forensic DNA analysts often use commercial kits for DNA quantification; among them, real-time-based DNA quantification kits are most frequently used. Incorrect DNA quantification due to the presence of PCR inhibitors may affect experimental results. In this study, we examined the degree to which DNA quantification results are altered in DNA samples containing a PCR inhibitor when using a Quantifiler® Human DNA Quantification kit. For the experiments, we prepared approximately 0.25 ng/μl DNA samples containing various concentrations of humic acid (HA). The quantification results were 0.194-0.303 ng/μl at 0-1.6 ng/μl HA (final concentration in the Quantifiler reaction) and 0.003-0.168 ng/μl at 2.4-4.0 ng/μl HA. Most DNA quantities were undetermined when the HA concentration was higher than 4.8 ng/μl. The C(T) values of an internal PCR control (IPC) were 28.0-31.0, 36.5-37.1, and undetermined at 0-1.6, 2.4, and 3.2 ng/μl HA, respectively. These results indicate that underestimated DNA quantification results may be obtained for DNA samples with high IPC C(T) values. Thus, researchers should carefully interpret DNA quantification results. We additionally examined the effects of HA on STR amplification by using an Identifiler® kit and a MiniFiler™ kit. Based on the results of this study, a better understanding of the various effects of HA should help researchers recognize and handle samples containing HA.

  20. Preliminary Magnitude of Completeness Quantification of Improved BMKG Catalog (2008-2016) in Indonesian Region

    Science.gov (United States)

    Diantari, H. C.; Suryanto, W.; Anggraini, A.; Irnaka, T. M.; Susilanto, P.; Ngadmanto, D.

    2018-03-01

    We present a magnitude of completeness (Mc) quantification based on the improved BMKG earthquake catalog generated from the Ina-TEWS seismograph network. Quantifying Mc helps determine the lowest magnitude that can be recorded completely as a function of space and time. We use the BMKG improved earthquake catalog from 2008 to 2016, converted to moment magnitude (Mw) and declustered. The value of Mc is computed by determining the point at which the frequency-magnitude distribution (FMD) begins to deviate from the Gutenberg-Richter relation. In the next step, we calculate the temporal variation of Mc and the b-value annually using the maximum likelihood method. We found that the Mc value decreases over time and yields a varying b-value, indicating that the development of the seismograph network from 2008 to 2016 affected the value of Mc, although not significantly. We analyze the temporal variation of the Mc value and correlate it with the spatial distribution of seismographs in Indonesia. The spatial distribution of seismograph installations shows that the western part of Indonesia has denser seismograph coverage than the eastern region, whereas the eastern part of Indonesia has a higher level of seismicity than the western region. Based upon these results, additional seismograph installation in the eastern part of Indonesia should be taken into consideration.
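
    For reference, the maximum likelihood b-value used in this kind of analysis follows Aki's estimator, b = log10(e) / (mean(M) - (Mc - dM/2)), where dM is the magnitude bin width. The sketch below applies it to a synthetic Gutenberg-Richter catalog; the magnitudes and completeness level are assumptions, not BMKG data.

      import numpy as np

      def b_value_max_likelihood(magnitudes, mc, dm=0.1):
          """Aki (1965) maximum likelihood b-value for events at or above Mc."""
          m = np.asarray(magnitudes)
          m = m[m >= mc]
          return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

      # Synthetic catalog with true b ~ 1.0, drawn from the lower bin edge Mc - dM/2
      rng = np.random.default_rng(3)
      mags = 3.95 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=5000)
      mags = np.round(mags, 1)                         # bin magnitudes to 0.1 units

      print("estimated b-value:", round(b_value_max_likelihood(mags, mc=4.0), 2))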

  1. Parsing and Quantification of Raw Orbitrap Mass Spectrometer Data Using RawQuant.

    Science.gov (United States)

    Kovalchik, Kevin A; Moggridge, Sophie; Chen, David D Y; Morin, Gregg B; Hughes, Christopher S

    2018-06-01

    Effective analysis of protein samples by mass spectrometry (MS) requires careful selection and optimization of a range of experimental parameters. As the output from the primary detection device, the "raw" MS data file can be used to gauge the success of a given sample analysis. However, the closed-source nature of the standard raw MS file can complicate effective parsing of the data contained within. To ease and increase the range of analyses possible, the RawQuant tool was developed to enable parsing of raw MS files derived from Thermo Orbitrap instruments to yield meta and scan data in an openly readable text format. RawQuant can be commanded to export user-friendly files containing MS1, MS2, and MS3 metadata as well as matrices of quantification values based on isobaric tagging approaches. In this study, the utility of RawQuant is demonstrated in several scenarios: (1) reanalysis of shotgun proteomics data for the identification of the human proteome, (2) reanalysis of experiments utilizing isobaric tagging for whole-proteome quantification, and (3) analysis of a novel bacterial proteome and synthetic peptide mixture for assessing quantification accuracy when using isobaric tags. Together, these analyses successfully demonstrate RawQuant for the efficient parsing and quantification of data from raw Thermo Orbitrap MS files acquired in a range of common proteomics experiments. In addition, the individual analyses using RawQuant highlight parametric considerations in the different experimental sets and suggest targetable areas to improve depth of coverage in identification-focused studies and quantification accuracy when using isobaric tags.

  2. How women organize social networks different from men

    Science.gov (United States)

    Szell, Michael; Thurner, Stefan

    2013-01-01

    Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways. PMID:23393616

  3. How women organize social networks different from men.

    Science.gov (United States)

    Szell, Michael; Thurner, Stefan

    2013-01-01

    Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways.
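
    Network-level quantities of the kind reported here (clustering coefficients, reciprocity of directed links) are straightforward to compute with networkx; the toy directed graph and gender labels below are illustrative stand-ins for one layer of a multiplex network, not the game dataset.

      import networkx as nx

      # Toy directed "communication" layer; node attribute marks an assumed gender
      G = nx.DiGraph()
      G.add_edges_from([(1, 2), (2, 1), (1, 3), (3, 4), (4, 3), (2, 4), (4, 1)])
      gender = {1: "f", 2: "f", 3: "m", 4: "m"}
      nx.set_node_attributes(G, gender, "gender")

      # Clustering coefficient per node, computed on the undirected projection
      clustering = nx.clustering(G.to_undirected())
      mean_by_gender = {
          g: sum(c for n, c in clustering.items() if gender[n] == g) /
             sum(1 for n in G if gender[n] == g)
          for g in ("f", "m")
      }
      print("mean clustering by gender:", mean_by_gender)
      print("overall link reciprocity:", nx.reciprocity(G))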

  4. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium...... is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding...

  5. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon activation of luminescent luciferase. The photons measured with this method indicate the degree of molecular alteration or the cell number, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that yields a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources: three bacterial light-emitting sources containing different numbers of bacteria, and three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to the measurement time remained constant even when different light sources were applied. The quantification function for linear response could be applicable to a standard quantification process. The proposed method could be used for exact quantitative analysis with various light imaging equipment, since constant light-emitting sources show a linear response to measurement time

  6. Quantification of Discrete Oxide and Sulfur Layers on Sulfur-Passivated InAs by XPS

    National Research Council Canada - National Science Library

    Petrovykh, D. Y; Sullivan, J. M; Whitman, L. J

    2005-01-01

    .... The S-passivated InAs(001) surface can be modeled as a sulfur-indium-arsenic layer-cake structure, such that characterization requires quantification of both arsenic oxide and sulfur layers that are at most a few monolayers thick...

  7. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It assigns the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  8. Validation and quantification of uncertainty in coupled climate models using network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bracco, Annalisa [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-08-10

    We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify "areas", i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., the ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested. The method has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis data sets
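
    The two-layer idea (group grid cells into homogeneous areas, then link areas by the strength of their statistical relationship) can be sketched in a few lines: area-mean time series are correlated and an edge is kept when the absolute correlation exceeds a threshold. The series, number of areas and threshold below are placeholders, not the model output used in the project.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)
      n_areas, n_months = 6, 600

      # Placeholder area-averaged anomaly time series (e.g. SST per identified area)
      common = rng.normal(size=n_months)
      series = np.array([0.6 * common + rng.normal(size=n_months) if i < 3
                         else rng.normal(size=n_months) for i in range(n_areas)])

      corr = np.corrcoef(series)

      # Second layer: connect areas whose absolute correlation exceeds a threshold
      G = nx.Graph()
      G.add_nodes_from(range(n_areas))
      threshold = 0.3
      for i in range(n_areas):
          for j in range(i + 1, n_areas):
              if abs(corr[i, j]) > threshold:
                  G.add_edge(i, j, weight=round(float(corr[i, j]), 2))

      print("links:", list(G.edges(data="weight")))
      print("degree per area:", dict(G.degree()))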

  9. Quantified academic selves : The gamification of science through social networking services

    NARCIS (Netherlands)

    Hammarfelt, B.M.S.; Rijcke, de S.; Rushforth, A.D.

    2016-01-01

    Introduction. Our study critically engages with techniques of self-quantification in contemporary academia, by demonstrating how social networking services enact research and scholarly communication as a 'game'. Method. The empirical part of the study involves an analysis of two leading platforms:

  10. Rigid 3D-3D registration of TOF MRA integrating vessel segmentation for quantification of recurrence volumes after coiling cerebral aneurysm

    International Nuclear Information System (INIS)

    Saering, Dennis; Forkert, Nils Daniel; Fiehler, Jens; Ries, Thorsten

    2012-01-01

    A fast and reproducible quantification of the recurrence volume of coiled aneurysms is required to enable a more timely evaluation of new coils. This paper presents two registration schemes for the semi-automatic quantification of aneurysm recurrence volumes based on baseline and follow-up 3D MRA TOF datasets. The quantification of shape changes requires a previous definition of corresponding structures in both datasets. For this, two different rigid registration methods have been developed and evaluated. Besides a state-of-the-art rigid registration method, a second approach integrating vessel segmentations is presented. After registration, the aneurysm recurrence volume can be calculated based on the difference image. The computed volumes were compared to manually extracted volumes. An evaluation based on 20 TOF MRA datasets (baseline and follow-up) of ten patients showed that both registration schemes are generally capable of providing sufficient registration results. Regarding the quantification of aneurysm recurrence volumes, the results suggest that the second segmentation-based registration method yields better results, while a reduction of the computation and interaction time is achieved at the same time. The proposed registration scheme incorporating vessel segmentation enables an improved quantification of recurrence volumes of coiled aneurysms with reduced computation and interaction time. (orig.)
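
    After registration, the recurrence volume is essentially the set of voxels that belong to the aneurysm in the follow-up but not in the baseline, scaled by the voxel volume. The sketch below computes that from two already-registered binary masks; the masks and the 0.5 mm isotropic voxel size are assumptions, not the authors' segmentation pipeline.

      import numpy as np

      def recurrence_volume_ml(baseline_mask, followup_mask, voxel_mm=(0.5, 0.5, 0.5)):
          """Volume of voxels present in the follow-up but not in the baseline mask."""
          new_voxels = np.logical_and(followup_mask.astype(bool),
                                      np.logical_not(baseline_mask.astype(bool)))
          voxel_volume_mm3 = float(np.prod(voxel_mm))
          return new_voxels.sum() * voxel_volume_mm3 / 1000.0   # mm^3 -> ml

      # Toy example: a small block of "recurrent" voxels appears in the follow-up
      baseline = np.zeros((64, 64, 64), dtype=bool)
      followup = baseline.copy()
      followup[30:34, 30:34, 30:34] = True
      print(recurrence_volume_ml(baseline, followup), "ml")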

  11. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - quantification in SPECT, problems and correction methods: Attenuation, scattering, un-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  12. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against...... human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification...

  13. Quantification procedures in micro X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Kanngiesser, Birgit

    2003-01-01

    For quantification in micro X-ray fluorescence analysis, standard-free quantification procedures have become especially important. An introduction to the basic concepts of these quantification procedures is given, followed by a short survey of the procedures that are currently available and of the kinds of experimental situations and analytical problems they address. The last point is extended by a description of our own development of the fundamental parameter method, which makes the inclusion of non-parallel beam geometries possible. Finally, open problems of the quantification procedures are discussed

  14. Reliability modeling of safety-critical network communication in a digitalized nuclear power plant

    International Nuclear Information System (INIS)

    Lee, Sang Hun; Kim, Hee Eun; Son, Kwang Seop; Shin, Sung Min; Lee, Seung Jun; Kang, Hyun Gook

    2015-01-01

    The Engineered Safety Feature-Component Control System (ESF-CCS), which uses a network communication system for the transmission of safety-critical information from group controllers (GCs) to loop controllers (LCs), was recently developed. However, the ESF-CCS has not been applied to nuclear power plants (NPPs) because the network communication failure risk in the ESF-CCS has yet to be fully quantified. Therefore, this study was performed to identify the potential hazardous states for network communication between GCs and LCs and to develop quantification schemes for various network failure causes. To estimate the risk effects of network communication failures in the ESF-CCS, a fault-tree model of an ESF-CCS signal failure in the containment spray actuation signal condition was developed for the case study. Based on a specified range of periodic inspection periods for network modules and the baseline probability of software failure, a sensitivity study was conducted to analyze the risk effect of network failure between GCs and LCs on ESF-CCS signal failure. This study is expected to provide insight into the development of a fault-tree model for network failures in digital I&C systems and the quantification of the risk effects of network failures for safety-critical information transmission in NPPs. - Highlights: • Network reliability modeling framework for digital I&C system in NPP is proposed. • Hazardous states of network protocol between GC and LC in ESF-CCS are identified. • Fault-tree model of ESF-CCS signal failure in ESF actuation condition is developed. • Risk effect of network failure on ESF-CCS signal failure is analyzed.
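
    At its simplest, fault-tree quantification of the kind used in the case study reduces to combining basic-event probabilities through AND/OR gates under an independence assumption. The gate structure and probabilities below are illustrative placeholders, not the actual ESF-CCS model.

      def and_gate(*probs):
          """All inputs must fail (independent basic events)."""
          p = 1.0
          for q in probs:
              p *= q
          return p

      def or_gate(*probs):
          """At least one input fails: 1 minus the product of survival probabilities."""
          p = 1.0
          for q in probs:
              p *= (1.0 - q)
          return 1.0 - p

      # Illustrative basic events for one signal path (made-up numbers)
      p_gc_hw = 1e-5                 # group controller hardware failure
      p_lc_hw = 1e-5                 # loop controller hardware failure
      p_net_sw = 1e-4                # network communication failure between GC and LC
      p_net_redundant = and_gate(p_net_sw, p_net_sw)   # two redundant network channels

      p_signal_failure = or_gate(p_gc_hw, p_lc_hw, p_net_redundant)
      print(f"ESF actuation signal failure probability ~ {p_signal_failure:.2e}")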

  15. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the systems stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
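
    Global sensitivity analysis of this nonintrusive kind can be approximated by sampling the uncertain inputs, evaluating the model as a black box, and estimating first-order Sobol indices, here with a simple binning estimator Var(E[Y|X_i]) / Var(Y). The model and input ranges are toy assumptions standing in for the jet-in-crossflow simulation.

      import numpy as np

      def black_box(x):
          """Placeholder for an expensive simulation returning a scalar output."""
          return np.sin(x[:, 0]) + 0.7 * x[:, 1] ** 2 + 0.1 * x[:, 2]

      rng = np.random.default_rng(2)
      n, d = 20000, 3
      X = rng.uniform(-np.pi, np.pi, size=(n, d))      # assumed uniform input ranges
      Y = black_box(X)
      var_y = Y.var()

      def first_order_sobol(xi, y, bins=40):
          """Binning estimator of Var(E[Y | X_i]) / Var(Y)."""
          edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
          idx = np.clip(np.digitize(xi, edges[1:-1]), 0, bins - 1)
          cond_means = np.array([y[idx == b].mean() for b in range(bins)])
          counts = np.array([(idx == b).sum() for b in range(bins)])
          return np.average((cond_means - y.mean()) ** 2, weights=counts) / var_y

      for i in range(d):
          print(f"first-order index of input {i}: {first_order_sobol(X[:, i], Y):.2f}")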

  16. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  17. 47 CFR 27.16 - Network access requirements for Block C in the 746-757 and 776-787 MHz bands.

    Science.gov (United States)

    2010-10-01

    47 CFR 27.16 (Title 47, Telecommunication; Federal Communications Commission) sets out the network access requirements for Block C in the 746-757 and 776-787 MHz bands. Where network restrictions are based on industry-wide consensus standards, such restrictions would be presumed reasonable.

  18. Quantification of arbuscular mycorrhizal fungal DNA in roots: how important is material preservation?

    Science.gov (United States)

    Janoušková, Martina; Püschel, David; Hujslová, Martina; Slavíková, Renata; Jansa, Jan

    2015-04-01

    Monitoring populations of arbuscular mycorrhizal fungi (AMF) in roots is a pre-requisite for improving our understanding of AMF ecology and functioning of the symbiosis in natural conditions. Among other approaches, quantification of fungal DNA in plant tissues by quantitative real-time PCR is one of the advanced techniques with a great potential to process large numbers of samples and to deliver truly quantitative information. Its application potential would greatly increase if the samples could be preserved by drying, but little is currently known about the feasibility and reliability of fungal DNA quantification from dry plant material. We addressed this question by comparing quantification results based on dry root material to those obtained from deep-frozen roots of Medicago truncatula colonized with Rhizophagus sp. The fungal DNA was well conserved in the dry root samples with overall fungal DNA levels in the extracts comparable with those determined in extracts of frozen roots. There was, however, no correlation between the quantitative data sets obtained from the two types of material, and data from dry roots were more variable. Based on these results, we recommend dry material for qualitative screenings but advocate using frozen root materials if precise quantification of fungal DNA is required.

  19. Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD

    Directory of Open Access Journals (Sweden)

    Sabine Henders

    2012-01-01

    Full Text Available This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD) mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM), the Verified Carbon Standard (VCS), the Climate Action Reserve (CAR), the CarbonFix Standard (CFS), and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of the methods assessed address either primary or secondary leakage; the former mostly on a local or regional scale and the latter on a national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from the scientific literature.

  20. Surface Enhanced Raman Spectroscopy (SERS) methods for endpoint and real-time quantification of miRNA assays

    Science.gov (United States)

    Restaino, Stephen M.; White, Ian M.

    2017-03-01

    Surface Enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single and multianalyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS has a low financial and spatial footprint compared with common fluorescence-based systems. Despite these advantages, SERS has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development but, most often, assay protocols are redesigned around the use of SERS as a quantification method and ultimately complicate existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, because miRNA commonly exists at relatively low concentrations, amplification methods (e.g. PCR) are required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease-driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.

  1. Real-Time, Interactive Echocardiography Over High-Speed Networks: Feasibility and Functional Requirements

    Science.gov (United States)

    Bobinsky, Eric A.

    1998-01-01

    Real-time, Interactive Echocardiography Over High Speed Networks: Feasibility and Functional Requirements is an experiment in advanced telemedicine being conducted jointly by the NASA Lewis Research Center, the NASA Ames Research Center, and the Cleveland Clinic Foundation. In this project, a patient undergoes an echocardiographic examination in Cleveland while being diagnosed remotely by a cardiologist in California viewing a real-time display of echocardiographic video images transmitted over the broadband NASA Research and Education Network (NREN). The remote cardiologist interactively guides the sonographer administering the procedure through a two-way voice link between the two sites. Echocardiography is a noninvasive medical technique that applies ultrasound imaging to the heart, providing a "motion picture" of the heart in action. Normally, echocardiographic examinations are performed by a sonographer and cardiologist who are located in the same medical facility as the patient. The goal of telemedicine is to allow medical specialists to examine patients located elsewhere, typically in remote or medically underserved geographic areas. For example, a small, rural clinic might have access to an echocardiograph machine but not a cardiologist. By connecting this clinic to a major metropolitan medical facility through a communications network, a minimally trained technician would be able to carry out the procedure under the supervision and guidance of a qualified cardiologist.

  2. Labeling the pulmonary arterial tree in CT images for automatic quantification of pulmonary embolism

    NARCIS (Netherlands)

    Peters, R.J.M.; Marquering, H.A.; Dogan, H.; Hendriks, E.A.; De Roos, A.; Reiber, J.H.C.; Stoel, B.C.

    2007-01-01

    Contrast-enhanced CT Angiography has become an accepted diagnostic tool for detecting Pulmonary Embolism (PE). The CT obstruction index proposed by Qanadli, which is based on the number of obstructed arterial segments, enables the quantification of PE severity. Because the required manual

  3. Evolution of a residue laboratory network and the management tools for monitoring its performance.

    Science.gov (United States)

    Lins, E S; Conceição, E S; Mauricio, A De Q

    2012-01-01

    Since 2005 the National Residue & Contaminants Control Plan (NRCCP) in Brazil has been considerably enhanced, increasing the number of samples, substances and species monitored, and also the analytical detection capability. The Brazilian laboratory network was forced to improve its quality standards in order to comply with the NRCCP's own evolution. Many aspects such as the limits of quantification (LOQs), the quality management systems within the laboratories and appropriate method validation are in continuous improvement, generating new scenarios and demands. Thus, efficient management mechanisms for monitoring network performance and its adherence to the established goals and guidelines are required. Performance indicators associated with computerised information systems arise as a powerful tool to monitor the laboratories' activity, making use of different parameters to describe this activity on a day-to-day basis. One of these parameters is related to turnaround times, and this factor is highly affected by the way each laboratory organises its management system, as well as by the regulatory requirements. In this paper a global view is presented of the turnaround times related to the type of analysis, laboratory, number of samples per year, type of matrix, country region and period of the year, all these data being collected from a computerised system called SISRES. This information gives a solid background to management measures aiming at the improvement of the service offered by the laboratory network.

  4. Network Skewness Measures Resilience in Lake Ecosystems

    Science.gov (United States)

    Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.

    2017-12-01

    Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.

  5. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    Science.gov (United States)

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for the analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m⁻³, respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
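    The calibration step described above relies on standard addition: the instrument response is regressed against the spiked concentration, the slope serves as the calibration (slope) factor, and the x-intercept reflects the background level already present in the lab air. A minimal sketch of that regression is given below; the peak areas and concentrations are made-up numbers, not data from the study.

```python
import numpy as np

# Hypothetical standard-addition data for benzene in lab air (µg/m^3 added
# to a 20-mL headspace vial) vs. GC-MS peak area; values are illustrative.
added_conc = np.array([0.0, 10.0, 20.0, 40.0, 80.0])
peak_area  = np.array([1.5e4, 2.4e4, 3.3e4, 5.2e4, 8.9e4])

slope, intercept = np.polyfit(added_conc, peak_area, 1)

# Slope factor used for external calibration of field samples.
print(f"slope factor: {slope:.1f} area units per µg/m^3")

# Native (background) concentration in the lab air follows from the x-intercept.
native_conc = intercept / slope
print(f"estimated background concentration: {native_conc:.1f} µg/m^3")
```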

  6. Structure determination of electrodeposited zinc-nickel alloys: thermal stability and quantification using XRD and potentiodynamic dissolution

    International Nuclear Information System (INIS)

    Fedi, B.; Gigandet, M.P.; Hihn, J-Y; Mierzejewski, S.

    2016-01-01

    Highlights: • Quantification of zinc-nickel phases between 1.2% and 20%. • Coupling XRD to partial potentiodynamic dissolution. • Deconvolution of anodic stripping curves. • Phase quantification after annealing. - Abstract: Zinc-nickel coatings obtained by electrodeposition reveal the presence of metastable phases in various quantities, thus requiring their identification, a study of their thermal stability, and, finally, determination of their respective proportions. By combining XRD measurement with partial potentiodynamic dissolution, anodic peaks were indexed to allow their quantification. Quantification of electrodeposited zinc-nickel alloys approximately 10 μm thick was thus carried out on nickel contents between 1.2% and 20%, and exhibited good accuracy. This method was then extended to the same set of alloys after annealing (250 °C, 2 h), thus bringing the structural organization closer to its thermodynamic equilibrium. The result obtained ensures better understanding of the crystallization of metastable phases and of phase proportion evolution in a bi-phasic zinc-nickel coating. Finally, the presence of a monophase γ and its thermal stability in the 12% to 15% range provides important information for coating anti-corrosion behavior.

  7. Comparison of viable plate count, turbidity measurement and real-time PCR for quantification of Porphyromonas gingivalis.

    Science.gov (United States)

    Clais, S; Boulet, G; Van Kerckhoven, M; Lanckacker, E; Delputte, P; Maes, L; Cos, P

    2015-01-01

    The viable plate count (VPC) is considered as the reference method for bacterial enumeration in periodontal microbiology but shows some important limitations for anaerobic bacteria. As anaerobes such as Porphyromonas gingivalis are difficult to culture, VPC becomes time-consuming and less sensitive. Hence, efficient normalization of experimental data to bacterial cell count requires alternative rapid and reliable quantification methods. This study compared the performance of VPC with that of turbidity measurement and real-time PCR (qPCR) in an experimental context using highly concentrated bacterial suspensions. Our TaqMan-based qPCR assay for P. gingivalis 16S rRNA proved to be sensitive and specific. Turbidity measurements offer a fast method to assess P. gingivalis growth, but suffer from high variability and a limited dynamic range. VPC was very time-consuming and less repeatable than qPCR. Our study concludes that qPCR provides the most rapid and precise approach for P. gingivalis quantification. Although our data were gathered in a specific research context, we believe that our conclusions on the inferior performance of VPC and turbidity measurements in comparison to qPCR can be extended to other research and clinical settings and even to other difficult-to-culture micro-organisms. Various clinical and research settings require fast and reliable quantification of bacterial suspensions. The viable plate count method (VPC) is generally seen as 'the gold standard' for bacterial enumeration. However, VPC-based quantification of anaerobes such as Porphyromonas gingivalis is time-consuming due to their stringent growth requirements and shows poor repeatability. Comparison of VPC, turbidity measurement and TaqMan-based qPCR demonstrated that qPCR possesses important advantages regarding speed, accuracy and repeatability. © 2014 The Society for Applied Microbiology.

  8. Hepatic Iron Quantification on 3 Tesla (3 T) Magnetic Resonance (MR): Technical Challenges and Solutions

    Directory of Open Access Journals (Sweden)

    Muhammad Anwar

    2013-01-01

    Full Text Available MR has become a reliable and noninvasive method of hepatic iron quantification. Currently, most of the hepatic iron quantification is performed on 1.5 T MR, and the biopsy measurements have been paired with R2 and R2* values for 1.5 T MR. As the use of 3 T MR scanners is steadily increasing in clinical practice, it has become important to evaluate the practicality of calculating iron burden at 3 T MR. Hepatic iron quantification on 3 T MR requires a better understanding of the process and more stringent technical considerations. The purpose of this work is to focus on the technical challenges in establishing a relationship between T2* values at 1.5 T MR and 3 T MR for hepatic iron concentration (HIC) and to develop an appropriately optimized MR protocol for the evaluation of T2* values in the liver at 3 T magnetic field strength. We studied 22 sickle cell patients using a multiecho fast gradient-echo sequence (MFGRE) on 3 T MR and compared the results with serum ferritin and liver biopsy results. Our study showed that the quantification of hepatic iron on 3 T MRI in sickle cell disease patients correlates well with clinical blood test results and biopsy results. 3 T MR liver iron quantification based on MFGRE can be used for hepatic iron quantification in transfused patients.

  9. Repeatability of Bolus Kinetics Ultrasound Perfusion Imaging for the Quantification of Cerebral Blood Flow

    NARCIS (Netherlands)

    Vinke, Elisabeth J.; Eyding, Jens; de Korte, Chris L.; Slump, Cornelis H.; van der Hoeven, Johannes G.; Hoedemaekers, Cornelia W.E.

    2017-01-01

    Ultrasound perfusion imaging (UPI) can be used for the quantification of cerebral perfusion. In a neuro-intensive care setting, repeated measurements are required to evaluate changes in cerebral perfusion and monitor therapy. The aim of this study was to determine the repeatability of UPI in

  10. Suppression of anomalous synchronization and nonstationary behavior of neural network under small-world topology

    Science.gov (United States)

    Boaretto, B. R. R.; Budzinski, R. C.; Prado, T. L.; Kurths, J.; Lopes, S. R.

    2018-05-01

    It is known that neural networks under small-world topology can present anomalous synchronization and nonstationary behavior for weak coupling regimes. Here, we propose methods to suppress the anomalous synchronization and also to diminish the nonstationary behavior occurring in a weakly coupled neural network under small-world topology. We consider a network of 2000 thermally sensitive identical neurons, based on the Hodgkin-Huxley model, in a small-world topology with the probability of adding a nonlocal connection equal to p = 0.001. Based on experimental protocols to suppress anomalous synchronization, as well as nonstationary behavior of the neural network dynamics, we make use of (i) an external stimulus (pulsed current); (ii) changes in biological parameters (neuron membrane conductance); and (iii) body temperature changes. Quantification analysis to evaluate phase synchronization makes use of Kuramoto's order parameter, while recurrence quantification analysis, particularly the determinism, computed over the easily accessible mean field of the network, the local field potential (LFP), is used to evaluate nonstationary states. We show that the proposed methods can control the anomalous synchronization and nonstationarity occurring for weak coupling parameters without any effect on the individual neuron dynamics, nor on the expected asymptotic synchronized states occurring for large values of the coupling parameter.
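    Kuramoto's order parameter, used here to quantify phase synchronization, is simply the magnitude of the mean phase vector, r = |(1/N) Σ exp(iθ_k)|. The short sketch below computes it for simulated phases; the neuron phases are random placeholders rather than output of the Hodgkin-Huxley network.

```python
import numpy as np

def kuramoto_order_parameter(phases):
    """Phase synchronization measure r in [0, 1] for one time snapshot.

    phases: array of shape (n_neurons,) with each neuron's phase in radians.
    """
    return np.abs(np.mean(np.exp(1j * phases)))

# Illustrative use: phases of 2000 hypothetical neurons over time.
rng = np.random.default_rng(0)
n_neurons, n_steps = 2000, 500
# Weakly coupled case sketched as fully incoherent, uniformly random phases.
phases_t = rng.uniform(0, 2 * np.pi, (n_steps, n_neurons))

r_t = np.array([kuramoto_order_parameter(p) for p in phases_t])
print(f"mean order parameter: {r_t.mean():.3f}")  # near 0 for incoherent phases
```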

  11. NASA's Proposed Requirements for the Global Aeronautical Network and a Summary of Responses

    Science.gov (United States)

    Ivancic, William D.

    2005-01-01

    In October 2003, NASA embarked on the ACAST project (Advanced CNS Architectures and System Technologies) to perform research and development on selected communications, navigation, and surveillance (CNS) technologies to enhance the performance of the National Airspace System (NAS). The Networking Research Group of NASA's ACAST project, in order to ensure global interoperability and deployment, formulated its own salient list of requirements. Many of these are not necessarily of concern to the FAA, but are a concern to those who have to deploy, operate, and pay for these systems. These requirements were submitted to the world's industries, governments, and academic institutions for comments. The results of that request for comments are summarized in this paper.

  12. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Science.gov (United States)

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.

  13. Quantification of phosphorus in single cells using synchrotron X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Núñez-Milland, Daliángelis R. [Department of Chemistry and Biochemistry, University of South Carolina, Columbia, SC 29208 (United States); Baines, Stephen B. [Department of Ecology and Evolution, Stony Brook University, Stony Brook, NY 11755 (United States); Vogt, Stefan [Experimental Facilities Division, Advanced Photon Source, Argonne National Laboratory, Argonne, IL (United States); Twining, Benjamin S., E-mail: btwining@bigelow.org [Department of Chemistry and Biochemistry, University of South Carolina, Columbia, SC 29208 (United States)

    2010-07-01

    Phosphorus abundance was quantified in individual phytoplankton cells by synchrotron X-ray fluorescence and compared with bulk spectrophotometric measurements to confirm accuracy of quantification. Figures of merit for P quantification on three different types of transmission electron microscopy grids are compared to assess possible interferences. Phosphorus is required for numerous cellular compounds and as a result can serve as a useful proxy for total cell biomass in studies of cell elemental composition. Single-cell analysis by synchrotron X-ray fluorescence (SXRF) enables quantitative and qualitative analyses of cell elemental composition with high elemental sensitivity. Element standards are required to convert measured X-ray fluorescence intensities into element concentrations, but few appropriate standards are available, particularly for the biologically important element P. Empirical P conversion factors derived from other elements contained in certified thin-film standards were used to quantify P in the model diatom Thalassiosira pseudonana, and the measured cell quotas were compared with those measured in bulk by spectrophotometry. The mean cellular P quotas quantified with SXRF for cells on Au, Ni and nylon grids using this approach were not significantly different from each other or from those measured spectrophotometrically. Inter-cell variability typical of cell populations was observed. Additionally, the grid substrates were compared for their suitability to P quantification based on the potential for spectral interferences with P. Nylon grids were found to have the lowest background concentrations and limits of detection for P, while background concentrations in Ni and Au grids were 1.8- and 6.3-fold higher. The advantages and disadvantages of each grid type for elemental analysis of individual phytoplankton cells are discussed.

  14. Spatial gene expression quantification: a tool for analysis of in situ hybridizations in sea anemone Nematostella vectensis

    Directory of Open Access Journals (Sweden)

    Botman Daniel

    2012-10-01

    Full Text Available Background: Spatial gene expression quantification is required for modeling gene regulation in developing organisms. The fruit fly Drosophila melanogaster is the model system most widely applied for spatial gene expression analysis due to its unique embryonic properties: the shape does not change significantly during its early cleavage cycles and most genes are differentially expressed along a straight axis. This system of development is quite exceptional in the animal kingdom. In the sea anemone Nematostella vectensis the embryo changes its shape during early development; there are cell divisions and cell movement, like in most other metazoans. Nematostella is an attractive case study for spatial gene expression since its transparent body wall makes it accessible to various imaging techniques. Findings: Our new quantification method produces standardized gene expression profiles from raw or annotated Nematostella in situ hybridizations by measuring the expression intensity along its cell layer. The procedure is based on digital morphologies derived from high-resolution fluorescence pictures. Additionally, complete descriptions of nonsymmetric expression patterns have been constructed by transforming the gene expression images into a three-dimensional representation. Conclusions: We created a standard format for gene expression data, which enables quantitative analysis of in situ hybridizations from embryos with various shapes in different developmental stages. The obtained expression profiles are suitable as input for optimization of gene regulatory network models, and for correlation analysis of genes from dissimilar Nematostella morphologies. This approach is potentially applicable to many other metazoan model organisms and may also be suitable for processing data from three-dimensional imaging techniques.

  15. Quantified Academic Selves: The Gamification of Research through Social Networking Services

    Science.gov (United States)

    Hammarfelt, Björn; de Rijcke, Sarah; Rushforth, Alexander D.

    2016-01-01

    Introduction: Our study critically engages with techniques of self-quantification in contemporary academia, by demonstrating how social networking services enact research and scholarly communication as a "game". Method: The empirical part of the study involves an analysis of two leading platforms: Impactstory and ResearchGate. Observed…

  16. A "Toy" Model for Operational Risk Quantification using Credibility Theory

    OpenAIRE

    Hans Bühlmann; Pavel V. Shevchenko; Mario V. Wüthrich

    2009-01-01

    To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of the internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is combining these data sources appropriately. In this paper we focus on quantification of the low frequency high impact losses exceeding some high thr...

  17. Metal Stable Isotope Tagging: Renaissance of Radioimmunoassay for Multiplex and Absolute Quantification of Biomolecules.

    Science.gov (United States)

    Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong

    2016-05-17

    is the development and application of the mass cytometer, which fully exploited the multiplexing potential of metal stable isotope tagging. It realized the simultaneous detection of dozens of parameters in single cells, accurate immunophenotyping in cell populations, through modeling of the intracellular signaling network, and unambiguous discrimination of the function and connections of cell subsets. Metal stable isotope tagging has great potential applications in hematopoiesis, immunology, stem cell, cancer, and drug screening related research and has opened a post-fluorescence era of cytometry. Herein, we review the development of biomolecule quantification using metal stable isotope tagging. In particular, the power of multiplex and absolute quantification is demonstrated. We address the advantages, applicable situations, and limitations of metal stable isotope tagging strategies and propose suggestions for future developments. The transfer of enzymatic or fluorescent tagging to metal stable isotope tagging may occur in many aspects of biological and clinical practice in the near future, just as the revolution from radioactive isotope tagging to fluorescent tagging happened in the past.

  18. Direct quantification of creatinine in human urine by using isotope dilution extractive electrospray ionization tandem mass spectrometry

    International Nuclear Information System (INIS)

    Li Xue; Fang Xiaowei; Yu Zhiqiang; Sheng Guoying; Wu Minghong; Fu Jiamo; Chen Huanwen

    2012-01-01

    Highlights: ► High throughput analysis of urinary creatinine is achieved by using ID-EESI–MS/MS. ► Urine sample is directly analyzed and no sample pre-treatment is required. ► Accurate quantification is accomplished with isotope dilution technique. - Abstract: Urinary creatinine (CRE) is an important biomarker of renal function. Fast and accurate quantification of CRE in human urine is required by clinical research. By using isotope dilution extractive electrospray ionization tandem mass spectrometry (EESI–MS/MS) a high throughput method for direct and accurate quantification of urinary CRE was developed in this study. Under optimized conditions, the method detection limit was lower than 50 μg L⁻¹. Over the concentration range investigated (0.05–10 mg L⁻¹), the calibration curve was obtained with satisfactory linearity (R² = 0.9861), and the relative standard deviation (RSD) values for CRE and isotope-labeled CRE (CRE-d3) were 7.1–11.8% (n = 6) and 4.1–11.3% (n = 6), respectively. The isotope dilution EESI–MS/MS method was validated by analyzing six human urine samples, and the results were comparable with the conventional spectrophotometric method (based on the Jaffe reaction). Recoveries for individual urine samples were 85–111% and less than 0.3 min was taken for each measurement, indicating that the present isotope dilution EESI–MS/MS method is a promising strategy for the fast and accurate quantification of urinary CRE in clinical laboratories.
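    Isotope dilution quantifies the analyte from the ratio of its signal to that of the isotope-labelled internal standard spiked at a known concentration. The sketch below illustrates only that ratio calculation; the peak areas, spike level and assumed response factor are hypothetical and are not values from the paper.

```python
# Minimal isotope-dilution sketch: quantify creatinine (CRE) from the
# analyte/internal-standard signal ratio. All numbers are illustrative,
# not taken from the study.

def isotope_dilution_conc(area_analyte, area_labeled, conc_labeled_spiked,
                          response_factor=1.0):
    """Analyte concentration from the MS/MS area ratio against the
    isotope-labelled internal standard (CRE-d3) spiked at a known level."""
    return (area_analyte / area_labeled) * conc_labeled_spiked / response_factor

# Hypothetical measurement: CRE-d3 spiked at 2.0 mg/L into diluted urine.
c_cre = isotope_dilution_conc(area_analyte=5.6e5, area_labeled=4.1e5,
                              conc_labeled_spiked=2.0)
print(f"creatinine: {c_cre:.2f} mg/L (before dilution correction)")
```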

  19. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  20. Comprehensive evaluation index system of total supply capability in distribution network

    Science.gov (United States)

    Zhang, Linyao; Wu, Guilian; Yang, Jingyuan; Jia, Shuangrui; Zhang, Wei; Sun, Weiqing

    2018-01-01

    Addressing the lack of a comprehensive evaluation of the distribution network, and building on the existing distribution network evaluation index system together with the basic principles for constructing evaluation indices, this paper puts forward a new evaluation index system for distribution network capacity. The system is centered on the total supply capability of the distribution network and combines single indices and multiple influencing factors into a multi-criteria evaluation index for the distribution network, forming a reasonable index system; rational quantification of the individual indicators makes the evaluation results more intuitive. To support a comprehensive judgment of the distribution network, weights are used to analyse the importance of each index, and the rationality of the index system is verified through an example, so that it can guide the direction of distribution network planning.
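    As a loose illustration of the weighted aggregation of indicators described above, the snippet below combines a few normalized indicators into a single comprehensive score. The indicator names, values and weights are invented for illustration and are not the paper's index system.

```python
import numpy as np

# Minimal sketch of a weighted comprehensive evaluation index. Indicator
# names, values and weights are hypothetical, not taken from the paper.
indicators = {
    "total_supply_capability_margin": 0.82,   # normalized to [0, 1]
    "n_minus_1_pass_rate":            0.90,
    "feeder_interconnection_rate":    0.75,
    "average_load_rate":              0.60,
}
weights = {
    "total_supply_capability_margin": 0.40,
    "n_minus_1_pass_rate":            0.30,
    "feeder_interconnection_rate":    0.20,
    "average_load_rate":              0.10,
}

score = sum(weights[k] * indicators[k] for k in indicators)
print(f"comprehensive evaluation score: {score:.3f}")  # weights sum to 1
```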

  1. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA, and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested in 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance, without the need of human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
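    The Dice score reported above measures the overlap between the automatic and the ground-truth thrombus masks. The snippet below is a generic illustration of that metric on synthetic binary volumes; it is not the authors' evaluation code.

```python
import numpy as np

def dice_score(pred_mask, gt_mask):
    """Dice coefficient between a predicted and a ground-truth binary mask."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Illustrative 3-D volumes (e.g. thresholded network output vs. manual label).
rng = np.random.default_rng(1)
gt = rng.random((64, 64, 64)) > 0.7
pred = gt.copy()
pred[:2] = ~pred[:2]          # perturb a few slices to mimic segmentation error
print(f"Dice: {dice_score(pred, gt):.3f}")
```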

  2. Digital PCR for direct quantification of viruses without DNA extraction.

    Science.gov (United States)

    Pavšič, Jernej; Žel, Jana; Milavec, Mojca

    2016-01-01

    DNA extraction before amplification is considered an essential step for quantification of viral DNA using real-time PCR (qPCR). However, this can directly affect the final measurements due to variable DNA yields and removal of inhibitors, which leads to increased inter-laboratory variability of qPCR measurements and reduced agreement on viral loads. Digital PCR (dPCR) might be an advantageous methodology for the measurement of virus concentrations, as it does not depend on any calibration material and it has higher tolerance to inhibitors. DNA quantification without an extraction step (i.e. direct quantification) was performed here using dPCR and two different human cytomegalovirus whole-virus materials. Two dPCR platforms were used for this direct quantification of the viral DNA, and these were compared with quantification of the extracted viral DNA in terms of yield and variability. Direct quantification of both whole-virus materials present in simple matrices like cell lysate or Tris-HCl buffer provided repeatable measurements of virus concentrations that were probably in closer agreement with the actual viral load than when estimated through quantification of the extracted DNA. Direct dPCR quantification of other viruses, reference materials and clinically relevant matrices is now needed to show the full versatility of this very promising and cost-efficient development in virus quantification.

  3. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    Science.gov (United States)

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
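    For a small network, the core idea of the search, generating minimal cut sets from link failures alone, can be illustrated with a brute-force enumeration. The sketch below is not the method described in the record; it uses a hypothetical helper (minimal_cut_sets) and a toy four-node ring rather than the public switched telephone network, checking all-terminal connectivity after removing each candidate set of links and keeping only the minimal disconnecting sets.

```python
from itertools import combinations
import networkx as nx

def minimal_cut_sets(graph, max_order=3):
    """Brute-force minimal link cut sets for all-terminal connectivity.

    A cut set is a set of edges whose joint failure disconnects the network;
    it is minimal if no proper subset is also a cut set. Exponential search,
    so intended only for small networks / low cut-set orders.
    """
    edges = list(graph.edges())
    cuts = []
    for order in range(1, max_order + 1):
        for combo in combinations(edges, order):
            if any(set(c).issubset(combo) for c in cuts):
                continue  # a smaller cut set already disconnects the network
            g = graph.copy()
            g.remove_edges_from(combo)
            if not nx.is_connected(g):
                cuts.append(set(combo))
    return cuts

# Tiny illustrative network: a ring of 4 nodes, where every minimal cut
# consists of exactly two links.
G = nx.cycle_graph(4)
for cut in minimal_cut_sets(G):
    print(sorted(cut))
```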

  4. Spinning reserve quantification by a stochastic–probabilistic scheme for smart power systems with high wind penetration

    International Nuclear Information System (INIS)

    Khazali, Amirhossein; Kalantar, Mohsen

    2015-01-01

    Highlights: • A stochastic–probabilistic approach is proposed for spinning reserve quantification. • A new linearized formulation integrating reliability metrics is presented. • The framework manages the reserve provided by responsive loads and storage systems. • The proposed method is capable of detaching the spinning reserve for different uses. - Abstract: This paper introduces a novel spinning reserve quantification scheme based on a hybrid stochastic–probabilistic approach for smart power systems including high penetration of wind generation. In this research the required spinning reserve is detached into two main parts. The first part of the reserve is procured to overcome imbalances between load and generation in the system. The second part of the required spinning reserve is scheduled according to the probability of unit outages. In order to overcome uncertainties caused by wind generation and load forecasting errors different scenarios of wind generation and load uncertainties are generated. For each scenario the reserve deployed by different components are taken account as the first part of the required reserve which is used to overcome imbalances. The second part of the required reserve is based on reliability constraints. The total expected energy not supplied (TEENS) is the reliability criterion which determines the second part of the required spinning reserve to overcome unit outage possibilities. This formulation permits the independent system operator to purchase the two different types of reserve with different prices. The introduced formulation for reserve quantification is also capable of managing and detaching the reserve provided by responsive loads and energy storage devices. The problem is formulated as a mixed integer linear programming (MILP) problem including linearized formulations for reliability metrics. Obtained results show the efficiency of the proposed approach compared with the conventional stochastic and deterministic

  5. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Directory of Open Access Journals (Sweden)

    Robert G Rutledge

    Full Text Available BACKGROUND: Linear regression of efficiency (LRE introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. FINDINGS: Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. CONCLUSIONS: The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.

  6. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Full Text Available Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification are briefly reviewed and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience - often personal - of the researcher, and this scholarly endeavour is, therefore, not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourist industry.

  7. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    Energy Technology Data Exchange (ETDEWEB)

    Plechac, Petr [Univ. of Delaware, Newark, DE (United States); Vlachos, Dionisios G. [Univ. of Delaware, Newark, DE (United States)

    2018-01-23

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, and in particular for non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature which are capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters, (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics, (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  8. Data-driven quantification of the robustness and sensitivity of cell signaling networks

    International Nuclear Information System (INIS)

    Mukherjee, Sayak; Seok, Sang-Cheol; Vieland, Veronica J; Das, Jayajit

    2013-01-01

    Robustness and sensitivity of responses generated by cell signaling networks has been associated with survival and evolvability of organisms. However, existing methods analyzing robustness and sensitivity of signaling networks ignore the experimentally observed cell-to-cell variations of protein abundances and cell functions or contain ad hoc assumptions. We propose and apply a data-driven maximum entropy based method to quantify robustness and sensitivity of Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionary selected to vary in individual cells according to their abilities to perturb cell functions. Furthermore, predictions from our approach regarding distribution of protein abundances and properties of chemotactic responses in individual cells based on cell population averaged data are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as generate predictions of single cell properties based on population averaged experimental data in a wide range of cell signaling systems. (paper)

  9. Calibration of a Sensor Array (an Electronic Tongue) for Identification and Quantification of Odorants from Livestock Buildings

    Directory of Open Access Journals (Sweden)

    Jens Jørgen Lønsmann Iversen

    2007-01-01

    Full Text Available This contribution serves a dual purpose. The first purpose was to investigate the possibility of using a sensor array (an electronic tongue) for on-line identification and quantification of key odorants representing a variety of chemical groups at two different acidities, pH 6 and 8. The second purpose was to simplify the electronic tongue by decreasing the number of electrodes from 14, which was the number of electrodes in the prototype. Different electrodes were used for identification and quantification of different key odorants. A total of eight electrodes were sufficient for identification and quantification, in micromolar concentrations, of the key odorants n-butyrate, ammonium and phenolate in test mixtures also containing iso-valerate, skatole and p-cresolate. The limited number of electrodes decreased the standard deviation and the relative standard deviation of triplicate measurements in comparison with the array comprising 14 electrodes. The electronic tongue was calibrated using 4 different test mixtures, each comprising 50 different combinations of key odorants in triplicate, a total of 600 measurements. Back-propagation artificial neural networks, partial least squares and principal component analysis were used in the data analysis. The results indicate that the electronic tongue has a promising potential as an on-line sensor for odorants absorbed in the bioscrubber used in livestock buildings.
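    Calibration of this kind maps multichannel electrode readings to odorant concentrations with multivariate regression. The sketch below uses partial least squares on simulated data as a stand-in; the mixing matrix, noise level and sample counts are invented and do not reproduce the paper's 600-measurement calibration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Hedged sketch of the calibration step: map electrode readings to odorant
# concentrations with partial least squares. Data are simulated, not the
# paper's measurements.
rng = np.random.default_rng(42)
n_samples, n_electrodes, n_odorants = 150, 8, 3

true_mixing = rng.normal(size=(n_odorants, n_electrodes))
concentrations = rng.uniform(0, 50, size=(n_samples, n_odorants))  # µM
readings = concentrations @ true_mixing + rng.normal(
    scale=0.5, size=(n_samples, n_electrodes))

model = PLSRegression(n_components=3).fit(readings, concentrations)
print(f"calibration R^2: {model.score(readings, concentrations):.3f}")
```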

  10. A Network Traffic Control Enhancement Approach over Bluetooth Networks

    DEFF Research Database (Denmark)

    Son, L.T.; Schiøler, Henrik; Madsen, Ole Brun

    2003-01-01

    This paper analyzes network traffic control issues in Bluetooth data networks as a convex optimization problem. We formulate the problem of maximizing total network flow and minimizing the cost of flows. An adaptive distributed network traffic control scheme is proposed as an approximated solution of the stated optimization problem that satisfies quality-of-service requirements and topologically induced constraints in Bluetooth networks, such as link capacity and node resource limitations. The proposed scheme is decentralized and complies with frequent changes of topology as well as capacity limitations and flow requirements in the network. Simulation shows that the performance of Bluetooth networks could be improved by applying the adaptive distributed network traffic control scheme.
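    The flow-maximization / cost-minimization trade-off can be illustrated, in a much simplified linear form rather than the paper's convex formulation, as a small linear program over per-flow rates subject to link-capacity constraints. All capacities, benefits and costs below are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Toy version of the "maximize total flow, minimize flow cost" trade-off,
# written as a linear program. Three end-to-end flows share two links;
# capacities and costs are made up.
benefit = np.array([1.0, 1.0, 1.0])   # reward per unit of each flow
cost    = np.array([0.2, 0.3, 0.1])   # transport cost per unit of each flow

# Link capacity constraints A_ub @ x <= b_ub:
# link 1 carries flows 1 and 2, link 2 carries flows 2 and 3.
A_ub = np.array([[1, 1, 0],
                 [0, 1, 1]], dtype=float)
b_ub = np.array([1.0, 0.8])           # link capacities (e.g. Mbit/s)

# linprog minimizes, so minimize (cost - benefit) @ x.
res = linprog(c=cost - benefit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print("optimal flows:", np.round(res.x, 3))
```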

  11. On the complex quantification of risk: systems-based perspective on terrorism.

    Science.gov (United States)

    Haimes, Yacov Y

    2011-08-01

    This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.

  12. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: ► We overview the state-of-the-art in uncertainty quantification and sensitivity analysis. ► We overview new developments in above areas using hybrid methods. ► We give a tutorial introduction to above areas and the new developments. ► Hybrid methods address the explosion in dimensionality in nonlinear models. ► Representative numerical experiments are given. -- Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next generation reactors. In addition to covering the basics, an overview of the current state-of-the-art will be given with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.

  13. (1) H-MRS processing parameters affect metabolite quantification

    DEFF Research Database (Denmark)

    Bhogal, Alex A; Schür, Remmelt R; Houtepen, Lotte C

    2017-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) can be used to quantify in vivo metabolite levels, such as lactate, γ-aminobutyric acid (GABA) and glutamate (Glu). However, there are considerable analysis choices which can alter the accuracy or precision of (1)H-MRS metabolite quantification... This study investigated the influence of model parameters and spectral quantification software on fitted metabolite concentration values. Sixty spectra in 30 individuals (repeated measures) were acquired using a 7-T MRI scanner. Data were processed by four independent research groups with the freedom to choose their own... + NAAG/Cr + PCr and Glu/Cr + PCr, respectively. Metabolite quantification using identical (1)H-MRS data was influenced by processing parameters, basis sets and software choice. Locally preferred processing choices affected metabolite quantification, even when using identical software. Our results...

  14. Lowering the quantification limit of the QubitTM RNA HS assay using RNA spike-in.

    Science.gov (United States)

    Li, Xin; Ben-Dov, Iddo Z; Mauro, Maurizio; Williams, Zev

    2015-05-06

    RNA quantification is often a prerequisite for most RNA analyses such as RNA sequencing. However, traditional RNA quantification methods such as UV spectrophotometry, and even the much more sensitive fluorescence-based RNA quantification assays such as the Qubit™ RNA HS Assay, are often inadequate for measuring minute levels of RNA isolated from limited cell and tissue samples and biofluids, owing to their relatively low sensitivity and large sample consumption. Thus, there is a pressing need for a more sensitive method to reliably and robustly detect trace levels of RNA without interference from DNA. To improve the quantification limit of the Qubit™ RNA HS Assay, we spiked in a known quantity of RNA to achieve the minimum reading required by the assay. Samples containing trace amounts of RNA were then added to the spike-in and measured as a reading increase over the RNA spike-in baseline. We determined the accuracy and precision of reading increases between 1 and 20 pg/μL as well as the RNA specificity in this range, and compared them to those of RiboGreen®, another sensitive fluorescence-based RNA quantification assay. We then applied the Qubit™ Assay with RNA spike-in to quantify plasma RNA samples. RNA spike-in improved the quantification limit of the Qubit™ RNA HS Assay 5-fold, from 25 pg/μL down to 5 pg/μL, while maintaining high specificity to RNA. This enabled quantification of RNA with an original concentration as low as 55.6 pg/μL, compared to 250 pg/μL for the standard assay, and decreased sample consumption from 5 to 1 ng. Plasma RNA samples that were not measurable by the Qubit™ RNA HS Assay were measurable by our modified method. The Qubit™ RNA HS Assay with RNA spike-in is able to quantify RNA with high specificity at 5-fold lower concentration and uses 5-fold less sample quantity than the standard Qubit™ Assay.
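
    As an illustrative sketch of the spike-in arithmetic described above (all concentrations and volumes below are hypothetical examples, not values from the study), the trace RNA concentration follows from the reading increase over the spike-in baseline:

        # Minimal sketch of the spike-in arithmetic; concentrations and volumes
        # are hypothetical examples, not values from the paper.
        def sample_rna_conc(total_reading_pg_per_ul, spikein_baseline_pg_per_ul,
                            sample_vol_ul, assay_vol_ul):
            """Concentration of the trace RNA sample, back-calculated from the
            reading increase over the RNA spike-in baseline."""
            increase = total_reading_pg_per_ul - spikein_baseline_pg_per_ul
            if increase < 0:
                raise ValueError("reading below spike-in baseline")
            # The reading increase refers to the assay tube; scale back to the
            # original sample volume that was added to the tube.
            return increase * assay_vol_ul / sample_vol_ul

        # Example: 1 uL of sample added to a 200 uL assay containing the spike-in.
        print(sample_rna_conc(total_reading_pg_per_ul=30.0,
                              spikein_baseline_pg_per_ul=25.0,
                              sample_vol_ul=1.0, assay_vol_ul=200.0))  # 1000 pg/uL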

  15. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  16. Direct quantification of creatinine in human urine by using isotope dilution extractive electrospray ionization tandem mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li Xue [Institute of Environmental Pollution and Health, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China); Fang Xiaowei [Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China); Yu Zhiqiang; Sheng Guoying [Guangdong Key Laboratory of Environmental Protection and Resource Utilization, State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Wu Minghong [Shanghai Applied Radiation Institute, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Fu Jiamo [Institute of Environmental Pollution and Health, School of Environmental and Chemical Engineering, Shanghai University, Shanghai 200444 (China); Guangdong Key Laboratory of Environmental Protection and Resource Utilization, State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China); Chen Huanwen, E-mail: chw8868@gmail.com [Jiangxi Key Laboratory for Mass Spectrometry and Instrumentation, Applied Chemistry Department, East China Institute of Technology, Nanchang 330013 (China)

    2012-10-20

    Highlights: ► High throughput analysis of urinary creatinine is achieved by using ID-EESI-MS/MS. ► Urine sample is directly analyzed and no sample pre-treatment is required. ► Accurate quantification is accomplished with isotope dilution technique. -- Abstract: Urinary creatinine (CRE) is an important biomarker of renal function. Fast and accurate quantification of CRE in human urine is required by clinical research. By using isotope dilution extractive electrospray ionization tandem mass spectrometry (EESI-MS/MS) a high throughput method for direct and accurate quantification of urinary CRE was developed in this study. Under optimized conditions, the method detection limit was lower than 50 μg L⁻¹. Over the concentration range investigated (0.05-10 mg L⁻¹), the calibration curve was obtained with satisfactory linearity (R² = 0.9861), and the relative standard deviation (RSD) values for CRE and isotope-labeled CRE (CRE-d3) were 7.1-11.8% (n = 6) and 4.1-11.3% (n = 6), respectively. The isotope dilution EESI-MS/MS method was validated by analyzing six human urine samples, and the results were comparable with the conventional spectrophotometric method (based on the Jaffe reaction). Recoveries for individual urine samples were 85-111% and less than 0.3 min was taken for each measurement, indicating that the present isotope dilution EESI-MS/MS method is a promising strategy for the fast and accurate quantification of urinary CRE in clinical laboratories.
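
    The isotope dilution calculation itself reduces to calibrating the analyte-to-labelled-standard signal ratio against known concentrations. A minimal sketch with hypothetical calibration values (not data from the paper):

        # Illustrative isotope-dilution calculation (all values are hypothetical).
        # Calibration: CRE/CRE-d3 peak-area ratio vs. CRE concentration (mg/L),
        # with a fixed amount of CRE-d3 spiked into every standard and sample.
        import numpy as np

        cal_conc = np.array([0.05, 0.1, 0.5, 1.0, 5.0, 10.0])           # mg/L
        cal_ratio = np.array([0.012, 0.025, 0.123, 0.251, 1.26, 2.49])  # area ratios

        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

        def cre_concentration(sample_ratio, dilution_factor=1.0):
            """Back-calculate urinary creatinine from the measured area ratio."""
            return (sample_ratio - intercept) / slope * dilution_factor

        print(round(cre_concentration(0.62, dilution_factor=100), 1))  # mg/L in urine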

  17. Reading and comparative quantification of myocardial perfusion tomo-scintigraphy realised by gamma camera and semiconductor camera

    International Nuclear Information System (INIS)

    Merlin, C.; Gauthe, M.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F.; Motreff, P.

    2010-01-01

    By offering high-quality images, semiconductor cameras represent undeniable technological progress. The interpretation of examinations, however, requires a learning phase. The optimization of quantification software should confirm the superiority of the D-SPECT for the measurement of kinetic parameters. (N.C.)

  18. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    Full Text Available The importance of lipids for cell function and health has been widely recognized; for example, a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, yet not huge, number of mutually correlated measured variables, and their associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe to conduct permutation tests on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical for a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: those who had a fatal CVD event and those who remained stable during a two-year follow-up.
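
    A much simplified sketch of the permutation idea described above (simulated data, two lipids only; the actual pipeline uses partial least squares association scores and proper pooling rules for multiply imputed data):

        # Simplified sketch (not the authors' exact pipeline): permutation test on
        # the difference in lipid-lipid correlation between two patient groups,
        # averaged over multiply imputed data sets. Data below are simulated.
        import numpy as np

        rng = np.random.default_rng(1)
        n_imputations, n_per_group = 5, 40
        # Hypothetical imputed data: one (samples x 2 lipids) array per imputation.
        group_a = [rng.normal(size=(n_per_group, 2)) for _ in range(n_imputations)]
        group_b = [rng.normal(size=(n_per_group, 2)) + 0.5 for _ in range(n_imputations)]

        def pooled_corr_diff(a_sets, b_sets):
            # Average the group-wise correlation over imputations, then difference.
            ra = np.mean([np.corrcoef(a[:, 0], a[:, 1])[0, 1] for a in a_sets])
            rb = np.mean([np.corrcoef(b[:, 0], b[:, 1])[0, 1] for b in b_sets])
            return ra - rb

        observed = pooled_corr_diff(group_a, group_b)

        # Null distribution: shuffle group labels (jointly across imputations).
        null = []
        for _ in range(1000):
            merged = [np.vstack([a, b]) for a, b in zip(group_a, group_b)]
            perm = rng.permutation(2 * n_per_group)
            pa = [m[perm[:n_per_group]] for m in merged]
            pb = [m[perm[n_per_group:]] for m in merged]
            null.append(pooled_corr_diff(pa, pb))

        p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (len(null) + 1)
        print(observed, p_value)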

  19. Security Situation Assessment of All-Optical Network Based on Evidential Reasoning Rule

    Directory of Open Access Journals (Sweden)

    Zhong-Nan Zhao

    2016-01-01

    Full Text Available It is important to determine the security situation of the all-optical network (AON), which in some cases is more vulnerable to hacker attacks and faults than other networks. A new approach to security situation assessment of the all-optical network is developed in this paper. In the new assessment approach, the evidential reasoning (ER) rule is used to integrate various evidences of the security factors, including the optical faults and the special attacks in the AON. Furthermore, a new quantification method for the security situation is proposed. A case study of an all-optical network is conducted to demonstrate the effectiveness and practicability of the newly proposed approach.

  20. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
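
    The waveband selection step can be illustrated by ranking bands according to the correlation of their reflectance with measured THC content. The spectra and THC values below are simulated for illustration only:

        # Illustrative sketch of the waveband-selection idea: rank wavebands by the
        # correlation of their reflectance with measured THC content. All spectra
        # and THC values below are simulated, not data from the study.
        import numpy as np

        rng = np.random.default_rng(42)
        wavelengths = np.arange(400, 1001, 5)          # nm
        n_samples = 60
        thc = rng.uniform(0.2, 12.0, n_samples)        # % THC (simulated)

        # Simulated reflectance: one band (here ~695 nm) responds to THC, plus noise.
        reflectance = rng.normal(0.4, 0.02, (n_samples, wavelengths.size))
        band_695 = np.argmin(np.abs(wavelengths - 695))
        reflectance[:, band_695] -= 0.005 * thc

        corr = np.array([np.corrcoef(reflectance[:, j], thc)[0, 1]
                         for j in range(wavelengths.size)])
        best = np.argsort(np.abs(corr))[::-1][:5]
        print("top wavebands (nm):", wavelengths[best])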

  1. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement.

    Science.gov (United States)

    Ferri, Giovane Lopes; Chaves, Gisele de Lorena Diniz; Ribeiro, Glaydston Mattos

    2015-06-01

    This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers in the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW that simultaneously involves legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries with realities similar to those in Brazil: accelerated urbanisation without adequate planning for solid waste management, combined with the strong presence of waste pickers who, owing to their social vulnerability, must be included in the system. In addition to the theoretical contribution to the reverse logistics network problem, this study aids in decision-making for public managers who have limited technical and administrative capacities for the management of solid wastes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Meeting the future metro network challenges and requirements by adopting programmable S-BVT with direct-detection and PDM functionality

    Science.gov (United States)

    Nadal, Laia; Svaluto Moreolo, Michela; Fàbrega, Josep M.; Vílchez, F. Javier

    2017-07-01

    In this paper, we propose an advanced programmable sliceable-bandwidth variable transceiver (S-BVT) with polarization division multiplexing (PDM) capability as a key enabler to fulfill the requirements for future 5G networks. Thanks to its cost-effective optoelectronic front-end based on orthogonal frequency division multiplexing (OFDM) technology and direct-detection (DD), the proposed S-BVT becomes suitable for next generation highly flexible and scalable metro networks. Polarization beam splitters (PBSs) and controllers (PCs), available on-demand, are included at the transceivers and at the network nodes, further enhancing the system flexibility and promoting an efficient use of the spectrum. 40G-100G PDM transmission has been experimentally demonstrated, within a 4-node photonic mesh network (ADRENALINE testbed), implementing a simplified equalization process.

  3. Quantification of groundwater infiltration and surface water inflows in urban sewer networks based on a multiple model approach.

    Science.gov (United States)

    Karpf, Christian; Krebs, Peter

    2011-05-01

    The management of sewer systems requires information about discharge and variability of typical wastewater sources in urban catchments. Especially the infiltration of groundwater and the inflow of surface water (I/I) are important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence and relating these resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV-data can be used to estimate the infiltration potential of sewer pipes. Copyright © 2011 Elsevier Ltd. All rights reserved.
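
    The core of the multiple-model idea, fitting individual flow-fraction models to measured discharge by least squares, can be sketched as follows (the diurnal pattern and flow values are synthetic, not Dresden data):

        # Minimal sketch of fitting flow-fraction models to measured sewer discharge
        # by least squares: constant groundwater infiltration plus a scaled diurnal
        # wastewater pattern. All values below are synthetic illustrations.
        import numpy as np

        hours = np.arange(24)
        diurnal = 1.0 + 0.6 * np.sin((hours - 7) / 24 * 2 * np.pi)   # unit wastewater pattern
        true_infiltration, true_wastewater = 35.0, 80.0              # L/s (synthetic)
        rng = np.random.default_rng(3)
        measured = true_infiltration + true_wastewater * diurnal + rng.normal(0, 2, 24)

        # Design matrix: constant infiltration column + scaled diurnal column.
        A = np.column_stack([np.ones(24), diurnal])
        (infiltration, wastewater), *_ = np.linalg.lstsq(A, measured, rcond=None)
        print(f"infiltration ~ {infiltration:.1f} L/s, wastewater ~ {wastewater:.1f} L/s")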

  4. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.

  5. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Watanabe, Shunzo; Ooyama, Hiroshi; Hojo, Kei; Tasaki, Hiroichi; Hanazono, Toshihide; Sato, Tokijiro; Metoki, Hirobumi; Totsuka, Motokichi; Oosumi, Noboru.

    1987-01-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on Slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis). (author)

  6. Cues, quantification, and agreement in language comprehension.

    Science.gov (United States)

    Tanner, Darren; Bulkes, Nyssa Z

    2015-12-01

    We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.

  7. Performance of the Real-Q EBV Quantification Kit for Epstein-Barr Virus DNA Quantification in Whole Blood.

    Science.gov (United States)

    Huh, Hee Jae; Park, Jong Eun; Kim, Ji Youn; Yun, Sun Ae; Lee, Myoung Keun; Lee, Nam Yong; Kim, Jong Won; Ki, Chang Seok

    2017-03-01

    There has been increasing interest in standardized and quantitative Epstein-Barr virus (EBV) DNA testing for the management of EBV disease. We evaluated the performance of the Real-Q EBV Quantification Kit (BioSewoom, Korea) in whole blood (WB). Nucleic acid extraction and real-time PCR were performed by using the MagNA Pure 96 (Roche Diagnostics, Germany) and 7500 Fast real-time PCR system (Applied Biosystems, USA), respectively. Assay sensitivity, linearity, and conversion factor were determined by using the World Health Organization international standard diluted in EBV-negative WB. We used 81 WB clinical specimens to compare performance of the Real-Q EBV Quantification Kit and artus EBV RG PCR Kit (Qiagen, Germany). The limit of detection (LOD) and limit of quantification (LOQ) for the Real-Q kit were 453 and 750 IU/mL, respectively. The conversion factor from EBV genomic copies to IU was 0.62. The linear range of the assay was from 750 to 10⁶ IU/mL. Viral load values measured with the Real-Q assay were on average 0.54 log₁₀ copies/mL higher than those measured with the artus assay. The Real-Q assay offered good analytical performance for EBV DNA quantification in WB.

  8. System resiliency quantification using non-state-space and state-space analytic models

    International Nuclear Information System (INIS)

    Ghosh, Rahul; Kim, DongSeong; Trivedi, Kishor S.

    2013-01-01

    Resiliency is becoming an important service attribute for large scale distributed systems and networks. Key problems in resiliency quantification are lack of consensus on the definition of resiliency and systematic approach to quantify system resiliency. In general, resiliency is defined as the ability of (system/person/organization) to recover/defy/resist from any shock, insult, or disturbance [1]. Many researchers interpret resiliency as a synonym for fault-tolerance and reliability/availability. However, effect of failure/repair on systems is already covered by reliability/availability measures and that of on individual jobs is well covered under the umbrella of performability [2] and task completion time analysis [3]. We use Laprie [4] and Simoncini [5]'s definition in which resiliency is the persistence of service delivery that can justifiably be trusted, when facing changes. The changes we are referring to here are beyond the envelope of system configurations already considered during system design, that is, beyond fault tolerance. In this paper, we outline a general approach for system resiliency quantification. Using examples of non-state-space and state-space stochastic models, we analytically–numerically quantify the resiliency of system performance, reliability, availability and performability measures w.r.t. structural and parametric changes
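
    As a minimal state-space illustration (not one of the paper's models), the steady-state availability of a two-state Markov model follows directly from assumed failure and repair rates:

        # Illustrative state-space availability sketch: a two-state continuous-time
        # Markov chain with failure rate lam and repair rate mu; the failure/repair
        # rates are hypothetical, not values from the paper.
        import numpy as np

        lam, mu = 1 / 1000.0, 1 / 8.0          # hypothetical rates (per hour)
        Q = np.array([[-lam, lam],
                      [  mu, -mu]])            # states: 0 = up, 1 = down

        # Solve pi Q = 0 with sum(pi) = 1 for the steady-state distribution.
        A = np.vstack([Q.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("steady-state availability:", pi[0])   # equals mu / (lam + mu)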

  9. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
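
    The redundancy idea behind network-coding-based error recovery can be illustrated with a toy XOR parity scheme; this is a deliberate simplification, not the paper's adaptive, QoS-aware mechanism:

        # Toy illustration of coding-based error recovery: XOR k data packets into
        # one parity packet so any single lost packet can be rebuilt at the receiver.
        from functools import reduce

        def xor_packets(packets):
            # Byte-wise XOR of equal-length packets.
            return bytes(reduce(lambda a, b: a ^ b, group) for group in zip(*packets))

        data = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
        parity = xor_packets(data)

        # Suppose packet 1 is lost in transit; rebuild it from the rest + parity.
        received = [data[0], data[2], parity]
        recovered = xor_packets(received)
        assert recovered == data[1]
        print("recovered packet:", recovered.hex())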

  10. Monitoring groundwater: optimising networks to take account of cost effectiveness, legal requirements and enforcement realities

    Science.gov (United States)

    Allan, A.; Spray, C.

    2013-12-01

    The quality of monitoring networks and modeling in environmental regulation is increasingly important. This is particularly true with respect to groundwater management, where data may be limited, physical processes poorly understood and timescales very long. The powers of regulators may be fatally undermined by poor or non-existent networks, primarily through mismatches between the legal standards that networks must meet, actual capacity and the evidentiary standards of courts. For example, in the second and third implementation reports on the Water Framework Directive, the European Commission drew attention to gaps in the standards of mandatory monitoring networks, where the standard did not meet the reality. In that context, groundwater monitoring networks should provide a reliable picture of groundwater levels and a 'coherent and comprehensive' overview of chemical status so that anthropogenically influenced long-term upward trends in pollutant levels can be tracked. Confidence in this overview should be such that 'the uncertainty from the monitoring process should not add significantly to the uncertainty of controlling the risk', with densities being sufficient to allow assessment of the impact of abstractions and discharges on levels in groundwater bodies at risk. The fact that the legal requirements for the quality of monitoring networks are set out in very vague terms highlights the many variables that can influence the design of monitoring networks. However, the quality of a monitoring network as part of the armory of environmental regulators is potentially of crucial importance. If, as part of enforcement proceedings, a regulator takes an offender to court and relies on conclusions derived from monitoring networks, a defendant may be entitled to question those conclusions. If the credibility, reliability or relevance of a monitoring network can be undermined, because it is too sparse, for example, this could have dramatic consequences on the ability of a

  11. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome

    Directory of Open Access Journals (Sweden)

    Dewey Colin N

    2011-08-01

    Full Text Available Abstract Background RNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments. Results We present RSEM, a user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene. Conclusions RSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost
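
    The handling of multi-mapping reads rests on an expectation-maximization scheme; a toy sketch of that idea (with a made-up read-to-transcript compatibility matrix, ignoring the fragment lengths and read qualities that RSEM's full model accounts for) is:

        # Toy sketch of the EM idea underlying RSEM-style quantification: split
        # ambiguously mapping reads among transcripts in proportion to the current
        # abundance estimates, then re-estimate abundances, and iterate.
        import numpy as np

        # compat[i, j] = 1 if read i aligns to transcript j (3 transcripts, 6 reads).
        compat = np.array([[1, 0, 0],
                           [1, 1, 0],
                           [1, 1, 0],
                           [0, 1, 1],
                           [0, 0, 1],
                           [0, 0, 1]], dtype=float)

        theta = np.full(3, 1 / 3)                      # initial relative abundances
        for _ in range(100):
            # E-step: responsibility of each transcript for each read.
            weighted = compat * theta
            resp = weighted / weighted.sum(axis=1, keepdims=True)
            # M-step: abundances proportional to expected read counts.
            new_theta = resp.sum(axis=0) / resp.sum()
            if np.allclose(new_theta, theta, atol=1e-8):
                break
            theta = new_theta

        print("estimated transcript proportions:", np.round(theta, 3))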

  12. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  13. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J.M.

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  14. Passenger flow analysis of Beijing urban rail transit network using fractal approach

    Science.gov (United States)

    Li, Xiaohong; Chen, Peiwen; Chen, Feng; Wang, Zijia

    2018-04-01

    To quantify the spatiotemporal distribution of passenger flow and the characteristics of an urban rail transit network, we introduce four radius fractal dimensions and two branch fractal dimensions by combining a fractal approach with a passenger flow assignment model. These fractal dimensions can numerically describe the complexity of passenger flow in the urban rail transit network and its change characteristics. Based on this, we establish a fractal quantification method to measure the fractal characteristics of passenger flow in the rail transit network. Finally, we validate the reasonability of our proposed method by using actual data from the Beijing subway network. It has been shown that our proposed method can effectively measure the scale-free range of the urban rail transit network, network development and the fractal characteristics of time-varying passenger flow, which further provides a reference for network planning and analysis of passenger flow.
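
    A radius-type fractal dimension of the kind introduced above is commonly estimated from the slope of log N(r) versus log r; the sketch below uses synthetic station coordinates and passenger volumes, not Beijing data:

        # Sketch of a radius-type fractal dimension estimate: the slope of
        # log(N(r)) versus log(r), where N(r) is the passenger volume within
        # radius r of a centre. Coordinates and volumes are synthetic.
        import numpy as np

        rng = np.random.default_rng(7)
        stations = rng.normal(0, 10, size=(300, 2))          # km, synthetic network
        volumes = rng.lognormal(mean=8, sigma=1, size=300)   # boardings per station

        centre = np.zeros(2)
        dist = np.linalg.norm(stations - centre, axis=1)

        radii = np.array([2, 4, 6, 8, 10, 12, 15, 20], dtype=float)
        N_r = np.array([volumes[dist <= r].sum() for r in radii])

        slope, _ = np.polyfit(np.log(radii), np.log(N_r), 1)
        print("radius fractal dimension estimate:", round(slope, 2))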

  15. Quantification In Neurology

    Directory of Open Access Journals (Sweden)

    Netravati M

    2005-01-01

    Full Text Available There has been a distinct shift of emphasis in clinical neurology over the last few decades. A few years ago, it was sufficient for a clinician to precisely record the history, document signs, establish a diagnosis and write a prescription. In the present context, there has been a significant intrusion of scientific culture into clinical practice. Several criteria have been proposed, refined and redefined to ascertain accurate diagnosis for many neurological disorders. Introduction of the concepts of impairment, disability, handicap and quality of life has added a new dimension to the measurement of health and disease, and neurological disorders are no exception. "Best guess" treatment modalities are no longer accepted, and evidence-based medicine has become an integral component of medical care. Traditional treatments need validation and new therapies require rigorous trials. Thus, proper quantification has become essential, both in practice and in research methodology in neurology. While this aspect is widely acknowledged, there is limited access to a comprehensive document pertaining to measurements in neurology. The following description is a critical appraisal of various measurements and also provides certain commonly used rating scales/scores in neurological practice.

  16. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    International Nuclear Information System (INIS)

    Olander, Lydia P; Wollenberg, Eva; Tubiello, Francesco N; Herold, Martin

    2014-01-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term. (paper)

  17. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    Agricultural Research Service 2011), which aim to improve consistency of field measurement and data collection for soil carbon sequestration and soil nitrous oxide fluxes. Often these national-level activity data and emissions factors are the basis for regional and smaller-scale applications. Such data are used for model-based estimates of changes in GHGs at a project or regional level (Olander et al 2011). To complement national data for regional-, landscape-, or field-level applications, new data are often collected through farmer knowledge or records and field sampling. Ideally such data could be collected in a standardized manner, perhaps through some type of crowd sourcing model to improve regional—and national—level data, as well as to improve consistency of locally collected data. Data can also be collected by companies working with agricultural suppliers and in country networks, within efforts aimed at understanding firm and product (supply-chain) sustainability and risks (FAO 2009). Such data may feed into various certification processes or reporting requirements from buyers. Unfortunately, this data is likely proprietary. A new process is needed to aggregate and share private data in a way that would not be a competitive concern so such data could complement or supplement national data and add value. A number of papers in this focus issue discuss issues surrounding quantification methods and systems at large scales, global and national levels, while others explore landscape- and field-scale approaches. A few explore the intersection of top-down and bottom-up data measurement and modeling approaches. 5. The agricultural greenhouse gas quantification project and ERL focus issue Important land management decisions are often made with poor or few data, especially in developing countries. Current systems for quantifying GHG emissions are inadequate in most low-income countries, due to a lack of funding, human resources, and infrastructure. Most non-Annex 1 countries

  18. Tentacle: distributed quantification of genes in metagenomes.

    Science.gov (United States)

    Boulund, Fredrik; Sjögren, Anders; Kristiansson, Erik

    2015-01-01

    In metagenomics, microbial communities are sequenced at increasingly high resolution, generating datasets with billions of DNA fragments. Novel methods that can efficiently process the growing volumes of sequence data are necessary for the accurate analysis and interpretation of existing and upcoming metagenomes. Here we present Tentacle, which is a novel framework that uses distributed computational resources for gene quantification in metagenomes. Tentacle is implemented using a dynamic master-worker approach in which DNA fragments are streamed via a network and processed in parallel on worker nodes. Tentacle is modular, extensible, and comes with support for six commonly used sequence aligners. It is easy to adapt Tentacle to different applications in metagenomics and easy to integrate into existing workflows. Evaluations show that Tentacle scales very well with increasing computing resources. We illustrate the versatility of Tentacle on three different use cases. Tentacle is written for Linux in Python 2.7 and is published as open source under the GNU General Public License (v3). Documentation, tutorials, installation instructions, and the source code are freely available online at: http://bioinformatics.math.chalmers.se/tentacle.

  19. Data center networks and network architecture

    Science.gov (United States)

    Esaki, Hiroshi

    2014-02-01

    This paper discusses and proposes an architectural framework for data center networks. Data center networks pose new technical challenges, and they offer a good opportunity to change functions that are not needed in current and future networks. Based on observation and consideration of data center networks, this paper proposes: (i) a broadcast-free layer 2 network (i.e., emulation of broadcast at the end-node), (ii) full-mesh point-to-point pipes, and (iii) IRIDES (Invitation Routing aDvertisement for path Engineering System).

  20. Chip-Oriented Fluorimeter Design and Detection System Development for DNA Quantification in Nano-Liter Volumes

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2009-12-01

    Full Text Available The chip-based polymerase chain reaction (PCR) system has been developed in recent years to achieve DNA quantification. Using a microstructure and a miniature chip, the volume consumption for a PCR can be reduced to a nano-liter. With high-speed cycling and a low reaction volume, the time consumption of one PCR cycle performed on a chip can be reduced. However, most of the presented prototypes employ commercial fluorimeters which are not optimized for fluorescence detection in such small sample quantities. This limits the performance of DNA quantification, in particular leading to low experimental reproducibility. This study discusses the concept of a chip-oriented fluorimeter design. Using an analytical model, the current study analyzes the sensitivity and dynamic range of the fluorimeter to fit the requirements for detecting fluorescence in nano-liter volumes. Through the optimized processes, a real-time PCR-on-a-chip system with only a one nano-liter test sample is as sensitive as a commercial real-time PCR machine using samples with twenty micro-liter volumes. The signal-to-noise (S/N) ratio of the chip system for DNA quantification with hepatitis B virus (HBV) plasmid samples is 3 dB higher. DNA quantification by the miniature chip shows higher reproducibility compared to the commercial machine with respect to samples of initial concentrations from 10³ to 10⁵ copies per reaction.

  1. La quantification en Kabiye: une approche linguistique (Quantification in Kabiye: a linguistic approach) | Pali ...

    African Journals Online (AJOL)

    ... which is denoted by lexical quantifiers. Quantification with specific reference is provided by different types of linguistic units (nouns, numerals, adjectives, adverbs, ideophones and verbs) in arguments/noun phrases and in the predicative phrase in the sense of Chomsky. Keywords: quantification, class, number, reference, ...

  2. Real-time polymerase chain reaction-based approach for quantification of the pat gene in the T25 Zea mays event.

    Science.gov (United States)

    Weighardt, Florian; Barbati, Cristina; Paoletti, Claudia; Querci, Maddalena; Kay, Simon; De Beuckeleer, Marc; Van den Eede, Guy

    2004-01-01

    In Europe, a growing interest for reliable techniques for the quantification of genetically modified component(s) of food matrixes is arising from the need to comply with the European legislative framework on novel food products. Real-time polymerase chain reaction (PCR) is currently the most powerful technique for the quantification of specific nucleic acid sequences. Several real-time PCR methodologies based on different molecular principles have been developed for this purpose. The most frequently used approach in the field of genetically modified organism (GMO) quantification in food or feed samples is based on the 5'-3'-exonuclease activity of Taq DNA polymerase on specific degradation probes (TaqMan principle). A novel approach was developed for the establishment of a TaqMan quantification system assessing GMO contents around the 1% threshold stipulated under European Union (EU) legislation for the labeling of food products. The Zea mays T25 elite event was chosen as a model for the development of the novel GMO quantification approach. The most innovative aspect of the system is represented by the use of sequences cloned in plasmids as reference standards. In the field of GMO quantification, plasmids are an easy to use, cheap, and reliable alternative to Certified Reference Materials (CRMs), which are only available for a few of the GMOs authorized in Europe, have a relatively high production cost, and require further processing to be suitable for analysis. Strengths and weaknesses of the use of novel plasmid-based standards are addressed in detail. In addition, the quantification system was designed to avoid the use of a reference gene (e.g., a single copy, species-specific gene) as normalizer, i.e., to perform a GMO quantification based on an absolute instead of a relative measurement. In fact, experimental evidences show that the use of reference genes adds variability to the measurement system because a second independent real-time PCR-based measurement
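
    Quantification against a plasmid dilution series reduces to a standard-curve calculation; a sketch with hypothetical Ct values (GMO percentage would additionally require normalization, as discussed above):

        # Sketch of quantification against a plasmid dilution series (all values
        # are hypothetical): fit Ct versus log10(copy number), then read unknowns
        # off the fitted curve.
        import numpy as np

        copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
        ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7])     # hypothetical Ct values

        slope, intercept = np.polyfit(np.log10(copies), ct, 1)
        efficiency = 10 ** (-1 / slope) - 1               # ~1.0 means 100% efficiency

        def copies_from_ct(ct_unknown):
            return 10 ** ((ct_unknown - intercept) / slope)

        print(f"PCR efficiency ~ {efficiency:.2f}")
        print(f"unknown with Ct 27.5 ~ {copies_from_ct(27.5):.0f} copies")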

  3. Automated quantification of epicardial adipose tissue using CT angiography: evaluation of a prototype software

    Energy Technology Data Exchange (ETDEWEB)

    Spearman, James V.; Silverman, Justin R.; Krazinski, Aleksander W.; Costello, Philip [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Meinel, Felix G.; Geyer, Lucas L. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Ludwig-Maximilians-University Hospital, Institute for Clinical Radiology, Munich (Germany); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Apfaltrer, Paul [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University Medical Center Mannheim, Medical Faculty Mannheim - Heidelberg University, Institute of Clinical Radiology and Nuclear Medicine, Mannheim (Germany); Canstein, Christian [Siemens Medical Solutions USA, Inc., Malvern, PA (United States); De Cecco, Carlo Nicola [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome 'Sapienza' - Polo Pontino, Department of Radiological Sciences, Oncology and Pathology, Latina (Italy)

    2014-02-15

    This study evaluated the performance of a novel automated software tool for epicardial fat volume (EFV) quantification compared to a standard manual technique at coronary CT angiography (cCTA). cCTA data sets of 70 patients (58.6 ± 12.9 years, 33 men) were retrospectively analysed using two different post-processing software applications. Observer 1 performed a manual single-plane pericardial border definition and EFV_M segmentation (manual approach). Two observers used a software program with fully automated 3D pericardial border definition and EFV_A calculation (automated approach). EFV and time required for measuring EFV (including software processing time and manual optimization time) for each method were recorded. Intraobserver and interobserver reliability was assessed on the prototype software measurements. T test, Spearman's rho, and Bland-Altman plots were used for statistical analysis. The final EFV_A (with manual border optimization) was strongly correlated with the manual axial segmentation measurement (60.9 ± 33.2 mL vs. 65.8 ± 37.0 mL, rho = 0.970, P < 0.001). A mean of 3.9 ± 1.9 manual border edits were performed to optimize the automated process. The software prototype required significantly less time to perform the measurements (135.6 ± 24.6 s vs. 314.3 ± 76.3 s, P < 0.001) and showed high reliability (ICC > 0.9). Automated EFV_A quantification is an accurate and time-saving method for quantification of EFV compared to established manual axial segmentation methods. (orig.)

  4. Spatially resolved quantification of agrochemicals on plant surfaces using energy dispersive X-ray microanalysis.

    Science.gov (United States)

    Hunsche, Mauricio; Noga, Georg

    2009-12-01

    In the present study the principle of energy dispersive X-ray microanalysis (EDX), i.e. the detection of elements based on their characteristic X-rays, was used to localise and quantify organic and inorganic pesticides on enzymatically isolated fruit cuticles. Pesticides could be discriminated from the plant surface because of their distinctive elemental composition. Findings confirm the close relation between net intensity (NI) and area covered by the active ingredient (AI area). Using wide and narrow concentration ranges of glyphosate and glufosinate, respectively, results showed that quantification of AI requires the selection of appropriate regression equations while considering NI, peak-to-background (P/B) ratio, and AI area. The use of selected internal standards (ISs) such as Ca(NO(3))(2) improved the accuracy of the quantification slightly but led to the formation of particular, non-typical microstructured deposits. The suitability of SEM-EDX as a general technique to quantify pesticides was evaluated additionally on 14 agrochemicals applied at diluted or regular concentration. Among the pesticides tested, spatial localisation and quantification of AI amount could be done for inorganic copper and sulfur as well for the organic agrochemicals glyphosate, glufosinate, bromoxynil and mancozeb. (c) 2009 Society of Chemical Industry.

  5. Automated quantification of epicardial adipose tissue using CT angiography: evaluation of a prototype software

    International Nuclear Information System (INIS)

    Spearman, James V.; Silverman, Justin R.; Krazinski, Aleksander W.; Costello, Philip; Meinel, Felix G.; Geyer, Lucas L.; Schoepf, U.J.; Apfaltrer, Paul; Canstein, Christian; De Cecco, Carlo Nicola

    2014-01-01

    This study evaluated the performance of a novel automated software tool for epicardial fat volume (EFV) quantification compared to a standard manual technique at coronary CT angiography (cCTA). cCTA data sets of 70 patients (58.6 ± 12.9 years, 33 men) were retrospectively analysed using two different post-processing software applications. Observer 1 performed a manual single-plane pericardial border definition and EFV_M segmentation (manual approach). Two observers used a software program with fully automated 3D pericardial border definition and EFV_A calculation (automated approach). EFV and time required for measuring EFV (including software processing time and manual optimization time) for each method were recorded. Intraobserver and interobserver reliability was assessed on the prototype software measurements. T test, Spearman's rho, and Bland-Altman plots were used for statistical analysis. The final EFV_A (with manual border optimization) was strongly correlated with the manual axial segmentation measurement (60.9 ± 33.2 mL vs. 65.8 ± 37.0 mL, rho = 0.970, P < 0.001). A mean of 3.9 ± 1.9 manual border edits were performed to optimize the automated process. The software prototype required significantly less time to perform the measurements (135.6 ± 24.6 s vs. 314.3 ± 76.3 s, P < 0.001) and showed high reliability (ICC > 0.9). Automated EFV_A quantification is an accurate and time-saving method for quantification of EFV compared to established manual axial segmentation methods. (orig.)

  6. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.
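
    The underlying measurement, thresholding an image and reporting the area and intensity of the selected pixels, is independent of the particular software; a generic sketch on a synthetic 8-bit image:

        # Generic sketch of the thresholding/quantification step described above,
        # using a synthetic 8-bit fluorescence image rather than Photoshop tooling.
        import numpy as np

        rng = np.random.default_rng(5)
        image = rng.integers(0, 40, size=(512, 512))          # dim background
        image[200:260, 150:350] += 120                        # bright "vessel" region
        image = np.clip(image, 0, 255).astype(np.uint8)

        threshold = 100                                       # chosen intensity cut-off
        mask = image >= threshold

        area_fraction = mask.mean()                           # e.g. functional area fraction
        mean_intensity = image[mask].mean()                   # e.g. leakage surrogate
        print(f"area above threshold: {area_fraction:.1%}, mean intensity: {mean_intensity:.0f}")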

  7. Expedited quantification of mutant ribosomal RNA by binary deoxyribozyme (BiDz) sensors.

    Science.gov (United States)

    Gerasimova, Yulia V; Yakovchuk, Petro; Dedkova, Larisa M; Hecht, Sidney M; Kolpashchikov, Dmitry M

    2015-10-01

    Mutations in ribosomal RNA (rRNA) have traditionally been detected by the primer extension assay, which is a tedious and multistage procedure. Here, we describe a simple and straightforward fluorescence assay based on binary deoxyribozyme (BiDz) sensors. The assay uses two short DNA oligonucleotides that hybridize specifically to adjacent fragments of rRNA, one of which contains a mutation site. This hybridization results in the formation of a deoxyribozyme catalytic core that produces the fluorescent signal and amplifies it due to multiple rounds of catalytic action. This assay enables us to expedite semi-quantification of mutant rRNA content in cell cultures starting from whole cells, which provides information useful for optimization of culture preparation prior to ribosome isolation. The method requires less than a microliter of a standard Escherichia coli cell culture and decreases analysis time from several days (for primer extension assay) to 1.5 h with hands-on time of ∼10 min. It is sensitive to single-nucleotide mutations. The new assay simplifies the preliminary analysis of RNA samples and cells in molecular biology and cloning experiments and is promising in other applications where fast detection/quantification of specific RNA is required. © 2015 Gerasimova et al.; Published by Cold Spring Harbor Laboratory Press for the RNA Society.

  8. Quantification analysis of CT for aphasic patients

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Ooyama, H.; Hojo, K.; Tasaki, H.; Hanazono, T.; Sato, T.; Metoki, H.; Totsuka, M.; Oosumi, N.

    1987-02-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis).

  9. Rapid quantification and sex determination of forensic evidence materials.

    Science.gov (United States)

    Andréasson, Hanna; Allen, Marie

    2003-11-01

    DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.

  10. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Science.gov (United States)

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  11. NEW MODEL FOR QUANTIFICATION OF ICT DEPENDABLE ORGANIZATIONS RESILIENCE

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2011-03-01

    Full Text Available Today's business environment demands highly reliable organizations in every segment in order to remain competitive on the global market. In addition, the ICT sector has become irreplaceable in many fields of business, from communication to complex systems for process control and production. To fulfill these requirements and to develop further, many organizations worldwide are implementing a business paradigm called organizational resilience. Although resilience is a well-known term in many scientific fields, it is not well studied because of its complex nature. This paper develops a new model for the assessment and quantification of the resilience of ICT-dependent organizations.

  12. Network analysis for the visualization and analysis of qualitative data.

    Science.gov (United States)

    Pokorny, Jennifer J; Norman, Alex; Zanesco, Anthony P; Bauer-Wu, Susan; Sahdra, Baljinder K; Saron, Clifford D

    2018-03-01

    We present a novel way to visualize the coding of qualitative data that enables representation and analysis of connections between codes using graph theory and network analysis. Network graphs are created from the codes applied to a transcript or audio file, using the code names and their chronological locations. The resulting network is a representation of the coding data that characterizes the interrelations of codes. This approach enables quantification of qualitative codes using network analysis and facilitates examination of associations between network indices and other quantitative variables using common statistical procedures. Here, as a proof of concept, we applied this method to a set of interview transcripts that had been coded in two different ways, and the resultant network graphs were examined. The creation of network graphs allows researchers an opportunity to view and share their qualitative data in an innovative way that may provide new insights and enhance the transparency of the analytical process by which they reach their conclusions. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
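
    As a rough illustration of the construction described above, the sketch below builds a graph whose nodes are code names and whose edges link codes applied consecutively in a transcript. The code sequence and the rule of weighting repeated adjacencies are illustrative assumptions, not the authors' software.

```python
# Sketch: build a network graph from chronologically ordered qualitative codes.
# The code list below is invented; linking consecutively applied codes and
# weighting repeats is an assumed adjacency rule, not the authors' exact method.
import networkx as nx

codes_in_order = ["gratitude", "stress", "family", "stress", "coping", "family"]

G = nx.Graph()
for a, b in zip(codes_in_order, codes_in_order[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1          # repeated adjacency strengthens the tie
    else:
        G.add_edge(a, b, weight=1)

# Network indices that could then be related to quantitative variables
print(nx.degree_centrality(G))
print(nx.density(G))
```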

  13. Rapid Quantification of 25-Hydroxyvitamin D3 in Human Serum by Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry

    Science.gov (United States)

    Qi, Yulin; Müller, Miriam; Stokes, Caroline S.; Volmer, Dietrich A.

    2018-04-01

    LC-MS/MS is widely utilized today for quantification of vitamin D in biological fluids. However, mass spectrometric assays for vitamin D require very careful method optimization to achieve precise, accurate and interference-free analyses. Here, we explore chemical derivatization and matrix-assisted laser desorption/ionization (MALDI) as a rapid alternative for quantitative measurement of 25-hydroxyvitamin D3 in human serum, and compare it to results from LC-MS/MS. The method implemented an automated imaging step of each MALDI spot to locate areas of high intensity, avoid sweet-spot phenomena, and thus improve precision. There was no statistically significant difference in vitamin D quantification between the MALDI-MS/MS and LC-MS/MS: mean ± standard deviation for MALDI-MS—29.4 ± 10.3 ng/mL—versus LC-MS/MS—30.3 ± 11.2 ng/mL (P = 0.128)—for the sum of the 25-hydroxyvitamin D epimers. The MALDI-based assay avoided time-consuming chromatographic separation steps and was thus much faster than the LC-MS/MS assay. It also consumed less sample, required no organic solvents, and was readily automated. In this proof-of-concept study, MALDI-MS readily demonstrated its potential for mass spectrometric quantification of vitamin D compounds in biological fluids.

  14. Quantification of DNA in Neonatal Dried Blood Spots by Adenine Tandem Mass Spectrometry.

    Science.gov (United States)

    Durie, Danielle; Yeh, Ed; McIntosh, Nathan; Fisher, Lawrence; Bulman, Dennis E; Birnboim, H Chaim; Chakraborty, Pranesh; Al-Dirbashi, Osama Y

    2018-01-02

    Newborn screening programs have expanded to include molecular-based assays as first-tier tests, and the success of these assays depends on the quality and yield of DNA extracted from neonatal dried blood spots (DBS). To meet high-throughput and rapid turnaround time requirements, newborn screening laboratories adopted rapid DNA extraction methods that produce crude extracts. Quantification of DNA in neonatal DBS is not routinely performed due to technical challenges; however, it may enhance the performance of assays that are sensitive to the amount of input DNA. In this study, we developed a novel high-throughput method to quantify total DNA in DBS. It is based on specific acid-catalyzed depurination of DNA followed by mass spectrometric quantification of adenine. The amount of adenine was used to calculate the DNA quantity per 3.2 mm DBS. Reference intervals were established using archived neonatal DBS (n = 501), and a median of 130.6 ng of DNA per DBS was obtained, which is in agreement with literature values. Intra- and interday variation was assessed, and the limits of detection and quantification were 12.5 and 37.8 nmol/L adenine, respectively. We demonstrated that DNA from neonatal DBS can be successfully quantified in high-throughput settings using instruments currently deployed in NBS laboratories.
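
    The conversion from measured adenine to total DNA per punch can be sketched as below. The adenine fraction of human genomic DNA (roughly 29.5% of nucleotides) and the mean nucleotide residue mass (about 325 g/mol) are textbook approximations used only for illustration; they are not necessarily the calibration factors of the published assay, and the example adenine amount is hypothetical.

```python
# Sketch: convert measured adenine to total DNA per dried blood spot punch.
# The adenine fraction and mean nucleotide mass are textbook approximations,
# not the published assay's calibration; the input amount is hypothetical.
def dna_mass_ng(adenine_nmol, adenine_fraction=0.295, mean_nt_mass_g_per_mol=325.0):
    total_nucleotides_nmol = adenine_nmol / adenine_fraction
    return total_nucleotides_nmol * mean_nt_mass_g_per_mol   # nmol * g/mol = ng

# Example: 0.12 nmol adenine recovered from one 3.2 mm punch (hypothetical value)
print(f"{dna_mass_ng(0.12):.1f} ng DNA per punch")
```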

  15. Overlay networks toward information networking

    CERN Document Server

    Tarkoma, Sasu

    2010-01-01

    With their ability to solve problems in massive information distribution and processing, while keeping scaling costs low, overlay systems represent a rapidly growing area of R&D with important implications for the evolution of Internet architecture. Inspired by the author's articles on content based routing, Overlay Networks: Toward Information Networking provides a complete introduction to overlay networks. Examining what they are and what kind of structures they require, the text covers the key structures, protocols, and algorithms used in overlay networks. It reviews the current state of th

  16. Tomorrow's energy needs require intelligent networks

    International Nuclear Information System (INIS)

    Bitsch, R.

    1998-01-01

    With the European wide move towards increased competition and greater deregulation of the energy industry, has come a thrust for greater efficiency and understanding customer needs and external constraints such as the environment. This, in turn, has led to solutions which take advantage of the tremendous developments in information technology and on-line control systems which are described in this paper. Topics include intelligent networks, decentralised energy supplies and decentralised energy management. (UK)

  17. Nonlinearity and chaos in wireless network traffic

    International Nuclear Information System (INIS)

    Mukherjee, Somenath; Ray, Rajdeep; Samanta, Rajkumar; Khondekar, Mofazzal H.; Sanyal, Goutam

    2017-01-01

    The natural complexity of wireless mobile network traffic dynamics is assessed in this article by tracing the presence of nonlinearity and chaos in the profiles of daily peak-hour call arrivals and daily call drops at a suburban local mobile switching centre. Tools such as the Recurrence Plot and Recurrence Quantification Analysis (RQA) have been used to reveal the probable presence of non-stationarity, nonlinearity and chaoticity in the network traffic. Information Entropy (IE) and the 0–1 test have been employed to provide quantitative support for the findings. Both the daily peak-hour call arrival profile and the daily call drop profile exhibit non-stationarity, determinism and nonlinearity, with the former being more regular while the latter is chaotic.
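
    A minimal sketch of the recurrence-plot machinery mentioned above is given below: a time-delay embedding, a thresholded distance matrix (the recurrence plot), and the simplest RQA measure, the recurrence rate. The synthetic series, embedding parameters and threshold are illustrative assumptions; they do not reproduce the article's traffic data or parameter choices.

```python
# Sketch: recurrence plot and recurrence rate for a univariate time series.
# The series, embedding parameters and threshold are illustrative; real RQA
# would tune them (e.g. via mutual information and false nearest neighbours).
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.2 * rng.standard_normal(500)

def embed(x, dim=3, tau=4):
    """Time-delay embedding of a 1D series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

X = embed(series)
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
eps = 0.2 * dists.std()                 # recurrence threshold (illustrative)
R = dists <= eps                        # recurrence matrix = the recurrence plot

print(f"recurrence rate: {R.mean():.3f}")   # simplest RQA measure
```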

  18. Real-time PCR for the quantification of fungi in planta.

    Science.gov (United States)

    Klosterman, Steven J

    2012-01-01

    Methods enabling quantification of fungi in planta can be useful for a variety of applications. In combination with information on plant disease severity, indirect quantification of fungi in planta offers an additional tool in the screening of plants that are resistant to fungal diseases. In this chapter, a method is described for the quantification of DNA from a fungus in plant leaves using real-time PCR (qPCR). Although the method described entails quantification of the fungus Verticillium dahliae in lettuce leaves, the methodology described would be useful for other pathosystems as well. The method utilizes primers that are specific for amplification of a β-tubulin sequence from V. dahliae and a lettuce actin gene sequence as a reference for normalization. This approach enabled quantification of V. dahliae in the amount of 2.5 fg/ng of lettuce leaf DNA at 21 days following plant inoculation.
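
    A simplified sketch of this kind of qPCR quantification with a reference gene for normalization is shown below. The Cq values and standard-curve fits are hypothetical and are not taken from the chapter; a real assay would calibrate both curves from dilution series of fungal and plant DNA.

```python
# Sketch: relative quantification of fungal DNA in plant tissue from qPCR Cq
# values, using separate standard curves for the fungal target (beta-tubulin)
# and the plant reference (actin). All Cq values and curve fits are hypothetical.
def quantity(cq, slope, intercept):
    """Interpolate a standard curve Cq = slope*log10(amount) + intercept."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical standard-curve fits (a slope near -3.32 implies ~100% efficiency)
fungal_fg = quantity(cq=31.5, slope=-3.32, intercept=38.0)     # fg fungal DNA
plant_ng  = quantity(cq=22.0, slope=-3.40, intercept=30.5)     # ng plant DNA

print(f"{fungal_fg / plant_ng:.2f} fg fungal DNA per ng plant DNA")
```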

  19. Evaluation of Network Failure induced IPTV degradation in Metro Networks

    DEFF Research Database (Denmark)

    Wessing, Henrik; Berger, Michael Stübert; Yu, Hao

    2009-01-01

    In this paper, we evaluate future network services and classify them according to their network requirements. IPTV is used as a candidate service to evaluate the performance and requirements of Carrier Ethernet OAM update mechanisms. The latter is done through quality measurements using MDI...

  20. Segmentation and quantification of subcellular structures in fluorescence microscopy images using Squassh.

    Science.gov (United States)

    Rizk, Aurélien; Paul, Grégory; Incardona, Pietro; Bugarski, Milica; Mansouri, Maysam; Niemann, Axel; Ziegler, Urs; Berger, Philipp; Sbalzarini, Ivo F

    2014-03-01

    Detection and quantification of fluorescently labeled molecules in subcellular compartments is a key step in the analysis of many cell biological processes. Pixel-wise colocalization analyses, however, are not always suitable, because they do not provide object-specific information, and they are vulnerable to noise and background fluorescence. Here we present a versatile protocol for a method named 'Squassh' (segmentation and quantification of subcellular shapes), which is used for detecting, delineating and quantifying subcellular structures in fluorescence microscopy images. The workflow is implemented in freely available, user-friendly software. It works on both 2D and 3D images, accounts for the microscope optics and for uneven image background, computes cell masks and provides subpixel accuracy. The Squassh software enables both colocalization and shape analyses. The protocol can be applied in batch, on desktop computers or computer clusters. Basic computer-user skills and some experience with fluorescence microscopy are recommended to use the protocol successfully.

  1. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement

    International Nuclear Information System (INIS)

    Ferri, Giovane Lopes; Diniz Chaves, Gisele de Lorena; Ribeiro, Glaydston Mattos

    2015-01-01

    Highlights: • We propose a reverse logistics network for MSW involving waste pickers. • A generic facility location mathematical model was validated in a Brazilian city. • The results enable prediction of the capacity of screening and storage centres (SSC). • We minimise the costs for transporting MSW with screening and storage centres. • The use of SSC can be a potential source of revenue and a better use of MSW. - Abstract: This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of allocating MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced against the reduction in transport costs, allowing the inclusion of waste pickers in the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW that simultaneously involves legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries whose realities are similar to Brazil's: accelerated urbanisation without adequate planning for solid waste management, combined with the strong presence of waste pickers that, through the

  2. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement

    Energy Technology Data Exchange (ETDEWEB)

    Ferri, Giovane Lopes, E-mail: giovane.ferri@aluno.ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Diniz Chaves, Gisele de Lorena, E-mail: gisele.chaves@ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Ribeiro, Glaydston Mattos, E-mail: glaydston@pet.coppe.ufrj.br [Transportation Engineering Programme, Federal University of Rio de Janeiro – UFRJ, Centro de Tecnologia, Bloco H, Sala 106, Cidade Universitária, Rio de Janeiro, 21949-900 (Brazil)

    2015-06-15

    Highlights: • We propose a reverse logistics network for MSW involving waste pickers. • A generic facility location mathematical model was validated in a Brazilian city. • The results enable prediction of the capacity of screening and storage centres (SSC). • We minimise the costs for transporting MSW with screening and storage centres. • The use of SSC can be a potential source of revenue and a better use of MSW. - Abstract: This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of allocating MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced against the reduction in transport costs, allowing the inclusion of waste pickers in the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW that simultaneously involves legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries whose realities are similar to Brazil's: accelerated urbanisation without adequate planning for solid waste management, combined with the strong presence of waste pickers that, through the

  3. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    Directory of Open Access Journals (Sweden)

    Mohammad Abdur Razzaque

    2014-12-01

    Full Text Available Wireless body sensor networks (WBSNs for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS, in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network’s QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.

  4. Real-time PCR assays for hepatitis B virus DNA quantification may require two different targets.

    Science.gov (United States)

    Liu, Chao; Chang, Le; Jia, Tingting; Guo, Fei; Zhang, Lu; Ji, Huimin; Zhao, Junpeng; Wang, Lunan

    2017-05-12

    Quantification of hepatitis B virus (HBV) DNA plays a critical role in the management of chronic HBV infections. However, HBV is a DNA virus with high levels of genetic variation, and drug-resistant mutations have emerged with the use of antiviral drugs. If a mutation causes a sequence mismatch in the primer or probe region targeted by a commercial DNA quantification kit, this can lead to an underestimation of the sample's viral load. The aim of this study was to determine whether commercial kits, which use only one pair of primers and a single probe, accurately quantify HBV DNA levels, and to develop an improved duplex real-time PCR assay. We developed a new duplex real-time PCR assay that used two pairs of primers and two probes based on the conserved S and C regions of the HBV genome. We performed quantitative HBV DNA detection on clinical samples and compared the results of our duplex real-time PCR assay with the COBAS TaqMan HBV Test version 2 and the Daan real-time PCR assay. The target region of each discordant sample was amplified, sequenced, and validated using plasmids. The results of the duplex real-time PCR were in good accordance with the commercial COBAS TaqMan HBV Test version 2 and Daan real-time PCR assays. We showed that the viral loads of two samples from Chinese HBV infections were underestimated when quantified with the Roche kit because of a mismatch between the viral sequence and the kit's reverse primer. The HBV DNA levels of six samples were undervalued by the duplex real-time PCR assay of the C region because of mutations in the C-region primer binding site. We developed a new duplex real-time PCR assay, and its results were similar to those of the commercial kits. The HBV DNA level can be undervalued when the COBAS TaqMan HBV Test version 2 is used for Chinese HBV infections, owing to a mismatch with the primer/probe. A duplex real-time PCR assay based on the S and C regions could solve this problem to some extent.

  5. A review of structural and functional brain networks: small world and atlas.

    Science.gov (United States)

    Yao, Zhijun; Hu, Bin; Xie, Yuanwei; Moore, Philip; Zheng, Jiaxiang

    2015-03-01

    Brain networks can be divided into two categories: structural and functional networks. Many neuroscience studies have reported that complex brain networks are characterized by small-world or scale-free properties. The identification of nodes is the key factor in studying network properties at the macro-, micro- or mesoscale in both structural and functional networks. In studies of brain networks, nodes are always determined by atlases. Therefore, the selection of atlases is critical, and appropriate atlases help to combine the analyses of structural and functional networks. Currently, some problems still exist in the construction or use of atlases, often caused by the segmentation or parcellation of the brain. We suggest that the quantification of brain networks may be affected by the selection of atlases to a large extent. In the process of building atlases, the influences of single subjects and of groups should be balanced. In this article, we focus on the effects of atlases on the analysis of brain networks and on improved parcellations based on tractography or connectivity.

  6. Strategy study of quantification harmonization of SUV in PET/CT images

    International Nuclear Information System (INIS)

    Fischer, Andreia Caroline Fischer da Silveira

    2014-01-01

    In clinical practice, PET/CT images are often analyzed qualitatively, by visual comparison of tumor lesion and normal tissue uptake, and semi-quantitatively, by means of a parameter called the SUV (Standardized Uptake Value). To ensure that longitudinal studies acquired on different scanners are interchangeable and that quantitative information is comparable, it is necessary to establish a strategy to harmonize SUV quantification. The aim of this study is to evaluate a strategy for harmonizing the quantification of PET/CT images acquired with different scanner models and manufacturers. For this purpose, a survey of the technical characteristics of the equipment and the clinical image acquisition protocols of different PET/CT services in the state of Rio Grande do Sul was conducted. For each scanner, the accuracy of SUV quantification and the Recovery Coefficient (RC) curves were determined using the clinically relevant reconstruction parameters available. From these data, harmonized performance specifications among the evaluated scanners were identified, as well as the algorithm that produces, for each one, the most accurate quantification. Finally, the most appropriate reconstruction parameters to harmonize SUV quantification in each scanner, either regionally or internationally, were identified. It was found that the RC values of the analyzed scanners were overestimated by up to 38%, particularly for objects larger than 17 mm. These results demonstrate the need for further optimization, through modification of the reconstruction parameters and even a change of the reconstruction algorithm used in each scanner. It was observed that there is a decoupling between the best image for qualitative PET/CT analysis and the best image for quantification studies. Thus, the choice of reconstruction method should be tied to the purpose of the PET/CT study in question, since the same reconstruction algorithm is not adequate, in one scanner, for qualitative

  7. Brain networks: small-worlds, after all?

    International Nuclear Information System (INIS)

    Muller, Lyle; Destexhe, Alain; Rudolph-Lilith, Michelle

    2014-01-01

    Since its introduction, the ‘small-world’ effect has played a central role in network science, particularly in the analysis of the complex networks of the nervous system. From the cellular level to that of interconnected cortical regions, many analyses have revealed small-world properties in the networks of the brain. In this work, we revisit the quantification of small-worldness in neural graphs. We find that neural graphs fall into the ‘borderline’ regime of small-worldness, residing close to that of a random graph, especially when the degree sequence of the network is taken into account. We then apply recently introduced analytical expressions for clustering and distance measures to study this borderline small-worldness regime. We derive theoretical bounds for the minimal and maximal small-worldness index for a given graph and, by semi-analytical means, study the small-worldness index itself. With this approach, we find that graphs with small-worldness equivalent to that observed in experimental data are dominated by their random component. These results provide the first thorough analysis suggesting that neural graphs may reside far away from the maximally small-world regime. (paper)
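
    For readers unfamiliar with the index discussed above, the sketch below computes the conventional small-worldness measure sigma = (C/C_rand)/(L/L_rand) for a toy Watts–Strogatz graph against a size- and density-matched random reference. Using a single Erdős–Rényi reference (rather than an ensemble or the degree-preserving null model emphasized in the paper) is a simplifying assumption.

```python
# Sketch: conventional small-worldness index sigma = (C/C_rand)/(L/L_rand) for a
# toy graph. Graph sizes and the single random reference are illustrative choices.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=1)
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
if not nx.is_connected(R):                       # keep the giant component
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

sigma = (C / C_rand) / (L / L_rand)
print(f"C={C:.3f}  L={L:.2f}  sigma={sigma:.1f}   (sigma >> 1: small-world)")
```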

  8. Brain networks: small-worlds, after all?

    Energy Technology Data Exchange (ETDEWEB)

    Muller, Lyle; Destexhe, Alain; Rudolph-Lilith, Michelle [Unité de Neurosciences, Information et Complexité (UNIC), Centre National de la Recherche Scientifique (CNRS), 1 Avenue de la Terrasse, Gif-sur-Yvette (France)

    2014-10-01

    Since its introduction, the ‘small-world’ effect has played a central role in network science, particularly in the analysis of the complex networks of the nervous system. From the cellular level to that of interconnected cortical regions, many analyses have revealed small-world properties in the networks of the brain. In this work, we revisit the quantification of small-worldness in neural graphs. We find that neural graphs fall into the ‘borderline’ regime of small-worldness, residing close to that of a random graph, especially when the degree sequence of the network is taken into account. We then apply recently introduced analytical expressions for clustering and distance measures to study this borderline small-worldness regime. We derive theoretical bounds for the minimal and maximal small-worldness index for a given graph and, by semi-analytical means, study the small-worldness index itself. With this approach, we find that graphs with small-worldness equivalent to that observed in experimental data are dominated by their random component. These results provide the first thorough analysis suggesting that neural graphs may reside far away from the maximally small-world regime. (paper)

  9. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements, given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee a minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented, along with an example that demonstrates the application of the approach.

  10. Verifying cell loss requirements in high-speed communication networks

    Directory of Open Access Journals (Sweden)

    Kerry W. Fendick

    1998-01-01

    Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters should often be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss model with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
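
    A minimal sketch of the fitting step described above — grouping losses into clusters and estimating the cluster arrival rate and a geometric batch-size parameter — is given below. The loss timestamps and the gap used to delimit clusters are illustrative assumptions, not values from the paper.

```python
# Sketch: fitting a batch (clustered) Poisson model to a stream of loss times.
# Losses separated by less than `gap` are treated as one cluster; the gap value
# and the loss timestamps are illustrative assumptions.
import numpy as np

loss_times = np.array([0.10, 0.11, 0.12, 5.30, 9.72, 9.73, 9.74, 9.75, 14.2])
gap = 0.5   # losses closer than this are considered part of the same cluster

# Split the loss stream into clusters of nearby losses
cluster_breaks = np.where(np.diff(loss_times) > gap)[0] + 1
clusters = np.split(loss_times, cluster_breaks)

T = loss_times[-1] - loss_times[0]                 # observation window
batch_rate = len(clusters) / T                     # Poisson rate of clusters
mean_batch = np.mean([len(c) for c in clusters])   # mean losses per cluster
p_geometric = 1.0 / mean_batch                     # geometric batch-size parameter

print(f"{len(clusters)} clusters, rate={batch_rate:.3f}/s, "
      f"mean batch size={mean_batch:.2f}, geometric p={p_geometric:.2f}")
```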

  11. Energy efficiency in future wireless networks: cognitive radio standardization requirements

    CSIR Research Space (South Africa)

    Masonta, M

    2012-09-01

    Full Text Available Energy consumption of mobile and wireless networks and devices is significant, indirectly increasing greenhouse gas emissions and energy costs for operators. Cognitive radio (CR) solutions can save energy for such networks and devices; moreover...

  12. Planar imaging quantification using 3D attenuation correction data and Monte Carlo simulated buildup factors

    International Nuclear Information System (INIS)

    Miller, C.; Filipow, L.; Jackson, S.; Riauka, T.

    1996-01-01

    A new method to correct for attenuation and the buildup of scatter in planar imaging quantification is presented. The method is based on the combined use of 3D density information provided by computed tomography to correct for attenuation and the application of Monte Carlo simulated buildup factors to correct for buildup in the projection pixels. CT and nuclear medicine images were obtained for a purpose-built nonhomogeneous phantom that models the human anatomy in the thoracic and abdominal regions. The CT transverse slices of the phantom were converted to a set of consecutive density maps. An algorithm was developed that projects the 3D information contained in the set of density maps to create opposing pairs of accurate 2D correction maps that were subsequently applied to planar images acquired from a dual-head gamma camera. A comparison of results obtained by the new method and the geometric mean approach based on published techniques is presented for some of the source arrangements used. Excellent results were obtained for various source - phantom configurations used to evaluate the method. Activity quantification of a line source at most locations in the nonhomogeneous phantom produced errors of less than 2%. Additionally, knowledge of the actual source depth is not required for accurate activity quantification. Quantification of volume sources placed in foam, Perspex and aluminium produced errors of less than 7% for the abdominal and thoracic configurations of the phantom. (author)
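
    The depth independence mentioned above (no knowledge of the actual source depth is required) follows from the standard conjugate-view geometric-mean relation, illustrated numerically below. The attenuation coefficient, phantom thickness and count level are arbitrary illustrative values, and the buildup factors that the paper adds via Monte Carlo simulation are omitted from this sketch.

```python
# Sketch: standard conjugate-view relation behind depth-independent planar
# quantification. For a point source at depth d in a uniform attenuator of
# thickness T, anterior and posterior counts attenuate as exp(-mu*d) and
# exp(-mu*(T-d)); their geometric mean depends only on T, not on d.
import numpy as np

mu, T, I0 = 0.15, 20.0, 1.0e5          # 1/cm, cm, unattenuated counts (illustrative)
for d in (2.0, 10.0, 18.0):            # three different source depths
    I_ant = I0 * np.exp(-mu * d)
    I_post = I0 * np.exp(-mu * (T - d))
    gm = np.sqrt(I_ant * I_post)       # = I0 * exp(-mu*T/2) for every depth
    print(f"depth {d:4.1f} cm: geometric mean = {gm:.1f}")
print(f"I0*exp(-mu*T/2)      = {I0 * np.exp(-mu * T / 2):.1f}")
```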

  13. Renormalization group theory for percolation in time-varying networks.

    Science.gov (United States)

    Karschau, Jens; Zimmerling, Marco; Friedrich, Benjamin M

    2018-05-22

    Motivated by multi-hop communication in unreliable wireless networks, we present a percolation theory for time-varying networks. We develop a renormalization group theory for a prototypical network on a regular grid, where individual links switch stochastically between active and inactive states. The question whether a given source node can communicate with a destination node along paths of active links is equivalent to a percolation problem. Our theory maps the temporal existence of multi-hop paths on an effective two-state Markov process. We show analytically how this Markov process converges towards a memoryless Bernoulli process as the hop distance between source and destination node increases. Our work extends classical percolation theory to the dynamic case and elucidates temporal correlations of message losses. Quantification of temporal correlations has implications for the design of wireless communication and control protocols, e.g. in cyber-physical systems such as self-organized swarms of drones or smart traffic networks.
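
    The convergence toward a memoryless Bernoulli process can be illustrated with a small Monte Carlo experiment: each link of a linear multi-hop path switches as an independent two-state Markov chain, and the lag-1 autocorrelation of the end-to-end availability shrinks as the hop count grows. The switching probabilities, path lengths and use of a simple chain (rather than the regular grid analysed in the paper) are illustrative assumptions.

```python
# Sketch: Monte Carlo view of multi-hop path availability when each link is an
# independent two-state (on/off) Markov chain. Switching rates, hop counts and
# the lag-1 autocorrelation as a memory proxy are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
p_on_given_off, p_off_given_on = 0.2, 0.1     # per-step switching probabilities
steps = 20000

def simulate_path(hops):
    state = np.ones(hops, dtype=bool)         # all links start active
    path_up = np.empty(steps, dtype=bool)
    for t in range(steps):
        u = rng.random(hops)
        state = np.where(state, u > p_off_given_on, u < p_on_given_off)
        path_up[t] = state.all()
    return path_up

for hops in (1, 4, 8):
    x = simulate_path(hops).astype(float)
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]   # residual temporal correlation
    print(f"{hops} hops: availability={x.mean():.3f}, lag-1 corr={lag1:.3f}")
```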

  14. Exploitation of immunofluorescence for the quantification and characterization of small numbers of Pasteuria endospores.

    Science.gov (United States)

    Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G

    2006-12-01

    The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, and these mainly focus on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a threshold of detection of 210 endospores g(-1) sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.

  15. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  16. High performance liquid chromatography-charged aerosol detection applying an inverse gradient for quantification of rhamnolipid biosurfactants.

    Science.gov (United States)

    Behrens, Beate; Baune, Matthias; Jungkeit, Janek; Tiso, Till; Blank, Lars M; Hayen, Heiko

    2016-07-15

    A method using high performance liquid chromatography coupled to charged-aerosol detection (HPLC-CAD) was developed for the quantification of rhamnolipid biosurfactants. Qualitative sample composition was determined by liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS). The relative quantification of different derivatives of rhamnolipids including di-rhamnolipids, mono-rhamnolipids, and their precursors 3-(3-hydroxyalkanoyloxy)alkanoic acids (HAAs) differed for two compared LC-MS instruments and revealed instrument dependent responses. Our here reported HPLC-CAD method provides uniform response. An inverse gradient was applied for the absolute quantification of rhamnolipid congeners to account for the detector's dependency on the solvent composition. The CAD produces a uniform response not only for the analytes but also for structurally different (nonvolatile) compounds. It was demonstrated that n-dodecyl-β-d-maltoside or deoxycholic acid can be used as alternative standards. The method of HPLC-ultra violet (UV) detection after a derivatization of rhamnolipids and HAAs to their corresponding phenacyl esters confirmed the obtained results but required additional, laborious sample preparation steps. Sensitivity determined as limit of detection and limit of quantification for four mono-rhamnolipids was in the range of 0.3-1.0 and 1.2-2.0μg/mL, respectively, for HPLC-CAD and 0.4 and 1.5μg/mL, respectively, for HPLC-UV. Linearity for HPLC-CAD was at least 0.996 (R(2)) in the calibrated range of about 1-200μg/mL. Hence, the here presented HPLC-CAD method allows absolute quantification of rhamnolipids and derivatives. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Reverse transcriptase real-time PCR for detection and quantification of viable Campylobacter jejuni directly from poultry faecal samples

    DEFF Research Database (Denmark)

    Bui, Thanh Xuan; Wolff, Anders; Madsen, Mogens

    2012-01-01

    Campylobacter spp. is the most common cause of bacterial diarrhoea in humans worldwide. Therefore, rapid and reliable methods for detection and quantification of this pathogen are required. In this study, we have developed a reverse transcription quantitative real-time PCR (RT-qPCR) for detection a...

  18. Modeling In-Network Aggregation in VANETs

    NARCIS (Netherlands)

    Dietzel, Stefan; Kargl, Frank; Heijenk, Geert; Schaub, Florian

    2011-01-01

    The multitude of applications envisioned for vehicular ad hoc networks requires efficient communication and dissemination mechanisms to prevent network congestion. In-network data aggregation promises to reduce bandwidth requirements and enable scalability in large vehicular networks. However, most

  19. Comparison of Suitability of the Most Common Ancient DNA Quantification Methods.

    Science.gov (United States)

    Brzobohatá, Kristýna; Drozdová, Eva; Smutný, Jiří; Zeman, Tomáš; Beňuš, Radoslav

    2017-04-01

    Ancient DNA (aDNA) extracted from historical bones is damaged and fragmented into short segments, present in low quantity, and usually copurified with microbial DNA. A wide range of DNA quantification methods are available. The aim of this study was to compare the five most common DNA quantification methods for aDNA. Quantification methods were tested on DNA extracted from skeletal material originating from an early medieval burial site. The tested methods included ultraviolet (UV) absorbance, real-time quantitative polymerase chain reaction (qPCR) based on SYBR ® green detection, real-time qPCR based on a forensic kit, quantification via fluorescent dyes bonded to DNA, and fragmentary analysis. Differences between groups were tested using a paired t-test. Methods that measure total DNA present in the sample (NanoDrop ™ UV spectrophotometer and Qubit ® fluorometer) showed the highest concentrations. Methods based on real-time qPCR underestimated the quantity of aDNA. The most accurate method of aDNA quantification was fragmentary analysis, which also allows DNA quantification of the desired length and is not affected by PCR inhibitors. Methods based on the quantification of the total amount of DNA in samples are unsuitable for ancient samples as they overestimate the amount of DNA presumably due to the presence of microbial DNA. Real-time qPCR methods give undervalued results due to DNA damage and the presence of PCR inhibitors. DNA quantification methods based on fragment analysis show not only the quantity of DNA but also fragment length.
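
    The statistical comparison described above (paired testing of methods applied to the same extracts) can be sketched as follows; the concentration values are invented for illustration and do not come from the study.

```python
# Sketch: comparing two DNA quantification methods applied to the same extracts
# with a paired t-test, mirroring the study design. Values are hypothetical.
import numpy as np
from scipy import stats

total_dna_ng_per_ul  = np.array([2.1, 0.8, 1.5, 3.2, 0.9, 1.1])   # e.g. fluorometry
qpcr_dna_ng_per_ul   = np.array([0.4, 0.1, 0.3, 0.9, 0.2, 0.2])   # amplifiable DNA

t_stat, p_value = stats.ttest_rel(total_dna_ng_per_ul, qpcr_dna_ng_per_ul)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```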

  20. A spatially distributed isotope sampling network in a snow-dominated catchment for the quantification of snow meltwater

    Science.gov (United States)

    Rücker, Andrea; Boss, Stefan; Von Freyberg, Jana; Zappa, Massimiliano; Kirchner, James

    2017-04-01

    In mountainous catchments with seasonal snowpacks, river discharge in downstream valleys is largely sustained by snowmelt in spring and summer. Future climate warming will likely reduce snow volumes and lead to earlier and faster snowmelt in such catchments. This, in turn, may increase the risk of summer low flows and hydrological droughts. Improved runoff predictions are thus required in order to adapt water management to future climatic conditions and to assure the availability of fresh water throughout the year. However, a detailed understanding of the hydrological processes is crucial to obtain robust predictions of river streamflow. This in turn requires fingerprinting source areas of streamflow, tracing water flow pathways, and measuring timescales of catchment storage, using tracers such as stable water isotopes (18O, 2H). For this reason, we have established an isotope sampling network in the Alptal, a snowmelt-dominated catchment (46.4 km2) in central Switzerland, as part of the SREP-Drought project (Snow Resources and the Early Prediction of hydrological DROUGHT in mountainous streams). Precipitation and snow cores are analyzed for their isotopic signature at daily or weekly intervals. Three-week bulk samples of precipitation are also collected on a transect along the Alptal valley bottom, and along an elevational transect perpendicular to the Alptal valley axis. Streamwater samples are taken at the catchment outlet as well as in two small nested sub-catchments. In addition, an automatic snow lysimeter system was developed, which also facilitates real-time monitoring of snowmelt events, system status and environmental conditions (air and soil temperature). Three lysimeter systems were installed within the catchment, in one forested site and two open field sites at different elevations, and have been operational since November 2016. We will present the isotope time series from our regular sampling network, as well as initial results from our snowmelt lysimeter sites. Our

  1. WE-G-17A-03: MRIgRT: Quantification of Organ Motion

    International Nuclear Information System (INIS)

    Stanescu, T; Tadic, T; Jaffray, D

    2014-01-01

    Purpose: To develop an MRI-based methodology and tools required for the quantification of organ motion on a dedicated MRI-guided radiotherapy system. A three-room facility, consisting of a TrueBeam 6X linac vault, a 1.5T MR suite and a brachytherapy interventional room, is currently under commissioning at our institution. The MR scanner can move and image in either room for diagnostic and treatment guidance purposes. Methods: A multi-imaging modality (MR, kV) phantom, featuring programmable 3D simple and complex motion trajectories, was used for the validation of several image sorting algorithms. The testing was performed on MRI (e.g. TrueFISP, TurboFLASH), 4D CT and 4D CBCT. The image sorting techniques were based on a) direct image pixel manipulation into columns or rows, b) single and aggregated pixel data tracking and c) computer vision techniques for global pixel analysis. Subsequently, the motion phantom and sorting algorithms were utilized for commissioning of MR fast imaging techniques for 2D-cine and 4D data rendering. MR imaging protocols were optimized (e.g. readout gradient strength vs. SNR) to minimize the presence of susceptibility-induced distortions, which were reported through phantom experiments and numerical simulations. The system-related distortions were also quantified (dedicated field phantom) and treated as systematic shifts where relevant. Results: Image sorting algorithms were validated for specific MR-based applications such as quantification of organ motion, local data sampling, and 4D MRI for pre-RT delivery with accuracy better than the raw image pixel size (e.g. 1 mm). MR fast imaging sequences were commissioned and imaging strategies were developed to mitigate spatial artifacts with minimal penalty on the image spatial and temporal sampling. Workflows (e.g. liver) were optimized to include the new motion quantification tools for RT planning and daily patient setup verification. Conclusion: Comprehensive methods were developed

  2. Network topology analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Kalb, Jeffrey L.; Lee, David S.

    2008-01-01

    Emerging high-bandwidth, low-latency network technology has made network-based architectures both feasible and potentially desirable for use in satellite payload architectures. The selection of network topology is a critical component when developing these multi-node or multi-point architectures. This study examines network topologies and their effect on overall network performance. Numerous topologies were reviewed against a number of performance, reliability, and cost metrics. This document identifies a handful of good network topologies for satellite applications and the metrics used to justify them as such. Since often multiple topologies will meet the requirements of the satellite payload architecture under development, the choice of network topology is not easy, and in the end the choice of topology is influenced by both the design characteristics and requirements of the overall system and the experience of the developer.
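
    As a rough illustration of this kind of topology screening, the sketch below scores a few textbook topologies on simple performance and robustness proxies. The candidate topologies, node count and metrics are illustrative assumptions; they are not the topologies or metrics evaluated in the report.

```python
# Sketch: scoring candidate network topologies on simple performance and
# robustness proxies (link count, diameter, average path length, connectivity).
# The candidates and the 16-node size are illustrative, not those of the study.
import networkx as nx

candidates = {
    "ring":      nx.cycle_graph(16),
    "star":      nx.star_graph(15),        # 1 hub + 15 leaves = 16 nodes
    "hypercube": nx.hypercube_graph(4),    # 16 nodes
    "full mesh": nx.complete_graph(16),
}

for name, G in candidates.items():
    print(f"{name:10s} links={G.number_of_edges():3d} "
          f"diameter={nx.diameter(G)} "
          f"avg path={nx.average_shortest_path_length(G):.2f} "
          f"connectivity={nx.node_connectivity(G)}")
```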

  3. Telecommunication networks

    CERN Document Server

    Iannone, Eugenio

    2011-01-01

    Many argue that telecommunications network infrastructure is the most impressive and important technology ever developed. Analyzing the telecom market's constantly evolving trends, research directions, infrastructure, and vital needs, Telecommunication Networks responds with revolutionized engineering strategies to optimize network construction. Omnipresent in society, telecom networks integrate a wide range of technologies. These include quantum field theory for the study of optical amplifiers, software architectures for network control, abstract algebra required to design error correction co

  4. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as the computational simulation codes to which they are applied.

  5. Technical requirements of a social networking platform for senior citizens.

    Science.gov (United States)

    Demski, Hans; Hildebrand, Claudia; López Bolós, José; Tiedge, Winfried; Wengel, Stefanie; O Broin, Daire; Palmer, Ross

    2012-01-01

    Feeling an integral part of a social community adds to quality of life. Elderly people who find it difficult to actively join activities are often threatened by isolation. Social networking can enable communication, and sharing activities makes it easier to set up and maintain contacts. This paper describes the development of a social networking platform and of activities such as gaming and exergaming, all of which aim to facilitate social interaction. It reports on the particular challenges that need to be addressed when creating a social networking platform specifically designed to meet the needs of the elderly.

  6. The Network of U.S. Mutual Fund Investments: Diversification, Similarity and Fragility throughout the Global Financial Crisis

    OpenAIRE

    Delpini, Danilo; Battiston, Stefano; Caldarelli, Guido; Riccaboni, Massimo

    2018-01-01

    Network theory proved recently to be useful in the quantification of many properties of financial systems. The analysis of the structure of investment portfolios is a major application since their eventual correlation and overlap impact the actual risk diversification by individual investors. We investigate the bipartite network of US mutual fund portfolios and their assets. We follow its evolution during the Global Financial Crisis and analyse the interplay between diversification, as unders...

  7. The Quantification Process for the PRiME-U34i

    International Nuclear Information System (INIS)

    Hwang, Mee-Jeong; Han, Sang-Hoon; Yang, Joon-Eon

    2006-01-01

    In this paper, we introduce the quantification process for PRiME-U34i, the merged model of ETs (Event Trees) and FTs (Fault Trees) for the level 1 internal PSA of UCN 3 and 4. PRiME-U34i has one top event; therefore, the quantification process is simplified compared to the previous one. In the past, a text file called a user file was used to control the quantification process. However, this user file is so complicated that it is difficult for a non-expert to understand. Moreover, in the past PSA, the ETs and FTs were separate, whereas in PRiME-U34i they are merged, so the quantification process is different. This paper is composed of five sections. Section 2 introduces the construction of the one-top model. Section 3 shows the quantification process used in PRiME-U34i. Section 4 describes the post-processing. The last section presents the conclusions.

  8. A New Resilience Measure for Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Ruiying Li

    2017-01-01

    Full Text Available Currently, supply chain networks can span the whole world, and any disruption of these networks may cause economic losses, decreases in sales and unsustainable supplies. Resilience, the ability of the system to withstand disruption and return to a normal state quickly, has become a new challenge during supply chain network design. This paper defines a new resilience measure as the ratio of the integral of the normalized system performance within its maximum allowable recovery time after the disruption to the integral of the performance in the normal state. Using the maximum allowable recovery time of the system as the time interval under consideration, this measure allows the resilience of different systems to be compared on the same relative scale, and to be used both when the system can and when it cannot recover within the given time. Two specific resilience measures, the resilience based on the amount of product delivered and the resilience based on the average delivery distance, are provided for supply chain networks. To estimate the resilience of a given supply chain network, a resilience simulation method is proposed based on the Monte Carlo method. A four-layered hierarchical mobile phone supply chain network is used to illustrate the resilience quantification process and show how network structure affects the resilience of supply chain networks.
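
    The measure defined above can be written as R = (1/T_max) times the integral of P(t)/P_normal over the maximum allowable recovery time, so that R = 1 means no performance loss. The sketch below evaluates it for a hypothetical disruption-and-recovery trajectory; the performance curve, recovery profile and T_max are illustrative assumptions.

```python
# Sketch: resilience as the integral of normalized performance over the maximum
# allowable recovery time, divided by the integral of normal performance over
# the same interval. The trajectory and T_max are hypothetical.
import numpy as np

def trapezoid(y, x):
    """Simple trapezoidal integration (avoids version-specific numpy names)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

t = np.linspace(0.0, 10.0, 201)               # hours after the disruption
P_normal = 100.0                              # normal throughput (units/hour)
# Hypothetical trajectory: performance drops to 40% and recovers linearly by t = 6 h
P = np.where(t < 6.0, 40.0 + (P_normal - 40.0) * t / 6.0, P_normal)

T_max = 8.0                                   # maximum allowable recovery time (h)
mask = t <= T_max
R = trapezoid(P[mask] / P_normal, t[mask]) / T_max
print(f"resilience R = {R:.3f}")              # R = 1.0 would mean no loss at all
```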

  9. Characterization of 3D PET systems for accurate quantification of myocardial blood flow

    OpenAIRE

    Renaud, Jennifer M.; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Éric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C.; Turkington, Timothy G

    2016-01-01

    Three-dimensional (3D) mode imaging is the current standard for positron emission tomography-computed tomography (PET-CT) systems. Dynamic imaging for quantification of myocardial blood flow (MBF) with short-lived tracers, such as Rb-82- chloride (Rb-82), requires accuracy to be maintained over a wide range of isotope activities and scanner count-rates. We propose new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative...

  10. Issues connected with indirect cost quantification: a focus on the transportation system

    Science.gov (United States)

    Křivánková, Zuzana; Bíl, Michal; Kubeček, Jan; Vodák, Rostislav

    2017-04-01

    Transportation and communication networks in general are vital parts of modern society. The economy relies heavily on transportation system performance, and a large number of people commute to work regularly. Stockpiles in many companies are being reduced as just-in-time production relies on the transportation network to supply resources on time. Natural hazards have the potential to disturb transportation systems. Earthquakes, flooding or landsliding are examples of high-energy processes capable of causing direct losses (i.e. physical damage to the infrastructure). We have focused on the quantification of the indirect costs of natural hazards, which are not easy to estimate. Indirect losses can also emerge from low-energy meteorological hazards that only seldom cause direct losses, e.g. glaze ice or snowfall. Whereas evidence of repair work and general direct costs usually exists or can be estimated, indirect costs are much more difficult to identify, particularly when they are not covered by insurance agencies. The delimitation of alternative routes (detours) is the most frequent response to blocked road links. Indirect costs can then be related to increased fuel consumption and additional operating costs. Detours usually result in prolonged travel times, so indirect cost quantification also has to account for the value of time; the costs of delay are, however, a nonlinear function of travel time. The existence of an alternative transportation pattern may also result in an increased number of traffic crashes. This topic has not been studied in depth, but an increase in traffic crashes has been reported when people suddenly changed their traffic modes, e.g. when air traffic was not possible. The lost user benefit from trips that were cancelled or suppressed is also difficult to quantify. Several approaches, based on post-event questionnaire surveys, have been applied to communities and companies affected by transportation accessibility

  11. Effects of contact network structure on epidemic transmission trees: implications for data required to estimate network structure.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2018-01-30

    Understanding the dynamics of disease spread is key to developing effective interventions to control or prevent an epidemic. The structure of the network of contacts over which the disease spreads has been shown to have a strong influence on the outcome of the epidemic, but an open question remains as to whether it is possible to estimate contact network features from data collected in an epidemic. The approach taken in this paper is to examine the distributions of epidemic outcomes arising from epidemics on networks with particular structural features to assess whether that structure could be measured from epidemic data and what other constraints might be needed to make the problem identifiable. To this end, we vary the network size, mean degree, and transmissibility of the pathogen, as well as the network feature of interest: clustering, degree assortativity, or attribute-based preferential mixing. We record several standard measures of the size and spread of the epidemic, as well as measures that describe the shape of the transmission tree in order to ascertain whether there are detectable signals in the final data from the outbreak. The results suggest that there is potential to estimate contact network features from transmission trees or pure epidemic data, particularly for diseases with high transmissibility or for which the relevant contact network is of low mean degree. Copyright © 2017 John Wiley & Sons, Ltd.
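
    The kind of experiment described above — varying a single network feature while holding size and mean degree fixed, then recording epidemic outcomes — can be sketched as follows for clustering. The network sizes, transmission and recovery probabilities, and the reported summary statistics are illustrative assumptions and far simpler than the study's full design.

```python
# Sketch: discrete-time SIR on two networks with the same size and mean degree
# but different clustering. Parameters and outputs are illustrative only.
import random
import networkx as nx

def run_sir(G, beta=0.15, gamma=0.2, seed=0):
    """Return (final attack rate, outbreak duration in steps) for one SIR run."""
    rng = random.Random(seed)
    status = {n: "S" for n in G}
    seed_node = rng.choice(list(G))
    status[seed_node] = "I"
    infected, steps = {seed_node}, 0
    while infected:
        steps += 1
        new_inf, recovered = set(), set()
        for i in infected:
            for nb in G[i]:
                if status[nb] == "S" and rng.random() < beta:
                    new_inf.add(nb)
            if rng.random() < gamma:
                recovered.add(i)
        for n in new_inf:
            status[n] = "I"
        for n in recovered:
            status[n] = "R"
        infected = (infected | new_inf) - recovered
    attack_rate = sum(s == "R" for s in status.values()) / len(G)
    return attack_rate, steps

clustered = nx.connected_watts_strogatz_graph(1000, 6, 0.01, seed=1)  # high clustering
rewired = nx.connected_watts_strogatz_graph(1000, 6, 1.0, seed=1)     # same mean degree
for name, G in (("clustered", clustered), ("rewired", rewired)):
    size, steps = run_sir(G)
    print(f"{name:9s} C={nx.average_clustering(G):.3f} "
          f"attack rate={size:.2f} duration={steps} steps")
```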

  12. What is 5G? Emerging 5G Mobile Services and Network Requirements

    Directory of Open Access Journals (Sweden)

    Heejung Yu

    2017-10-01

    Full Text Available In this paper, emerging 5G mobile services are investigated and categorized not from the perspective of service providers, but from that of end-users. The development of 5G mobile services is based on an intensive analysis of the global trends in mobile services. Additionally, several indispensable service requirements, essential for realizing the service scenarios presented, are described. To illustrate the changes in societies and in daily life in the 5G era, five megatrends are examined: the explosion of mobile data traffic, the rapid increase in connected devices, everything on the cloud, hyper-realistic media for convergence services, and knowledge as a service enabled by big-data analysis. Based on these trends, we classify the new 5G services into five categories in terms of the end-users’ experience: immersive 5G services, intelligent 5G services, omnipresent 5G services, autonomous 5G services and public 5G services. Moreover, several 5G service scenarios in each service category are presented, and essential technical requirements for realizing the aforementioned 5G services are suggested, along with a competitiveness analysis of the 5G service/device/network industries and the current state of 5G technologies.

  13. OpenFlow Switching Performance using Network Simulator - 3

    OpenAIRE

    Sriram Prashanth, Naguru

    2016-01-01

    Context. In the present innovative networking world, there is a rapid expansion of switches and protocols used to cope with growing customer requirements. With increasing demand for higher bandwidth and lower latency, new network paths are being introduced to meet these requirements. To reduce the load on present switching networks, the development of new, innovative switching approaches is required. These results can be achieved by Software Defined Networking or Trad...

  14. Quantification of abdominal aortic deformation after EVAR

    Science.gov (United States)

    Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir

    2009-02-01

    Quantification of abdominal aortic deformation is an important requirement for the evaluation of endovascular stenting procedures and the further refinement of stent graft design. During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation imposed by medical instruments such as guide wires, catheters, and the stent graft. This deformation can affect the flow characteristics and morphology of the aorta, which have been shown to be elicitors of stent graft failure and a reason for the reappearance of aneurysms. We present a method for quantifying the deformation of an aneurysmatic aorta imposed by an inserted stent graft device. The outline of the procedure includes initial rigid alignment of the two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. This is accomplished by preprocessing and remodeling of the pre- and postoperative aortic shapes before performing a non-rigid registration. We further narrow the resulting displacement fields to include only local non-rigid deformation and therefore eliminate all remaining global rigid transformations. Finally, deformations for specified locations can be calculated from the resulting displacement fields. In order to evaluate our method, experiments for the extraction of aortic deformation fields were conducted on 15 patient datasets from endovascular aortic repair (EVAR) treatment. A visual assessment of the registration results and an evaluation of the use of deformation quantification were performed by two vascular surgeons and one interventional radiologist, all of whom are experts in EVAR procedures.

  15. Artifacts Quantification of Metal Implants in MRI

    Science.gov (United States)

    Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.

    2017-11-01

    The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for the optimization of acquisition parameters. In this study an image-gradient-based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. The artifact is then quantified in terms of its extent, expressed as an image area percentage, using an automated cross-entropy thresholding method. The proposed method for artifact quantification was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman’s rho = 0.62 and 0.802 for the titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising for MRI acquisition parameter optimization.
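
    A minimal sketch of the general idea (image gradient followed by a minimum cross-entropy threshold, reported as an area percentage) is given below; the authors' exact pipeline, filters, and parameters are not reproduced, and the synthetic image is only a stand-in for a phantom scan.

      import numpy as np
      from skimage import filters

      def artifact_area_percentage(image):
          """Estimate susceptibility-artifact extent as a percentage of image area."""
          grad_mag = filters.sobel(image.astype(float))   # captures abrupt signal changes
          thresh = filters.threshold_li(grad_mag)         # minimum cross-entropy threshold
          artifact_mask = grad_mag > thresh
          return 100.0 * artifact_mask.sum() / artifact_mask.size

      # Synthetic example: a uniform image with a rectangular signal void
      img = np.ones((256, 256))
      img[100:140, 100:140] = 0.0
      print(f"artifact extent: {artifact_area_percentage(img):.1f}% of image area")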

  16. Network approach to patterns in stratocumulus clouds

    Science.gov (United States)

    Glassmeier, Franziska; Feingold, Graham

    2017-10-01

    Stratocumulus clouds (Sc) have a significant impact on the amount of sunlight reflected back to space, with important implications for Earth’s climate. Representing Sc and their radiative impact is one of the largest challenges for global climate models. Sc fields self-organize into cellular patterns and thus lend themselves to analysis and quantification in terms of natural cellular networks. Based on large-eddy simulations of Sc fields, we present a first analysis of the geometric structure and self-organization of Sc patterns from this network perspective. Our network analysis shows that the Sc pattern is scale-invariant as a consequence of entropy maximization that is known as Lewis’s Law (scaling parameter: 0.16) and is largely independent of the Sc regime (cloud-free vs. cloudy cell centers). Cells are, on average, hexagonal with a neighbor number variance of about 2, and larger cells tend to be surrounded by smaller cells, as described by an Aboav-Weaire parameter of 0.9. The network structure is neither completely random nor characteristic of natural convection. Instead, it emerges from Sc-specific versions of cell division and cell merging that are shaped by cell expansion. This is shown with a heuristic model of network dynamics that incorporates our physical understanding of cloud processes.

  17. Atomic force microscopy applied to the quantification of nano-precipitates in thermo-mechanically treated microalloyed steels

    Energy Technology Data Exchange (ETDEWEB)

    Renteria-Borja, Luciano [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Hurtado-Delgado, Eduardo, E-mail: hurtado@itmorelia.edu.mx [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Garnica-Gonzalez, Pedro [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Dominguez-Lopez, Ivan; Garcia-Garcia, Adrian Luis [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada-IPN Unidad Queretaro, Cerro Blanco No. 141, Colinas del Cimatario, 76090 Queretaro (Mexico)

    2012-07-15

    Quantification of nanometer-size precipitates in microalloyed steels has been traditionally performed using transmission electron microscopy (TEM), in spite of its complicated sample preparation procedures, prone to preparation errors and sample perturbation. In contrast to TEM procedures, atomic force microscopy (AFM) is performed on the as-prepared specimen, with sample preparation requirements similar to those for optical microscopy (OM), rendering three-dimensional representations of the sample surface with vertical resolution of a fraction of a nanometer. In AFM, contrast mechanisms are directly related to surface properties such as topography, adhesion, and stiffness, among others. Chemical etching was performed using 0.5% nital, at time intervals between 4 and 20 s, in 4 s steps, until reaching the desired surface finish. For the present application, an average surface-roughness peak-height below 200 nm was sought. Quantification results of nanometric precipitates were obtained from the statistical analysis of AFM images of the microstructure developed by microalloyed Nb and V-Mo steels. Topography and phase contrast AFM images were used for quantification. The results obtained using AFM are consistent with similar TEM reports. - Highlights: • We quantified nanometric precipitates in Nb and V-Mo microalloyed steels using AFM. • Microstructures of the thermo-mechanically treated microalloyed steels were used. • Topography and phase contrast AFM images were used for quantification. • AFM results are comparable with traditionally obtained TEM measurements.

  18. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  19. Building a transnational biosurveillance network using semantic web technologies: requirements, design, and preliminary evaluation.

    Science.gov (United States)

    Teodoro, Douglas; Pasche, Emilie; Gobeill, Julien; Emonet, Stéphane; Ruch, Patrick; Lovis, Christian

    2012-05-29

    Antimicrobial resistance has reached globally alarming levels and is becoming a major public health threat. Lack of efficacious antimicrobial resistance surveillance systems was identified as one of the causes of increasing resistance, due to the lag time between new resistances and alerts to care providers. Several initiatives to track drug resistance evolution have been developed. However, no effective real-time and source-independent antimicrobial resistance monitoring system is available publicly. To design and implement an architecture that can provide real-time and source-independent antimicrobial resistance monitoring to support transnational resistance surveillance. In particular, we investigated the use of a Semantic Web-based model to foster integration and interoperability of interinstitutional and cross-border microbiology laboratory databases. Following the agile software development methodology, we derived the main requirements needed for effective antimicrobial resistance monitoring, from which we proposed a decentralized monitoring architecture based on the Semantic Web stack. The architecture uses an ontology-driven approach to promote the integration of a network of sentinel hospitals or laboratories. Local databases are wrapped into semantic data repositories that automatically expose local computing-formalized laboratory information in the Web. A central source mediator, based on local reasoning, coordinates the access to the semantic end points. On the user side, a user-friendly Web interface provides access and graphical visualization to the integrated views. We designed and implemented the online Antimicrobial Resistance Trend Monitoring System (ARTEMIS) in a pilot network of seven European health care institutions sharing 70+ million triples of information about drug resistance and consumption. Evaluation of the computing performance of the mediator demonstrated that, on average, query response time was a few seconds (mean 4.3, SD 0.1 × 10

  20. A survey of system architecture requirements for health care-based wireless sensor networks.

    Science.gov (United States)

    Egbogah, Emeka E; Fapojuwo, Abraham O

    2011-01-01

    Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.

  1. A Survey of System Architecture Requirements for Health Care-Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Abraham O. Fapojuwo

    2011-05-01

    Full Text Available Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.

  2. Direct quantification of fatty acids in wet microalgal and yeast biomass via a rapid in situ fatty acid methyl ester derivatization approach.

    Science.gov (United States)

    Dong, Tao; Yu, Liang; Gao, Difeng; Yu, Xiaochen; Miao, Chao; Zheng, Yubin; Lian, Jieni; Li, Tingting; Chen, Shulin

    2015-12-01

    Accurate determination of fatty acid contents is routinely required in microalgal and yeast biofuel studies. A method of rapid in situ fatty acid methyl ester (FAME) derivatization directly from wet fresh microalgal and yeast biomass was developed in this study. This method does not require prior solvent extraction or dehydration. FAMEs were prepared with a sequential alkaline hydrolysis (15 min at 85 °C) and acidic esterification (15 min at 85 °C) process. The resulting FAMEs were extracted into n-hexane and analyzed using gas chromatography. The effects of each processing parameter (temperature, reaction time, and water content) upon the lipids quantification in the alkaline hydrolysis step were evaluated with a full factorial design. This method could tolerate water content up to 20% (v/v) in total reaction volume, which equaled up to 1.2 mL of water in biomass slurry (with 0.05-25 mg of fatty acid). There were no significant differences in FAME quantification (p>0.05) between the standard AOAC 991.39 method and the proposed wet in situ FAME preparation method. This fatty acid quantification method is applicable to fresh wet biomass of a wide range of microalgae and yeast species.

  3. A RP-HPLC method for quantification of diclofenac sodium released from biological macromolecules.

    Science.gov (United States)

    Bhattacharya, Shiv Sankar; Banerjee, Subham; Ghosh, Ashoke Kumar; Chattopadhyay, Pronobesh; Verma, Anurag; Ghosh, Amitava

    2013-07-01

    Interpenetrating network (IPN) microbeads of sodium carboxymethyl locust bean gum (SCMLBG) and sodium carboxymethyl cellulose (SCMC) containing diclofenac sodium (DS), a nonsteroidal anti-inflammatory drug, were prepared by a single water-in-water (w/w) emulsion gelation process using AlCl3 as cross-linking agent in a completely aqueous environment. A pharmacokinetic study of these IPN microbeads was then carried out using a simple and feasible high-performance liquid chromatographic method with UV detection, which was developed and validated for the quantification of diclofenac sodium in rabbit plasma. The chromatographic separation was carried out on a Hypersil BDS C18 column (250 mm × 4.6 mm; 5 μm). The mobile phase was a mixture of acetonitrile and methanol (70:30, v/v) at a flow rate of 1.0 ml/min. The UV detection was set at 276 nm. The extraction recovery of diclofenac sodium in plasma for three quality control (QC) samples ranged from 81.52% to 95.29%. The calibration curve was linear in the concentration range of 20-1000 ng/ml with a correlation coefficient (r(2)) above 0.9951. The method was specific and sensitive, with a limit of quantification of 20 ng/ml. In stability tests, diclofenac sodium in rabbit plasma was stable during storage and the assay procedure. Copyright © 2013. Published by Elsevier B.V.

  4. Requiring collaboration: Hippocampal-prefrontal networks needed in spatial working memory and ageing. A multivariate analysis approach.

    Science.gov (United States)

    Zancada-Menendez, C; Alvarez-Suarez, P; Sampedro-Piquero, P; Cuesta, M; Begega, A

    2017-04-01

    Ageing is characterized by a decline in the processes of retention and storage of spatial information. We examined the behavioural performance of adult rats (3 months old) and aged rats (18 months old) in a complex spatial task (delayed match to sample). The spatial task was performed in the Morris water maze and consisted of three sessions per day over a period of three consecutive days. Each session consisted of two trials (one sample and one retention) with inter-session intervals of 5 min. Behavioural results showed that the spatial task was difficult for the middle-aged group. This poorer execution could be associated with impairments of processing speed and spatial information retention. We examined the changes in the neuronal metabolic activity of different brain regions through cytochrome c oxidase histochemistry. We then performed MANOVA and discriminant function analyses to determine the functional profile of the brain networks that are involved in the spatial learning of the adult and middle-aged groups. This multivariate analysis showed two principal functional networks that necessarily participate in this spatial learning. The first network was composed of the supramammillary nucleus, medial mammillary nucleus, CA3, and CA1. The second one included the anterior cingulate, prelimbic, and infralimbic areas of the prefrontal cortex, the dentate gyrus, and the amygdala complex (basolateral and central subregions). There was a reduction in the hippocampal-supramammillary network in both learning groups, whereas there was an overactivation in the executive network, especially in the aged group. This response could be due to a higher requirement for executive control in a complex spatial memory task in older animals. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Cutset Quantification Error Evaluation for Shin-Kori 1 and 2 PSA model

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2009-01-01

    Probabilistic safety assessments (PSA) for nuclear power plants (NPPs) are based on the minimal cut set (MCS) quantification method. In PSAs, the risk and importance measures are computed from a cutset equation, mainly by using approximations. The conservatism of these approximations is also a source of quantification uncertainty. In this paper, exact MCS quantification methods, based on the 'sum of disjoint products (SDP)' logic and the inclusion-exclusion formula, are applied, and the conservatism of the MCS quantification results in the Shin-Kori 1 and 2 PSA is evaluated.
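
    To make the conservatism issue concrete, the sketch below compares an exact union probability obtained by the inclusion-exclusion formula with the common rare-event approximation for a small, hypothetical set of minimal cut sets with independent basic events; the cut sets and probabilities are invented for illustration and are unrelated to the Shin-Kori model.

      from itertools import combinations

      basic_events = {"A": 1e-3, "B": 2e-3, "C": 5e-4, "D": 1e-2}   # hypothetical probabilities
      cut_sets = [{"A", "B"}, {"B", "C"}, {"A", "D"}]               # hypothetical MCSs

      def prob_all_occur(sets):
          """Probability that every cut set in `sets` occurs (independent basic events)."""
          p = 1.0
          for event in set().union(*sets):
              p *= basic_events[event]
          return p

      def exact_union(cut_sets):
          """Top-event probability via the inclusion-exclusion formula."""
          total = 0.0
          for k in range(1, len(cut_sets) + 1):
              sign = (-1) ** (k + 1)
              for combo in combinations(cut_sets, k):
                  total += sign * prob_all_occur(combo)
          return total

      rare_event = sum(prob_all_occur([cs]) for cs in cut_sets)
      print(f"exact: {exact_union(cut_sets):.3e}  rare-event approx: {rare_event:.3e}")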

  6. Dialing long distance : communications to northern operations like the MGP require sophisticated satellite networks for voice, data

    Energy Technology Data Exchange (ETDEWEB)

    Cook, D.

    2006-04-15

    Telecommunications will play a major role in the construction of the Mackenzie Gas Project due to the remoteness of its location and the volume of communication data required to support the number of people involved and the amount of construction activity. While suppliers for communications tools have not yet been identified, initial telecommunications plans call for the installation of communication equipment at all camps, major facility sites and construction locations. Equipment will be housed in self-contained, climate-controlled buildings called telecommunication service modules (TSMs), which will be connected to each other as well as to existing public communications networks. The infrastructure will support telephone and fax systems; Internet and electronic mail services; multiple channel very high frequency radios; air-to-ground communication at airstrips and helipads; ship-to-shore at barge landings; closed circuit television; satellite community antenna television; CBC radio broadcast; public address systems; security systems; and supervisory control and data acquisition (SCADA) systems. An Internet Protocol (IP) network with a voice telephone system will be implemented along with a geostationary orbit satellite network. Satellite servers and real-time data services will be used. Car kits that allow calls, and battery-operated, self-contained telemetry devices designed to communicate via a satellite system, have been commissioned for the project; these are capable of providing cost-efficient and reliable asset tracking and fleet management in remote regions and of assisting in deployment requirements. It was concluded that many of today's mega-projects are the driving factors behind new telecommunications solutions in remote areas. 1 fig.

  7. Quantification of protein thiols and dithiols in the picomolar range using sodium borohydride and 4,4'-dithiodipyridine

    DEFF Research Database (Denmark)

    Hansen, Rosa E; Østergaard, Henrik; Nørgaard, Per

    2007-01-01

    Experimental determination of the number of thiols in a protein requires methodology that combines high sensitivity and reproducibility with low intrinsic thiol oxidation disposition. In the detection of disulfide bonds, it is also necessary to efficiently reduce disulfides and to quantify the liberated thiols. Ellman's reagent (5,5'-dithiobis-[2-nitrobenzoic acid], DTNB) is the most widely used reagent for quantification of protein thiols, whereas dithiothreitol (DTT) is commonly used for disulfide reduction. DTNB suffers from a relatively low sensitivity, whereas DTT reduction is inconvenient … sodium borohydride and the thiol reagent 4,4'-dithiodipyridine (4-DPS). Because borohydride is efficiently destroyed by the addition of acid, the complete reduction and quantification can be performed conveniently in one tube without desalting steps. Furthermore, the use of reverse-phase high

  8. Evolution of metabolic network organization

    Directory of Open Access Journals (Sweden)

    Bonchev Danail

    2010-05-01

    Full Text Available Background: Comparison of metabolic networks across species is a key to understanding how evolutionary pressures shape these networks. By selecting taxa representative of different lineages or lifestyles and using a comprehensive set of descriptors of the structure and complexity of their metabolic networks, one can highlight both qualitative and quantitative differences in the metabolic organization of species subject to distinct evolutionary paths or environmental constraints. Results: We used a novel representation of metabolic networks, termed network of interacting pathways or NIP, to focus on the modular, high-level organization of the metabolic capabilities of the cell. Using machine learning techniques we identified the most relevant aspects of cellular organization that change under evolutionary pressures. We considered the transitions from prokarya to eukarya (with a focus on the transitions among the archaea, bacteria and eukarya), from unicellular to multicellular eukarya, from free living to host-associated bacteria, from anaerobic to aerobic, as well as the acquisition of cell motility or growth in an environment of various levels of salinity or temperature. Intuitively, we expect organisms with more complex lifestyles to have more complex and robust metabolic networks. Here we demonstrate for the first time that such organisms are not only characterized by larger, denser networks of metabolic pathways but also have more efficiently organized cross communications, as revealed by subtle changes in network topology. These changes are unevenly distributed among metabolic pathways, with specific categories of pathways being promoted to more central locations as an answer to environmental constraints. Conclusions: Combining methods from graph theory and machine learning, we have shown here that evolutionary pressures not only affect gene and protein sequences, but also specific details of the complex wiring of functional modules

  9. Network Traffic Features for Anomaly Detection in Specific Industrial Control System Network

    Directory of Open Access Journals (Sweden)

    Matti Mantere

    2013-09-01

    Full Text Available The deterministic and restricted nature of industrial control system networks sets them apart from more open networks, such as local area networks in office environments. This improves the usability of network security monitoring approaches that would be less feasible in more open environments. One such approach is machine learning based anomaly detection. Without proper customization for the special requirements of the industrial control system network environment, many existing anomaly or misuse detection systems will perform sub-optimally. A machine learning based approach could reduce the amount of manual customization required for different industrial control system networks. In this paper we analyze a possible set of features to be used in a machine learning based anomaly detection system in the real-world industrial control system network environment under investigation. The network under investigation is represented by an architectural drawing and by results derived from network trace analysis. The network trace is captured from a live running industrial process control network and includes both control data and the data flowing between the control network and the office network. We limit the investigation to the IP traffic in the traces.
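
    The sketch below shows only the general shape of such a feature-based detector: hypothetical per-flow features (packet count, byte count, mean inter-arrival time, distinct destination ports) are fed to an off-the-shelf anomaly detector. Neither the feature set nor the model corresponds to the paper's final choices.

      import numpy as np
      from sklearn.ensemble import IsolationForest

      # Hypothetical per-flow features: [packets, bytes, mean inter-arrival (s), distinct dst ports]
      flows = np.array([
          [120, 9600, 0.50, 1],    # typical cyclic control traffic
          [118, 9440, 0.51, 1],
          [122, 9760, 0.49, 1],
          [ 15, 9000, 0.02, 25],   # unusual burst touching many ports
      ])

      model = IsolationForest(contamination=0.25, random_state=0).fit(flows)
      print(model.predict(flows))  # -1 flags anomalous flows, +1 normal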

  10. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  11. Pure hydroxyapatite phantoms for the calibration of in vivo X-ray fluorescence systems of bone lead and strontium quantification.

    Science.gov (United States)

    Da Silva, Eric; Kirkham, Brian; Heyd, Darrick V; Pejović-Milić, Ana

    2013-10-01

    Plaster of Paris [poP, CaSO4·½H2O] is the standard phantom material used for the calibration of in vivo X-ray fluorescence (IVXRF)-based systems of bone metal quantification (i.e., bone strontium and lead). Calibration of IVXRF systems of bone metal quantification employs a coherent normalization procedure which requires the application of a coherent correction factor (CCF) to the data, calculated as the ratio of the relativistic form factors of the phantom material and bone mineral. Various issues have been raised as to the suitability of poP for the calibration of IVXRF systems of bone metal quantification, which include its chemical purity and its chemical difference from bone mineral (a calcium phosphate). This work describes the preparation of a chemically pure hydroxyapatite phantom material, of known composition and stoichiometry, proposed for the purpose of calibrating IVXRF systems of bone strontium and lead quantification as a replacement for poP. The issue of contamination by the analyte was resolved by preparing pure Ca(OH)2 by hydroxide precipitation, which was found to bring strontium and lead levels to those of the bone mineral component of NIST SRM 1486 (bone meal), as determined by powder X-ray diffraction spectrometry.

  12. Empirical quantification of lacustrine groundwater discharge - different methods and their limitations

    Science.gov (United States)

    Meinikmann, K.; Nützmann, G.; Lewandowski, J.

    2015-03-01

    Groundwater discharge into lakes (lacustrine groundwater discharge, LGD) can be an important driver of lake eutrophication. Its quantification is difficult for several reasons, and thus often neglected in water and nutrient budgets of lakes. In the present case several methods were applied to determine the expansion of the subsurface catchment, to reveal areas of main LGD and to identify the variability of LGD intensity. Size and shape of the subsurface catchment served as a prerequisite in order to calculate long-term groundwater recharge and thus the overall amount of LGD. Isotopic composition of near-shore groundwater was investigated to validate the quality of catchment delineation in near-shore areas. Heat as a natural tracer for groundwater-surface water interactions was used to find spatial variations of LGD intensity. Via an analytical solution of the heat transport equation, LGD rates were calculated from temperature profiles of the lake bed. The method has some uncertainties, as can be found from the results of two measurement campaigns in different years. The present study reveals that a combination of several different methods is required for a reliable identification and quantification of LGD and groundwater-borne nutrient loads.
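
    One common analytical route for this step is a steady-state Bredehoeft-Papadopulos-type temperature profile, in which the vertical Darcy flux enters through a Peclet number; the sketch below fits such a profile to a hypothetical lake-bed temperature series. The study's exact solution and all numerical values here are assumptions.

      import numpy as np
      from scipy.optimize import curve_fit

      L = 0.5                    # profile length (m)
      rho_c_water = 4.19e6       # volumetric heat capacity of water (J m-3 K-1)
      k_sed = 1.8                # thermal conductivity of saturated sediment (W m-1 K-1)

      def bp_profile(z, T0, TL, beta):
          """Steady-state temperature between depths 0 and L for Peclet number beta."""
          return T0 + (TL - T0) * (np.exp(beta * z / L) - 1.0) / (np.exp(beta) - 1.0)

      # Hypothetical measured profile (depth in m below the sediment surface)
      z_obs = np.array([0.05, 0.15, 0.25, 0.35, 0.45])
      T_obs = np.array([16.8, 15.2, 13.9, 12.9, 12.2])

      (T0, TL, beta), _ = curve_fit(bp_profile, z_obs, T_obs, p0=[17.0, 12.0, -1.0])
      q = beta * k_sed / (rho_c_water * L)       # vertical Darcy flux (m/s); sign gives direction
      print(f"estimated vertical flux: {q * 8.64e7:.2f} mm/day")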

  13. Human DNA quantification and sample quality assessment: Developmental validation of the PowerQuant(®) system.

    Science.gov (United States)

    Ewing, Margaret M; Thompson, Jonelle M; McLaren, Robert S; Purpero, Vincent M; Thomas, Kelli J; Dobrowski, Patricia A; DeGroot, Gretchen A; Romsos, Erica L; Storts, Douglas R

    2016-07-01

    Quantification of the total amount of human DNA isolated from a forensic evidence item is crucial for DNA normalization prior to short tandem repeat (STR) DNA analysis and a federal quality assurance standard requirement. Previous commercial quantification methods determine the total human DNA and total human male DNA concentrations, but provide limited information about the condition of the DNA sample. The PowerQuant(®) System includes targets for quantification of total human and total human male DNA as well as targets for evaluating whether the human DNA is degraded and/or PCR inhibitors are present in the sample. A developmental validation of the PowerQuant(®) System was completed, following SWGDAM Validation Guidelines, to evaluate the assay's specificity, sensitivity, precision and accuracy, as well as the ability to detect degraded DNA or PCR inhibitors. In addition to the total human DNA and total human male DNA concentrations in a sample, data from the degradation target and internal PCR control (IPC) provide a forensic DNA analyst meaningful information about the quality of the isolated human DNA and the presence of PCR inhibitors in the sample that can be used to determine the most effective workflow and assist downstream interpretation. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.

  14. How Severely Is DNA Quantification Hampered by RNA Co-extraction?

    Science.gov (United States)

    Sanchez, Ignacio; Remm, Matthieu; Frasquilho, Sonia; Betsou, Fay; Mathieson, William

    2015-10-01

    The optional RNase digest that is part of many DNA extraction protocols is often omitted, either because RNase is not provided in the kit or because users do not want to risk contaminating their laboratory. Consequently, co-eluting RNA can become a "contaminant" of unknown magnitude in a DNA extraction. We extracted DNA from liver, lung, kidney, and heart tissues and established that 28-52% of the "DNA" as assessed by spectrophotometry is actually RNA (depending on tissue type). Including an RNase digest in the extraction protocol reduced 260:280 purity ratios. Co-eluting RNA drives an overestimation of DNA yield when quantification is carried out using OD 260 nm spectrophotometry, or becomes an unquantified contaminant when spectrofluorometry is used for DNA quantification. This situation is potentially incompatible with the best practice guidelines for biobanks issued by organizations such as the International Society for Biological and Environmental Repositories, which state that biospecimens should be accurately characterized in terms of their identity, purity, concentration, and integrity. Consequently, we conclude that an RNase digest must be included in DNA extractions if pure DNA is required. We also discuss the implications of unquantified RNA contamination in DNA samples in the context of laboratory accreditation schemes.

  15. Online updating and uncertainty quantification using nonstationary output-only measurement

    Science.gov (United States)

    Yuen, Ka-Veng; Kuok, Sin-Chi

    2016-01-01

    Extended Kalman filter (EKF) is widely adopted for state estimation and parametric identification of dynamical systems. In this algorithm, it is required to specify the covariance matrices of the process noise and measurement noise based on prior knowledge. However, improper assignment of these noise covariance matrices leads to unreliable estimation and misleading uncertainty estimation on the system state and model parameters. Furthermore, it may induce diverging estimation. To resolve these problems, we propose a Bayesian probabilistic algorithm for online estimation of the noise parameters which are used to characterize the noise covariance matrices. There are three major appealing features of the proposed approach. First, it resolves the divergence problem in the conventional usage of EKF due to improper choice of the noise covariance matrices. Second, the proposed approach ensures the reliability of the uncertainty quantification. Finally, since the noise parameters are allowed to be time-varying, nonstationary process noise and/or measurement noise are explicitly taken into account. Examples using stationary/nonstationary response of linear/nonlinear time-varying dynamical systems are presented to demonstrate the efficacy of the proposed approach. Furthermore, comparison with the conventional usage of EKF will be provided to reveal the necessity of the proposed approach for reliable model updating and uncertainty quantification.
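
    As background for why the noise covariances matter, the sketch below runs a one-dimensional Kalman filter whose measurement-noise variance is re-estimated online from a window of innovations (a simple covariance-matching heuristic). This is a generic illustration of online noise adaptation, not the Bayesian algorithm proposed in the paper, and all model parameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n, Q, R_true = 400, 1e-4, 0.25
      x_true = np.cumsum(rng.normal(0.0, np.sqrt(Q), n))        # random-walk state
      y = x_true + rng.normal(0.0, np.sqrt(R_true), n)          # noisy measurements

      x, P, R_hat = 0.0, 1.0, 1.0
      innovations = []
      for k in range(n):
          P = P + Q                               # predict (random-walk state model)
          v = y[k] - x                            # innovation
          innovations.append(v)
          if len(innovations) >= 30:              # covariance matching on a moving window
              C_v = np.var(innovations[-30:])
              R_hat = max(C_v - P, 1e-6)          # innovation covariance = P + R
          S = P + R_hat
          K = P / S                               # Kalman gain
          x, P = x + K * v, (1.0 - K) * P         # update

      print(f"online estimate of measurement-noise variance: {R_hat:.3f} (true {R_true})")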

  16. Stereological quantification of mast cells in human synovium

    DEFF Research Database (Denmark)

    Damsgaard, T E; Sørensen, Flemming Brandt; Herlin, T

    1999-01-01

    Mast cells participate in both the acute allergic reaction as well as in chronic inflammatory diseases. Earlier studies have revealed divergent results regarding the quantification of mast cells in the human synovium. The aim of the present study was therefore to quantify these cells in the human synovium, using stereological techniques. Different methods of staining and quantification have previously been used for mast cell quantification in human synovium. Stereological techniques provide precise and unbiased information on the number of cell profiles in two-dimensional tissue sections of, in this case, human synovium. In 10 patients suffering from osteoarthritis a median of 3.6 mast cells/mm2 synovial membrane was found. The total number of cells (synoviocytes, fibroblasts, lymphocytes, leukocytes) present was 395.9 cells/mm2 (median). The mast cells constituted 0.8% of all the cell profiles...

  17. Pitfalls of DNA Quantification Using DNA-Binding Fluorescent Dyes and Suggested Solutions.

    Science.gov (United States)

    Nakayama, Yuki; Yamaguchi, Hiromi; Einaga, Naoki; Esumi, Mariko

    2016-01-01

    The Qubit fluorometer is a DNA quantification device based on the fluorescence intensity of fluorescent dye binding to double-stranded DNA (dsDNA). Qubit is generally considered useful for checking DNA quality before next-generation sequencing because it measures intact dsDNA. To examine the most accurate and suitable methods for quantifying DNA for quality assessment, we compared three quantification methods: NanoDrop, which measures UV absorbance; Qubit; and quantitative PCR (qPCR), which measures the abundance of a target gene. For the comparison, we used three types of DNA: 1) DNA extracted from fresh frozen liver tissues (Frozen-DNA); 2) DNA extracted from formalin-fixed, paraffin-embedded liver tissues comparable to those used for Frozen-DNA (FFPE-DNA); and 3) DNA extracted from the remaining fractions after RNA extraction with Trizol reagent (Trizol-DNA). These DNAs were serially diluted with distilled water and measured using three quantification methods. For Frozen-DNA, the Qubit values were not proportional to the dilution ratio, in contrast with the NanoDrop and qPCR values. This non-proportional decrease in Qubit values was dependent on a lower salt concentration, and over 1 mM NaCl in the DNA solution was required for the Qubit measurement. For FFPE-DNA, the Qubit values were proportional to the dilution ratio and were lower than the NanoDrop values. However, electrophoresis revealed that qPCR reflected the degree of DNA fragmentation more accurately than Qubit. Thus, qPCR is superior to Qubit for checking the quality of FFPE-DNA. For Trizol-DNA, the Qubit values were proportional to the dilution ratio and were consistently lower than the NanoDrop values, similar to FFPE-DNA. However, the qPCR values were higher than the NanoDrop values. Electrophoresis with SYBR Green I and single-stranded DNA (ssDNA) quantification demonstrated that Trizol-DNA consisted mostly of non-fragmented ssDNA. Therefore, Qubit is not always the most accurate method for

  18. Quantification of organ motion during chemoradiotherapy of rectal cancer using cone-beam computed tomography.

    LENUS (Irish Health Repository)

    Chong, Irene

    2011-11-15

    There has been no previously published data related to the quantification of rectal motion using cone-beam computed tomography (CBCT) during standard conformal long-course chemoradiotherapy. The purpose of the present study was to quantify the interfractional changes in rectal movement and dimensions and rectal and bladder volume using CBCT and to quantify the bony anatomy displacements to calculate the margins required to account for systematic (Σ) and random (σ) setup errors.
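
    The abstract does not state which margin recipe was applied; as a point of reference, the widely cited van Herk population recipe combines the systematic and random components as M = 2.5*Sigma + 0.7*sigma, as in the small helper below (values are examples only).

      def ptv_margin_mm(systematic_mm, random_mm):
          """van Herk population margin recipe: M = 2.5*Sigma + 0.7*sigma."""
          return 2.5 * systematic_mm + 0.7 * random_mm

      print(f"{ptv_margin_mm(2.0, 3.0):.1f} mm")   # Sigma = 2 mm, sigma = 3 mm -> 7.1 mm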

  19. vhv supply networks, problems of network structure

    Energy Technology Data Exchange (ETDEWEB)

    Raimbault, J

    1966-04-01

    The present and future power requirements of the Paris area and the structure of the existing networks are discussed. The various limitations that will have to be allowed for in laying down the structure of a regional transmission network that brings the power of the large national transmission network into the Paris built-up area are described. The theoretical solution that has been adopted is given, together with the features of its final form, planned for about the year 2000, and the intermediate stages. The problem of the structure of the national power transmission network which is to supply the regional network was also studied. To solve this problem, a 730 kV network will have to be introduced.

  20. Assessment of SRS ambient air monitoring network

    Energy Technology Data Exchange (ETDEWEB)

    Abbott, K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Jannik, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-08-03

    Three methodologies have been used to assess the effectiveness of the existing ambient air monitoring system in place at the Savannah River Site in Aiken, SC. Effectiveness was measured using two metrics that have been utilized in previous quantifications of air-monitoring network performance: frequency of detection (a measurement of how frequently a minimum number of samplers within the network detect an event) and network intensity (a measurement of how consistent each sampler within the network is at detecting events). In addition to determining the effectiveness of the current system, the objective of performing this assessment was to determine what, if any, changes could make the system more effective. The methodologies included 1) the Waite method of determining sampler distribution, 2) the CAP88-PC annual dose model, and 3) a puff/plume transport model used to predict air concentrations at sampler locations. Data collected from air samplers at SRS in 2015, compared with predicted data resulting from the methodologies, showed that the frequency of detection for the current system is 79.2%, with sampler efficiencies ranging from 5% to 45%, and a mean network intensity of 21.5%. One of the air monitoring stations had an efficiency of less than 10% and detected releases during just one sampling period of the entire year, adding little to the overall network intensity. By moving or removing this sampler, the mean network intensity increased to about 23%. Further work on increasing the network intensity and on simulating accident scenarios to further test the ambient air system at SRS is planned.
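
    A minimal sketch of the two metrics, as they are described above, is given below; the exact operational definitions used at SRS are assumptions here (frequency of detection = fraction of release periods seen by at least a minimum number of samplers; network intensity = mean per-sampler detection efficiency), and the detection matrix is invented.

      import numpy as np

      # rows = release/sampling periods, columns = samplers; True = release detected
      detections = np.array([
          [True,  True,  False, True ],
          [True,  False, False, False],
          [False, True,  True,  True ],
          [True,  True,  False, False],
      ])

      def frequency_of_detection(d, min_samplers=1):
          """Percentage of periods in which at least `min_samplers` samplers detect the release."""
          return 100.0 * np.mean(d.sum(axis=1) >= min_samplers)

      def network_intensity(d):
          """Mean of the per-sampler detection efficiencies, as a percentage."""
          return 100.0 * d.mean(axis=0).mean()

      print(frequency_of_detection(detections), network_intensity(detections))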

  1. Joint optimization of collimator and reconstruction parameters in SPECT imaging for lesion quantification

    International Nuclear Information System (INIS)

    McQuaid, Sarah J; Southekal, Sudeepti; Kijewski, Marie Foley; Moore, Stephen C

    2011-01-01

    Obtaining the best possible task performance using reconstructed SPECT images requires optimization of both the collimator and reconstruction parameters. The goal of this study is to determine how to perform this optimization, namely whether the collimator parameters can be optimized solely from projection data, or whether reconstruction parameters should also be considered. In order to answer this question, and to determine the optimal collimation, a digital phantom representing a human torso with 16 mm diameter hot lesions (activity ratio 8:1) was generated and used to simulate clinical SPECT studies with parallel-hole collimation. Two approaches to optimizing the SPECT system were then compared in a lesion quantification task: sequential optimization, where collimation was optimized on projection data using the Cramer–Rao bound, and joint optimization, which simultaneously optimized collimator and reconstruction parameters. For every condition, quantification performance in reconstructed images was evaluated using the root-mean-squared-error of 400 estimates of lesion activity. Compared to the joint-optimization approach, the sequential-optimization approach favoured a poorer resolution collimator, which, under some conditions, resulted in sub-optimal estimation performance. This implies that inclusion of the reconstruction parameters in the optimization procedure is important in obtaining the best possible task performance; in this study, this was achieved with a collimator resolution similar to that of a general-purpose (LEGP) collimator. This collimator was found to outperform the more commonly used high-resolution (LEHR) collimator, in agreement with other task-based studies, using both quantification and detection tasks.

  2. A machine learning approach for efficient uncertainty quantification using multiscale methods

    Science.gov (United States)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
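
    The sketch below conveys only the general idea (a regression model that maps a local permeability patch to coarse-scale basis-function values, so that new realizations can be evaluated without solving the local problems); the patch size, network architecture, and the stand-in training targets are all assumptions rather than the authors' setup.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(0)
      n_samples, patch_size, n_basis_values = 500, 5 * 5, 6 * 6

      X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, patch_size))  # permeability patches
      # Stand-in targets: in practice these come from solving the local dual-grid problems
      Y = np.tanh(X @ rng.normal(size=(patch_size, n_basis_values)) * 0.1)

      model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
      model.fit(X[:400], Y[:400])
      print("held-out R^2:", model.score(X[400:], Y[400:]))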

  3. A Simple and Effective Isocratic HPLC Method for Fast Identification and Quantification of Surfactin

    International Nuclear Information System (INIS)

    Muhammad Qadri Effendy Mubarak; Abdul Rahman Hassan; Aidil Abdul Hamid; Sahaid Khalil; Mohd Hafez Mohd Isa

    2015-01-01

    The aim of this study was to establish a simple, accurate and reproducible method for the identification and quantification of surfactin using high-performance liquid chromatography (HPLC). Previously reported methods for the identification and quantification of surfactin were time consuming and required a large quantity of mobile phase. The new method was achieved by application of a Chromolith® high performance RP-18 column (100 x 4.6 mm, 5 μm) as the stationary phase and optimization of the mobile phase ratio and flow rate. The optimal conditions were a mobile phase consisting of acetonitrile (ACN) and 3.8 mM trifluoroacetic acid (TFA) solution at a ratio of 80:20 and a flow rate of 2.2 mL/min. The total elution time of the obtained surfactin peaks was four times shorter than with various methods previously reported in the literature. The method described here allowed for fine separation of surfactin in a standard sample (98% purity) and surfactin in fermentation broth. (author)

  4. Quantification of cellular uptake of DNA nanostructures by qPCR

    DEFF Research Database (Denmark)

    Okholm, Anders Hauge; Nielsen, Jesper Sejrup; Vinther, Mathias

    2014-01-01

    interactions and structural and functional features of the DNA delivery device must be thoroughly investigated. Here, we present a rapid and robust method for the precise quantification of the component materials of DNA origami structures capable of entering cells in vitro. The quantification is performed...

  5. Actin-myosin network is required for proper assembly of influenza virus particles

    Energy Technology Data Exchange (ETDEWEB)

    Kumakura, Michiko; Kawaguchi, Atsushi, E-mail: ats-kawaguchi@md.tsukuba.ac.jp; Nagata, Kyosuke, E-mail: knagata@md.tsukuba.ac.jp

    2015-02-15

    Actin filaments are known to play a central role in cellular dynamics. After polymerization of actin, various actin-crosslinking proteins, including non-muscle myosin II, facilitate the formation of spatially organized actin filament networks. The actin-myosin network is highly expanded beneath the plasma membrane. The genome of influenza virus (vRNA) replicates in the cell nucleus. Newly synthesized vRNAs are then exported from the nucleus to the cytoplasm as ribonucleoprotein complexes (vRNPs), followed by transport to beneath the plasma membrane, where virus particles assemble. Here, we found that, by inhibiting actin-myosin network formation, the virus titer tends to be reduced and the HA viral spike protein aggregates on the plasma membrane. These results indicate that the actin-myosin network plays an important role in virus formation. - Highlights: • Actin-myosin network is important for the influenza virus production. • HA forms aggregations at the plasma membrane in the presence of blebbistatin. • M1 is recruited to the budding site through the actin-myosin network.

  6. Actin-myosin network is required for proper assembly of influenza virus particles

    International Nuclear Information System (INIS)

    Kumakura, Michiko; Kawaguchi, Atsushi; Nagata, Kyosuke

    2015-01-01

    Actin filaments are known to play a central role in cellular dynamics. After polymerization of actin, various actin-crosslinking proteins, including non-muscle myosin II, facilitate the formation of spatially organized actin filament networks. The actin-myosin network is highly expanded beneath the plasma membrane. The genome of influenza virus (vRNA) replicates in the cell nucleus. Newly synthesized vRNAs are then exported from the nucleus to the cytoplasm as ribonucleoprotein complexes (vRNPs), followed by transport to beneath the plasma membrane, where virus particles assemble. Here, we found that, by inhibiting actin-myosin network formation, the virus titer tends to be reduced and the HA viral spike protein aggregates on the plasma membrane. These results indicate that the actin-myosin network plays an important role in virus formation. - Highlights: • Actin-myosin network is important for the influenza virus production. • HA forms aggregations at the plasma membrane in the presence of blebbistatin. • M1 is recruited to the budding site through the actin-myosin network

  7. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  8. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of poisson statistics.

    Science.gov (United States)

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
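
    In the spirit of the limiting-dilution analysis described above (though not the authors' exact implementation), the sketch below converts the fraction of negative replicate reactions into a mean number of detectable targets per reaction via lambda = -ln(f_negative), with a simple propagated 95% confidence interval; the replicate counts are examples.

      import math

      def poisson_estimate(n_replicates, n_negative):
          """Mean targets per reaction and an approximate 95% CI from the negative fraction."""
          f = n_negative / n_replicates
          lam = -math.log(f)
          se_f = math.sqrt(f * (1.0 - f) / n_replicates)       # Wald SE of the negative fraction
          lo = -math.log(min(f + 1.96 * se_f, 1.0))
          hi = -math.log(max(f - 1.96 * se_f, 1e-9))
          return lam, (lo, hi)

      lam, (lo, hi) = poisson_estimate(42, 17)   # e.g., 17 of 42 replicates negative
      print(f"lambda = {lam:.2f} targets/reaction, 95% CI {lo:.2f}-{hi:.2f}")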

  9. Molecular quantification of genes encoding for green-fluorescent proteins

    DEFF Research Database (Denmark)

    Felske, A; Vandieken, V; Pauling, B V

    2003-01-01

    A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification...... PCR strategy is a highly specific and sensitive way to monitor recombinant DNA in environments like the efflux of a biotechnological plant....

  10. SU-D-218-05: Material Quantification in Spectral X-Ray Imaging: Optimization and Validation.

    Science.gov (United States)

    Nik, S J; Thing, R S; Watts, R; Meyer, J

    2012-06-01

    To develop and validate a multivariate statistical method to optimize scanning parameters for material quantification in spectral x-ray imaging. An optimization metric was constructed by extensively sampling the thickness space for the expected number of counts for m (two or three) materials. This resulted in an m-dimensional confidence region of material quantities, e.g. thicknesses. Minimization of the ellipsoidal confidence region leads to the optimization of energy bins. For the given spectrum, the minimum counts required for effective material separation can be determined by predicting the signal-to-noise ratio (SNR) of the quantification. A Monte Carlo (MC) simulation framework using BEAM was developed to validate the metric. Projection data of the m materials was generated and material decomposition was performed for combinations of iodine, calcium and water by minimizing the z-score between the expected spectrum and binned measurements. The mean square error (MSE) and variance were calculated to measure the accuracy and precision of this approach, respectively. The minimum MSE corresponds to the optimal energy bins in the BEAM simulations. In the optimization metric, this is equivalent to the smallest confidence region. The SNR of the simulated images was also compared to the predictions from the metric. The MSE was dominated by the variance for the given material combinations, which demonstrates accurate material quantifications. The BEAM simulations revealed that the optimization of energy bins was accurate to within 1 keV. The SNRs predicted by the optimization metric yielded satisfactory agreement but were expectedly higher for the BEAM simulations due to the inclusion of scattered radiation. The validation showed that the multivariate statistical method provides accurate material quantification, correct location of optimal energy bins and adequate prediction of image SNR. The BEAM code system is suitable for generating spectral x-ray imaging simulations.

  11. A highly sensitive method for quantification of iohexol

    DEFF Research Database (Denmark)

    Schulz, A.; Boeringer, F.; Swifka, J.

    2014-01-01

    -chromatography-electrospray-mass spectrometry (LC-ESI-MS) approach using the multiple reaction monitoring mode for iohexol quantification. In order to test whether a significantly decreased amount of iohexol is sufficient for reliable quantification, a LC-ESI-MS approach was assessed. We analyzed the kinetics of iohexol in rats after application...... of different amounts of iohexol (15 mg to 150 µg per rat). Blood sampling was conducted at four time points, at 15, 30, 60, and 90 min, after iohexol injection. The analyte (iohexol) and the internal standard (iothalamic acid) were separated from serum proteins using a centrifugal filtration device...... with a cut-off of 3 kDa. The chromatographic separation was achieved on an analytical Zorbax SB C18 column. The detection and quantification were performed on a high capacity trap mass spectrometer using positive ion ESI in the multiple reaction monitoring (MRM) mode. Furthermore, using real-time polymerase...

  12. Local area networking handbook

    OpenAIRE

    O'Hara, Patricia A.

    1990-01-01

    Approved for public release; distribution is unlimited. This thesis provides Navy shore based commands with sufficient information on local area networking to (1) decide if they need a LAN, (2) determine what their networking requirements are, and (3) select a LAN that satisfies their requirements. LAN topologies, transmission media, and medium access methods are described. In addition, the OSI reference model for computer networking and the IEEE 802 LAN standards are explained in detail. ...

  13. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    Science.gov (United States)

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce highlighted signal at superparamagnetic iron oxide nanoparticles, and distinguish them from other hypointensities. The method may quantify susceptibility change by performing least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method is developed by recognizing the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. The limit may lead to an underestimation of large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, mathematical derivation points out the value of the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through the limit, a modified quantification method is proposed by using unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare different methods. The in vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility
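
    The detectable limit referred to above is, at heart, the aliasing limit of differencing a wrapped phase: any gradient larger than pi radians per sample wraps back into (-pi, pi]. The short NumPy sketch below (1-D and purely illustrative; it does not reproduce the authors' unwrapped-forward-differentiation method) makes that limit visible.

        import numpy as np

        def wrapped_gradient(phase):
            """Forward difference of a wrapped phase signal, re-wrapped to (-pi, pi]."""
            return np.angle(np.exp(1j * np.diff(phase)))

        n = 64
        for true_slope in (0.6 * np.pi, 1.2 * np.pi):          # rad per sample
            wrapped = np.angle(np.exp(1j * true_slope * np.arange(n)))
            est = wrapped_gradient(wrapped).mean()
            print(f"true {true_slope / np.pi:.1f} pi/sample -> estimated {est / np.pi:.1f} pi/sample")
        # Slopes beyond pi rad/sample alias: 1.2*pi is reported as -0.8*pi, i.e. a large
        # susceptibility-induced gradient would be underestimated by such an algorithm.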

  14. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL- penicillamine were demonstrated by Terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, the density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were determined by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of penicillamine enantiomer mixtures was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in the pharmaceutical industry. (authors)

  15. Network Characterization Service (NCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Guojun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, George [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Crowley, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2001-06-06

    Distributed applications require information to effectively utilize the network. Some of the information they require is the current and maximum bandwidth, current and minimum latency, bottlenecks, burst frequency, and congestion extent. This type of information allows applications to determine parameters like optimal TCP buffer size. In this paper, we present a cooperative information-gathering tool called the network characterization service (NCS). NCS runs in user space and is used to acquire network information. Its protocol is designed for scalable and distributed deployment, similar to DNS. Its algorithms provide efficient, speedy and accurate detection of bottlenecks, especially dynamic bottlenecks. On current and future networks, dynamic bottlenecks do and will affect network performance dramatically.

  16. Optical slotted circuit switched network: a bandwidth efficient alternative to wavelength-routed network

    Science.gov (United States)

    Li, Yan; Collier, Martin

    2007-11-01

    Wavelength-routed networks have received enormous attention due to the fact that they are relatively simple to implement and implicitly offer Quality of Service (QoS) guarantees. However, they suffer from a bandwidth inefficiency problem and require complex Routing and Wavelength Assignment (RWA). Most attempts to address the above issues exploit the joint use of WDM and TDM technologies. The resultant TDM-based wavelength-routed networks partition the wavelength bandwidth into fixed-length time slots organized as a fixed-length frame. Multiple connections can thus time-share a wavelength and the grooming of their traffic leads to better bandwidth utilization. The capability of switching in both wavelength and time domains in such networks also mitigates the RWA problem. However, TDM-based wavelength-routed networks work in synchronous mode and strict synchronization among all network nodes is required. Global synchronization for all-optical networks which operate at extremely high speed is technically challenging, and deploying an optical synchronizer for each wavelength involves considerable cost. An Optical Slotted Circuit Switching (OSCS) architecture is proposed in this paper. In an OSCS network, slotted circuits are created to better utilize the wavelength bandwidth than in classic wavelength-routed networks. The operation of the protocol is such as to avoid the need for global synchronization required by TDM-based wavelength-routed networks.

  17. Comparison of DNA Quantification Methods for Next Generation Sequencing.

    Science.gov (United States)

    Robin, Jérôme D; Ludlow, Andrew T; LaRanger, Ryan; Wright, Woodring E; Shay, Jerry W

    2016-04-06

    Next Generation Sequencing (NGS) is a powerful tool that depends on loading a precise amount of DNA onto a flowcell. NGS strategies have expanded our ability to investigate genomic phenomena by referencing mutations in cancer and diseases through large-scale genotyping, developing methods to map rare chromatin interactions (4C; 5C and Hi-C) and identifying chromatin features associated with regulatory elements (ChIP-seq, Bis-Seq, ChiA-PET). While many methods are available for DNA library quantification, there is no unambiguous gold standard. Most techniques use PCR to amplify DNA libraries to obtain sufficient quantities for optical density measurement. However, increased PCR cycles can distort the library's heterogeneity and prevent the detection of rare variants. In this analysis, we compared new digital PCR technologies (droplet digital PCR; ddPCR, ddPCR-Tail) with standard methods for the titration of NGS libraries. DdPCR-Tail is comparable to qPCR and fluorometry (QuBit) and allows sensitive quantification by analysis of barcode repartition after sequencing of multiplexed samples. This study provides a direct comparison between quantification methods throughout a complete sequencing experiment and provides the impetus to use ddPCR-based quantification for improvement of NGS quality.

  18. SPECT quantification of regional radionuclide distributions

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1986-01-01

    SPECT quantification of regional radionuclide activities within the human body is affected by several physical and instrumental factors including attenuation of photons within the patient, Compton scattered events, the system's finite spatial resolution and object size, finite number of detected events, partial volume effects, the radiopharmaceutical biokinetics, and patient and/or organ motion. Furthermore, other instrumentation factors such as calibration of the center-of-rotation, sampling, and detector nonuniformities will affect the SPECT measurement process. These factors are described, together with examples of compensation methods that are currently available for improving SPECT quantification. SPECT offers the potential to improve in vivo estimates of absorbed dose, provided the acquisition, reconstruction, and compensation procedures are adequately implemented and utilized. 53 references, 2 figures

  19. Global network structure of dominance hierarchy of ant workers.

    Science.gov (United States)

    Shimoji, Hiroyuki; Abe, Masato S; Tsuji, Kazuki; Masuda, Naoki

    2014-10-06

    Dominance hierarchy among animals is widespread in various species and believed to serve to regulate resource allocation within an animal group. Unlike small groups, however, detection and quantification of linear hierarchy in large groups of animals are difficult tasks. Here, we analyse aggression-based dominance hierarchies formed by worker ants in Diacamma sp. as large directed networks. We show that the observed dominance networks are perfect or approximate directed acyclic graphs, which are consistent with perfect linear hierarchy. The observed networks are also sparse and random but significantly different from networks generated through thinning of the perfect linear tournament (i.e. all individuals are linearly ranked and dominance relationship exists between every pair of individuals). These results pertain to the global structure of the networks, which contrasts with the previous studies inspecting frequencies of different types of triads. In addition, the distribution of the out-degree (i.e. number of workers that the focal worker attacks), not in-degree (i.e. number of workers that attack the focal worker), of each observed network is right-skewed. Those having excessively large out-degrees are located near, but not at, the top of the hierarchy. We also discuss evolutionary implications of the discovered properties of dominance networks. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  20. Standby battery requirements for telecommunications power

    Energy Technology Data Exchange (ETDEWEB)

    May, G.J. [The Focus Partnership, 126 Main Street, Swithland, Loughborough, Leics LE12 8TJ (United Kingdom)

    2006-08-25

    The requirements for standby power for telecommunications are changing as the network moves from conventional systems to Internet Protocol (IP) telephony. These new systems require higher power levels closer to the user but the level of availability and reliability cannot be compromised if the network is to provide service in the event of a failure of the public utility. Many parts of these new networks are ac rather than dc powered with UPS systems for back-up power. These generally have lower levels of reliability than dc systems and the network needs to be designed such that overall reliability is not reduced through appropriate levels of redundancy. Mobile networks have different power requirements. Where there is a high density of nodes, continuity of service can be reasonably assured with short autonomy times. Furthermore, there is generally no requirement that these networks are the provider of last resort and therefore, specifications for continuity of power are directed towards revenue protection and overall reliability targets. As a result of these changes, battery requirements for reserve power are evolving. Shorter autonomy times are specified for parts of the network although a large part will continue to need support for hours rather than minutes. Operational temperatures are increasing and battery solutions that provide longer life in extreme conditions are becoming important. Different battery technologies will be discussed in the context of these requirements. Conventional large flooded lead/acid cells both with pasted and tubular plates are used in larger central office applications but the majority of requirements are met with valve-regulated lead/acid (VRLA) batteries. The different types of VRLA battery will be described and their suitability for various applications outlined. New developments in battery construction and battery materials have improved both performance and reliability in recent years. Alternative technologies are also being proposed

  1. Capacitive immunosensor for C-reactive protein quantification

    KAUST Repository

    Sapsanis, Christos

    2015-08-02

    We report an agglutination-based immunosensor for the quantification of C-reactive protein (CRP). The developed immunoassay sensor requires approximately 15 minutes of assay time per sample and provides a sensitivity of 0.5 mg/L. We have measured the capacitance of interdigitated electrodes (IDEs) and quantified the concentration of added analyte. The proposed method is a label free detection method and hence provides rapid measurement preferable in diagnostics. We have so far been able to quantify the concentration to as low as 0.5 mg/L and as high as 10 mg/L. By quantifying CRP in serum, we can assess whether patients are prone to cardiac diseases and monitor the risk associated with such diseases. The sensor is a simple low cost structure and it can be a promising device for rapid and sensitive detection of disease markers at the point-of-care stage.

  2. Autonomous vision networking: miniature wireless sensor networks with imaging technology

    Science.gov (United States)

    Messinger, Gioia; Goldberg, Giora

    2006-09-01

    The recent emergence of integrated PicoRadio technology, the rise of low power, low cost, System-On-Chip (SOC) CMOS imagers, coupled with the fast evolution of networking protocols and digital signal processing (DSP), created a unique opportunity to achieve the goal of deploying large-scale, low cost, intelligent, ultra-low power distributed wireless sensor networks for the visualization of the environment. Of all sensors, vision is the most desired, but its applications in distributed sensor networks have been elusive so far. Not any more. The practicality and viability of ultra-low power vision networking has been proven and its applications are countless, from security and chemical analysis to industrial monitoring, asset tracking and visual recognition; vision networking represents a truly disruptive technology applicable to many industries. The presentation discusses some of the critical components and technologies necessary to make these networks and products affordable and ubiquitous - specifically PicoRadios, CMOS imagers, imaging DSP, networking and overall wireless sensor network (WSN) system concepts. The paradigm shift, from large, centralized and expensive sensor platforms, to small, low cost, distributed, sensor networks, is possible due to the emergence and convergence of a few innovative technologies. Avaak has developed a vision network that is aided by other sensors such as motion, acoustic and magnetic, and plans to deploy it for use in military and commercial applications. In comparison to other sensors, imagers produce large data files that require pre-processing and a certain level of compression before these are transmitted to a network server, in order to minimize the load on the network. Some of the most innovative chemical detectors currently in development are based on sensors that change color or pattern in the presence of the desired analytes. These changes are easily recorded and analyzed by a CMOS imager and an on-board DSP processor

  3. Security-Enhanced Autonomous Network Management

    Science.gov (United States)

    Zeng, Hui

    2015-01-01

    Ensuring reliable communication in next-generation space networks requires a novel network management system to support greater levels of autonomy and greater awareness of the environment and assets. Intelligent Automation, Inc., has developed a security-enhanced autonomous network management (SEANM) approach for space networks through cross-layer negotiation and network monitoring, analysis, and adaptation. The underlying technology is bundle-based delay/disruption-tolerant networking (DTN). The SEANM scheme allows a system to adaptively reconfigure its network elements based on awareness of network conditions, policies, and mission requirements. Although SEANM is generically applicable to any radio network, for validation purposes it has been prototyped and evaluated on two specific networks: a commercial off-the-shelf hardware test-bed using Institute of Electrical and Electronics Engineers (IEEE) 802.11 Wi-Fi devices and a military hardware test-bed using AN/PRC-154 Rifleman Radio platforms. Testing has demonstrated that SEANM provides autonomous network management resulting in reliable communications in delay/disruptive-prone environments.

  4. Validation of methods for the detection and quantification of engineered nanoparticles in food

    DEFF Research Database (Denmark)

    Linsinger, T.P.J.; Chaudhry, Q.; Dehalu, V.

    2013-01-01

    the methods apply equally well to particles of different suppliers. In trueness testing, information whether the particle size distribution has changed during analysis is required. Results are largely expected to follow normal distributions due to the expected high number of particles. An approach...... approach for the validation of methods for detection and quantification of nanoparticles in food samples. It proposes validation of identity, selectivity, precision, working range, limit of detection and robustness, bearing in mind that each “result” must include information about the chemical identity...

  5. Rapid and Easy Protocol for Quantification of Next-Generation Sequencing Libraries.

    Science.gov (United States)

    Hawkins, Steve F C; Guest, Paul C

    2018-01-01

    The emergence of next-generation sequencing (NGS) over the last 10 years has increased the efficiency of DNA sequencing in terms of speed, ease, and price. However, the exact quantification of a NGS library is crucial in order to obtain good data on sequencing platforms developed by the current market leader Illumina. Different approaches for DNA quantification are available currently and the most commonly used are based on analysis of the physical properties of the DNA through spectrophotometric or fluorometric methods. Although these methods are technically simple, they do not allow exact quantification as can be achieved using a real-time quantitative PCR (qPCR) approach. A qPCR protocol for DNA quantification with applications in NGS library preparation studies is presented here. This can be applied in various fields of study such as medical disorders resulting from nutritional programming disturbances.
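
    The arithmetic behind qPCR-based library quantification is a standard curve: Cq is linear in log10 of the input concentration, so unknowns are interpolated from a regression over serially diluted standards. A minimal sketch with invented numbers:

        import numpy as np

        # Serial dilution of a quantified standard library (pM) and measured Cq values (illustrative)
        std_conc = np.array([10.0, 1.0, 0.1, 0.01, 0.001])
        std_cq = np.array([12.1, 15.5, 18.9, 22.3, 25.7])

        # Fit Cq = slope * log10(conc) + intercept
        slope, intercept = np.polyfit(np.log10(std_conc), std_cq, 1)
        efficiency = 10 ** (-1.0 / slope) - 1          # ~1.0 corresponds to 100% PCR efficiency

        def quantify(sample_cq, dilution_factor=1.0):
            """Interpolate an unknown library concentration (pM) from its Cq."""
            return 10 ** ((sample_cq - intercept) / slope) * dilution_factor

        print(f"efficiency = {efficiency:.2f}")
        print(f"library = {quantify(17.2, dilution_factor=1000):.0f} pM")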

  6. Cloud-Centric and Logically Isolated Virtual Network Environment Based on Software-Defined Wide Area Network

    Directory of Open Access Journals (Sweden)

    Dongkyun Kim

    2017-12-01

    Full Text Available Recent development of distributed cloud environments requires advanced network infrastructure in order to facilitate network automation, virtualization, high performance data transfer, and secured access of end-to-end resources across regional boundaries. In order to meet these innovative cloud networking requirements, software-defined wide area network (SD-WAN) is primarily demanded to converge distributed cloud resources (e.g., virtual machines (VMs)) in a programmable and intelligent manner over distant networks. Therefore, this paper proposes a logically isolated networking scheme designed to integrate distributed cloud resources to dynamic and on-demand virtual networking over SD-WAN. The performance evaluation and experimental results of the proposed scheme indicate that virtual network convergence time is minimized in two different network models such as: (1) an operating OpenFlow-oriented SD-WAN infrastructure (KREONET-S), which is deployed on the advanced national research network in Korea, and (2) Mininet-based experimental and emulated networks.

  7. Two-Phase Microfluidic Systems for High Throughput Quantification of Agglutination Assays

    KAUST Repository

    Castro, David

    2018-01-01

    assay, with a minimum detection limit of 50 ng/mL using optical image analysis. We compare optical image analysis and light scattering as quantification methods, and demonstrate the first light scattering quantification of agglutination assays in a two

  8. Reconfigurable network processing platforms

    NARCIS (Netherlands)

    Kachris, C.

    2007-01-01

    This dissertation presents our investigation on how to efficiently exploit reconfigurable hardware to design flexible, high performance, and power efficient network devices capable to adapt to varying processing requirements of network applications and traffic. The proposed reconfigurable network

  9. Security for 5G Mobile Wireless Networks

    OpenAIRE

    Fang, Dongfeng; Qian, Yi; Qingyang Hu, Rose

    2017-01-01

    The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive survey on security of 5G wireless network systems compared to the traditional cellular networks. The paper starts with a review on 5G wireless networks particularities as well as on the new requirements and motivations of 5G wireless security. The potential attacks and security services with the consideration of new service requirements and new use ca...

  10. Dried blood spot assay for the quantification of phenytoin using Liquid Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Villanelli, Fabio; Giocaliere, Elisa; Malvagia, Sabrina; Rosati, Anna; Forni, Giulia; Funghini, Silvia; Shokry, Engy; Ombrone, Daniela; Della Bona, Maria Luisa; Guerrini, Renzo; la Marca, Giancarlo

    2015-02-02

    Phenytoin (PHT) is one of the most commonly used anticonvulsant drugs for the treatment of epilepsy and bipolar disorders. The large amount of plasma required by conventional methods for drug quantification makes mass spectrometry combined with dried blood spot (DBS) sampling crucial for pediatric patients where therapeutic drug monitoring or pharmacokinetic studies may be difficult to realize. DBS represents a new convenient sampling support requiring minimally invasive blood drawing and providing long-term stability of samples and less expensive shipment and storage. The aim of this study was to develop a LC-MS/MS method for the quantification of PHT on DBS. This analytical method was validated and gave good linearity (r(2)=0.999) in the range of 0-100 mg/l. LOQ and LOD were 1.0 mg/l and 0.3 mg/l, respectively. The drug extraction from paper was performed in a few minutes using a mixture composed of 80% organic solvent. The recovery ranged from 85 to 90%; PHT in DBS was shown to be stable at different storage temperatures for one month. A good correlation was also obtained between PHT plasma and DBS concentrations. This method is both precise and accurate and appears to be particularly suitable to monitor treatment with a simple and convenient sample collection procedure. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  12. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptides signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves similar accuracy as the current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as a MatLab ® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  13. Remote quantification of phycocyanin in potable water sources through an adaptive model

    Science.gov (United States)

    Song, Kaishan; Li, Lin; Tedesco, Lenore P.; Li, Shuai; Hall, Bob E.; Du, Jia

    2014-09-01

    Cyanobacterial blooms in water supply sources in both central Indiana USA (CIN) and South Australia (SA) are a cause of great concerns for toxin production and water quality deterioration. Remote sensing provides an effective approach for quick assessment of cyanobacteria through quantification of phycocyanin (PC) concentration. In total, 363 samples spanning a large variation of optically active constituents (OACs) in CIN and SA waters were collected during 24 field surveys. Concurrently, remote sensing reflectance spectra (Rrs) were measured. A partial least squares-artificial neural network (PLS-ANN) model, artificial neural network (ANN) and three-band model (TBM) were developed or tuned by relating the Rrs with PC concentration. Our results indicate that the PLS-ANN model outperformed the ANN and TBM with both the original spectra and simulated ESA/Sentinel-3/Ocean and Land Color Instrument (OLCI) and EO-1/Hyperion spectra. The PLS-ANN model resulted in a high coefficient of determination (R2) for CIN dataset (R2 = 0.92, R: 0.3-220.7 μg/L) and SA (R2 = 0.98, R: 0.2-13.2 μg/L). In comparison, the TBM model yielded an R2 = 0.77 and 0.94 for the CIN and SA datasets, respectively; while the ANN obtained an intermediate modeling accuracy (CIN: R2 = 0.86; SA: R2 = 0.95). Applying the simulated OLCI and Hyperion aggregated datasets, the PLS-ANN model still achieved good performance (OLCI: R2 = 0.84; Hyperion: R2 = 0.90); the TBM also presented acceptable performance for PC estimations (OLCI: R2 = 0.65, Hyperion: R2 = 0.70). Based on the results, the PLS-ANN is an effective modeling approach for the quantification of PC in productive water supplies based on its effectiveness in solving the non-linearity of PC with other OACs. Furthermore, our investigation indicates that the ratio of inorganic suspended matter (ISM) to PC concentration has close relationship to modeling relative errors (CIN: R2 = 0.81; SA: R2 = 0.92), indicating that ISM concentration exert

  14. Molecular quantification of environmental DNA using microfluidics and digital PCR.

    Science.gov (United States)

    Hoshino, Tatsuhiko; Inagaki, Fumio

    2012-09-01

    Real-time PCR has been widely used to evaluate gene abundance in natural microbial habitats. However, PCR-inhibitory substances often reduce the efficiency of PCR, leading to the underestimation of target gene copy numbers. Digital PCR using microfluidics is a new approach that allows absolute quantification of DNA molecules. In this study, digital PCR was applied to environmental samples, and the effect of PCR inhibitors on DNA quantification was tested. In the control experiment using λ DNA and humic acids, underestimation of λ DNA at 1/4400 of the theoretical value was observed with 6.58 ng μL(-1) humic acids. In contrast, digital PCR provided accurate quantification data with a concentration of humic acids up to 9.34 ng μL(-1). The inhibitory effect of paddy field soil extract on quantification of the archaeal 16S rRNA gene was also tested. By diluting the DNA extract, quantified copy numbers from real-time PCR and digital PCR became similar, indicating that dilution was a useful way to remedy PCR inhibition. The dilution strategy was, however, not applicable to all natural environmental samples. For example, when marine subsurface sediment samples were tested the copy number of archaeal 16S rRNA genes was 1.04×10(3) copies/g-sediment by digital PCR, whereas real-time PCR only resulted in 4.64×10(2) copies/g-sediment, which was most likely due to an inhibitory effect. The data from this study demonstrated that inhibitory substances had little effect on DNA quantification using microfluidics and digital PCR, and showed the great advantages of digital PCR in accurate quantifications of DNA extracted from various microbial habitats. Copyright © 2012 Elsevier GmbH. All rights reserved.
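
    The absolute quantification in digital PCR rests on the same Poisson correction: if a fraction p of partitions is positive, the mean copy number per partition is lambda = -ln(1 - p), and the sample concentration follows from the partition volume and dilution. A short sketch (counts and partition volume are illustrative):

        import math

        def dpcr_concentration(n_positive, n_partitions, partition_volume_nl, dilution=1.0):
            """Copies per microlitre of the undiluted sample from digital PCR counts.

            Requires at least one negative partition; otherwise lambda is unbounded.
            """
            p = n_positive / n_partitions
            lam = -math.log(1.0 - p)                    # mean copies per partition (Poisson)
            copies_per_ul = lam / partition_volume_nl * 1000.0
            return copies_per_ul * dilution

        # Example: 4,800 positives out of 20,000 partitions of 0.85 nl, sample diluted 10x
        print(f"{dpcr_concentration(4800, 20000, 0.85, dilution=10):.0f} copies/uL")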

  15. Probabilistic modelling of security of supply in gas networks and evaluation of new infrastructure

    International Nuclear Information System (INIS)

    Praks, Pavel; Kopustinskas, Vytis; Masera, Marcelo

    2015-01-01

    The paper presents a probabilistic model to study security of supply in a gas network. The model is based on Monte-Carlo simulations with graph theory, and is implemented in the software tool ProGasNet. The software allows studying gas networks in various aspects including identification of weakest links and nodes, vulnerability analysis, bottleneck analysis, evaluation of new infrastructure etc. In this paper ProGasNet is applied to a benchmark network based on a real EU gas transmission network of several countries with the purpose of evaluating the security of supply effects of new infrastructure, either under construction, recently completed or under planning. The probabilistic model enables quantitative evaluations by comparing the reliability of gas supply in each consuming node of the network. - Highlights: • A Monte-Carlo algorithm for stochastic flow networks is presented. • Network elements can fail according to a given probabilistic model. • Priority supply pattern of gas transmission networks is assumed. • A real-world EU gas transmission network is presented and analyzed. • A risk ratio is used for security of supply quantification of a new infrastructure.
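
    The Monte-Carlo-plus-graph-theory idea can be sketched generically with networkx: sample random component failures, compute the maximum flow that can reach a consuming node, and report how often demand is met. This is only a toy sketch, not ProGasNet; the topology, capacities and failure probabilities are invented.

        import random
        import networkx as nx

        def supply_reliability(edges, source, sink, demand, n_trials=10000, seed=1):
            """Fraction of Monte Carlo trials in which max flow from source to sink meets demand.

            `edges` is a list of (u, v, capacity, failure_probability) tuples.
            """
            rng = random.Random(seed)
            met = 0
            for _ in range(n_trials):
                g = nx.DiGraph()
                for u, v, cap, p_fail in edges:
                    if rng.random() > p_fail:            # pipeline survives this trial
                        g.add_edge(u, v, capacity=cap)
                if g.has_node(source) and g.has_node(sink):
                    flow_value, _ = nx.maximum_flow(g, source, sink)
                    if flow_value >= demand:
                        met += 1
            return met / n_trials

        # Toy network: one supply node feeding a city through two parallel routes
        edges = [("supply", "route_A", 60, 0.02), ("supply", "route_B", 50, 0.05),
                 ("route_A", "city", 60, 0.01), ("route_B", "city", 50, 0.03)]
        print(supply_reliability(edges, "supply", "city", demand=40))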

  16. [Self-owned versus accredited network: comparative cost analysis in a Brazilian health insurance provider].

    Science.gov (United States)

    Souza, Marcos Antônio de; Salvalaio, Dalva

    2010-10-01

    To analyze the cost of a self-owned network maintained by a Brazilian health insurance provider as compared to the price charged by accredited service providers, so as to identify whether or not the self-owned network is economically advantageous. For this exploratory study, the company's management reports were reviewed. The cost associated with the self-owned network was calculated based on medical and dental office visits and diagnostic/laboratory tests performed at one of the company's most representative facilities. The costs associated with third parties were derived from price tables used by the accredited network for the same services analyzed in the self-owned network. The full-cost method was used for cost quantification. Costs are presented as absolute values (in R$) and percent comparisons between self-owned network costs versus accredited network costs. Overall, the self-owned network was advantageous for medical and dental consultations as well as diagnostic and laboratory tests. Pediatric and labor medicine consultations and x-rays were less costly in the accredited network. The choice of verticalization has economic advantages for the health care insurance operator in comparison with services provided by third parties.

  17. Myoblots: dystrophin quantification by in-cell western assay for a streamlined development of Duchenne muscular dystrophy (DMD) treatments.

    Science.gov (United States)

    Ruiz-Del-Yerro, E; Garcia-Jimenez, I; Mamchaoui, K; Arechavala-Gomeza, V

    2017-10-31

    New therapies for neuromuscular disorders are often mutation-specific and need to be studied in patients' cell cultures. In Duchenne muscular dystrophy (DMD), dystrophin restoration drugs are being developed, but as muscle cell cultures from DMD patients are scarce and do not grow or differentiate well, only a limited number of candidate drugs are tested. Moreover, dystrophin quantification by western blotting requires a large number of cultured cells, so fewer compounds are as thoroughly screened as is desirable. We aimed to develop a quantitative assessment tool using fewer cells to contribute to the study of dystrophin and to identify better drug candidates. An 'in-cell western' assay is a quantitative immunofluorescence assay performed in cell culture microplates that allows protein quantification directly in culture, allowing a higher number of experimental repeats and throughput. We have optimized the assay ('myoblot') to be applied to the study of differentiated myoblast cultures. After an exhaustive optimization of the technique to adapt it to the growth and differentiation rates of our cultures and the low intrinsic expression of our proteins of interest, our myoblot protocol allows the quantification of dystrophin and other muscle-associated proteins in muscle cell cultures. We are able to distinguish accurately between the different sets of patients based on their dystrophin expression and detect dystrophin restoration after treatment. We expect that this new tool to quantify muscle proteins in DMD and other muscle disorders will aid in their diagnosis and in the development of new therapies. © 2017 British Neuropathological Society.

  18. Sludge quantification at water treatment plant and its management scenario.

    Science.gov (United States)

    Ahmad, Tarique; Ahmad, Kafeel; Alam, Mehtab

    2017-08-15

    A large volume of sludge is generated at the water treatment plants during the purification of surface water for potable supplies. Handling and disposal of sludge require careful attention from civic bodies, plant operators, and environmentalists. Quantification of the sludge produced at the treatment plants is important to develop suitable management strategies for its economical and environment-friendly disposal. The present study deals with the quantification of sludge using an empirical relation between turbidity, suspended solids, and coagulant dosing. Seasonal variation has a significant effect on the raw water quality received at the water treatment plants, and hence sludge generation also varies. Yearly production of the sludge in a water treatment plant at Ghaziabad, India, is estimated to be 29,700 ton. Sustainable disposal of such a quantity of sludge is a challenging task under stringent environmental legislation. Several beneficial reuses of sludge in civil engineering and constructional work have been identified globally, such as raw material in manufacturing cement, bricks, and artificial aggregates, as cementitious material, and sand substitute in preparing concrete and mortar. About 54 to 60% sand, 24 to 28% silt, and 16% clay constitute the sludge generated at the water treatment plant under investigation. Characteristics of the sludge are found suitable for its potential utilization as locally available construction material for safe disposal. An overview of the sustainable management scenario involving beneficial reuses of the sludge has also been presented.

  19. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost
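
    The truncated abstract refers to computing the diagonal of an inverse covariance matrix without a cubic-cost factorization. One standard low-cost route is a stochastic probing estimator that needs only matrix solves; the sketch below (with a made-up test matrix) illustrates the idea and is not necessarily the authors' exact algorithm.

        import numpy as np

        def estimate_inverse_diagonal(A, n_probes=200, seed=0):
            """Stochastic estimate of diag(A^{-1}) without forming the full inverse.

            Uses Rademacher probes v and solves A x = v, then
            diag(A^{-1}) ~= (sum_s v_s * x_s) / (sum_s v_s * v_s)  (elementwise).
            """
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            num = np.zeros(n)
            den = np.zeros(n)
            for _ in range(n_probes):
                v = rng.choice([-1.0, 1.0], size=n)
                x = np.linalg.solve(A, v)       # in practice: a sparse or iterative solver
                num += v * x
                den += v * v
            return num / den

        # Toy SPD "covariance" matrix to check against the exact inverse diagonal
        rng = np.random.default_rng(1)
        B = rng.standard_normal((50, 50))
        A = B @ B.T + 50 * np.eye(50)
        print(np.max(np.abs(estimate_inverse_diagonal(A) - np.diag(np.linalg.inv(A)))))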

  20. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-07

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a continuous-time Markov chain whose states represent the molecular counts of various species. For such models, effects of parameter uncertainty are often quantified by estimating the infinitesimal sensitivities of some observables with respect to model parameters. The aim of this talk is to present a holistic approach towards this problem of estimating parameter sensitivities for stochastic reaction networks. Our approach is based on a generic formula which allows us to construct efficient estimators for parameter sensitivity using simulations of the underlying model. We will discuss how novel simulation techniques, such as tau-leaping approximations, multi-level methods etc. can be easily integrated with our approach and how one can deal with stiff reaction networks where reactions span multiple time-scales. We will demonstrate the efficiency and applicability of our approach using many examples from the biological literature.
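
    The talk's generic sensitivity formula is not reproduced here; as a baseline for comparison, the quantity in question can be illustrated with the simplest estimator, a central finite difference of an expected molecular count over Gillespie simulations with common random seeds. The birth-death network and parameters are invented; its stationary sensitivity dE[X]/dk_prod = 1/k_deg provides a sanity check.

        import numpy as np

        def gillespie_birth_death(k_prod, k_deg, t_end, rng):
            """SSA for the network 0 -> X (rate k_prod), X -> 0 (rate k_deg * X); returns X(t_end)."""
            t, x = 0.0, 0
            while True:
                a_prod, a_deg = k_prod, k_deg * x
                a_total = a_prod + a_deg
                t += rng.exponential(1.0 / a_total)
                if t >= t_end:
                    return x
                x += 1 if rng.random() * a_total < a_prod else -1

        def mean_copy_number(k_prod, n_paths, seed):
            rng = np.random.default_rng(seed)
            return np.mean([gillespie_birth_death(k_prod, 0.1, 50.0, rng) for _ in range(n_paths)])

        # Central finite difference of E[X(T)] w.r.t. k_prod, common seeds for variance reduction
        h = 0.05
        sens = (mean_copy_number(1.0 + h, 5000, 7) - mean_copy_number(1.0 - h, 5000, 7)) / (2 * h)
        print(sens)   # close to the stationary value 1/k_deg = 10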

  1. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.

  2. Rapid Quantification and Validation of Lipid Concentrations within Liposomes

    Directory of Open Access Journals (Sweden)

    Carla B. Roces

    2016-09-01

    Full Text Available Quantification of the lipid content in liposomal adjuvants for subunit vaccine formulation is of extreme importance, since this concentration impacts both efficacy and stability. In this paper, we outline a high performance liquid chromatography-evaporative light scattering detector (HPLC-ELSD) method that allows for the rapid and simultaneous quantification of lipid concentrations within liposomal systems prepared by three liposomal manufacturing techniques (lipid film hydration, high shear mixing, and microfluidics). The ELSD system was used to quantify four lipids: 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC), cholesterol, dimethyldioctadecylammonium (DDA) bromide, and ᴅ-(+)-trehalose 6,6′-dibehenate (TDB). The developed method offers rapidity, high sensitivity, direct linearity, and a good consistency on the responses (R2 > 0.993) for the four lipids tested. The corresponding limit of detection (LOD) and limit of quantification (LOQ) were 0.11 and 0.36 mg/mL (DMPC), 0.02 and 0.80 mg/mL (cholesterol), 0.06 and 0.20 mg/mL (DDA), and 0.05 and 0.16 mg/mL (TDB), respectively. HPLC-ELSD was shown to be a rapid and effective method for the quantification of lipids within liposome formulations without the need for lipid extraction processes.
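
    LOD and LOQ figures of this kind are conventionally derived from the calibration curve as 3.3*sigma/S and 10*sigma/S, where sigma is the residual standard deviation and S the slope. A generic sketch with invented calibration data (not the paper's measurements):

        import numpy as np

        conc = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])          # standard concentrations, mg/mL
        peak_area = np.array([11.8, 30.2, 60.5, 122.0, 239.5, 482.0])

        slope, intercept = np.polyfit(conc, peak_area, 1)
        residuals = peak_area - (slope * conc + intercept)
        residual_sd = np.std(residuals, ddof=2)                    # 2 fitted parameters

        lod = 3.3 * residual_sd / slope
        loq = 10.0 * residual_sd / slope
        print(f"LOD = {lod:.3f} mg/mL, LOQ = {loq:.3f} mg/mL")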

  3. Late Noachian fluvial erosion on Mars: Cumulative water volumes required to carve the valley networks and grain size of bed-sediment

    Science.gov (United States)

    Rosenberg, Eliott N.; Head, James W., III

    2015-11-01

    Our goal is to quantify the cumulative water volume that was required to carve the Late Noachian valley networks on Mars. We employ an improved methodology in which fluid/sediment flux ratios are based on empirical data, not assumed. We use a large quantity of data from terrestrial rivers to assess the variability of actual fluid/sediment flux ratios. We find the flow depth by using an empirical relationship to estimate the fluid flux from the estimated channel width, and then using estimated grain sizes (theoretical sediment grain size predictions and comparison with observations by the Curiosity rover) to find the flow depth to which the resulting fluid flux corresponds. Assuming that the valley networks contained alluvial bed rivers, we find, from their current slopes and widths, that the onset of suspended transport occurs near the sand-gravel boundary. Thus, any bed sediment must have been fine gravel or coarser, whereas fine sediment would be carried downstream. Subsequent to the cessation of fluvial activity, aeolian processes have partially redistributed fine-grain particles in the valleys, often forming dunes. It seems likely that the dominant bed sediment size was near the threshold for suspension, and assuming that this was the case could make our final results underestimates, which is the same tendency that our other assumptions have. Making this assumption, we find a global equivalent layer (GEL) of 3-100 m of water to be the most probable cumulative volume that passed through the valley networks. This value is similar to the ∼34 m water GEL currently on the surface and in the near-surface in the form of ice. Note that the amount of water required to carve the valley networks could represent the same water recycled through a surface valley network hydrological system many times in separate or continuous precipitation/runoff/collection/evaporation/precipitation cycles.

  4. Ct shift: A novel and accurate real-time PCR quantification model for direct comparison of different nucleic acid sequences and its application for transposon quantifications.

    Science.gov (United States)

    Kolacsek, Orsolya; Pergel, Enikő; Varga, Nóra; Apáti, Ágota; Orbán, Tamás I

    2017-01-20

    There are numerous applications of quantitative PCR for both diagnostic and basic research. As in many other techniques the basis of quantification is that comparisons are made between different (unknown and known or reference) specimens of the same entity. When the aim is to compare real quantities of different species in samples, one cannot escape their separate precise absolute quantification. We have established a simple and reliable method for this purpose (Ct shift method) which combines the absolute and the relative approach. It requires a plasmid standard containing both sequences of amplicons to be compared (e.g. the target of interest and the endogenous control). It can serve as a reference sample with equal copies of templates for both targets. Using the ΔΔCt formula we can quantify the exact ratio of the two templates in each unknown sample. The Ct shift method has been successfully applied for transposon gene copy measurements, as well as for comparison of different mRNAs in cDNA samples. This study provides the proof of concept and introduces some potential applications of the method; the absolute nature of results even without the need for real reference samples can contribute to the universality of the method and comparability of different studies. Copyright © 2016 Elsevier B.V. All rights reserved.
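
    A minimal sketch of the ΔΔCt arithmetic behind the Ct shift idea, assuming equal amplification efficiencies (E = 2) and a reference plasmid carrying one copy of each amplicon; all Ct values below are invented:

        def ct_shift_ratio(ct_target_ref, ct_control_ref,
                           ct_target_sample, ct_control_sample, efficiency=2.0):
            """Copy ratio target/control in a sample, calibrated on a reference plasmid
            whose true target/control ratio is 1."""
            delta_ref = ct_target_ref - ct_control_ref        # Ct shift expected at ratio 1
            delta_sample = ct_target_sample - ct_control_sample
            ddct = delta_sample - delta_ref
            return efficiency ** (-ddct)

        # Example: transposon copies relative to a single-copy endogenous control gene
        print(ct_shift_ratio(24.8, 23.6, 21.1, 23.5))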

  5. Growth Limits in Large Scale Networks

    DEFF Research Database (Denmark)

    Knudsen, Thomas Phillip

    limitations. The rising complexity of network management with the convergence of communications platforms is shown as problematic for both automatic management feasibility and for manpower resource management. In the fourth step the scope is extended to include the present society with the DDN project as its......The Subject of large scale networks is approached from the perspective of the network planner. An analysis of the long term planning problems is presented with the main focus on the changing requirements for large scale networks and the potential problems in meeting these requirements. The problems...... the fundamental technological resources in network technologies are analysed for scalability. Here several technological limits to continued growth are presented. The third step involves a survey of major problems in managing large scale networks given the growth of user requirements and the technological...

  6. Superlattice band structure: New and simple energy quantification condition

    Energy Technology Data Exchange (ETDEWEB)

    Maiz, F., E-mail: fethimaiz@gmail.com [University of Cartage, Nabeul Engineering Preparatory Institute, Merazka, 8000 Nabeul (Tunisia); King Khalid University, Faculty of Science, Physics Department, P.O. Box 9004, Abha 61413 (Saudi Arabia)

    2014-10-01

    Assuming an approximated effective mass and using Bastard's boundary conditions, a simple method is used to calculate the subband structure for periodic semiconducting heterostructures. Our method consists of deriving and solving the energy quantification condition (EQC); this is a simple real equation, composed of trigonometric and hyperbolic functions, and does not need any programming effort or sophisticated machinery to solve. For heterostructures with fewer than ten wells, we have derived and simplified the energy quantification conditions. The subband is built point by point; each point represents an energy level. Our simple energy quantification condition is used to calculate the subband structure of the GaAs/Ga{sub 0.5}Al{sub 0.5}As heterostructures, and to build its subband point by point for 4 and 20 wells. Our findings show good agreement with previously published results.
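
    The paper's own EQC is not reproduced here, but for a single band with one constant effective mass the familiar Kronig-Penney condition has exactly the advertised character: a real equation mixing trigonometric and hyperbolic functions. With well width a, barrier width b and height V0, k = sqrt(2m*E)/hbar and kappa = sqrt(2m*(V0 - E))/hbar for E < V0,

        \cos\big(q(a+b)\big) \;=\; \cos(ka)\,\cosh(\kappa b) \;+\; \frac{\kappa^{2}-k^{2}}{2k\kappa}\,\sin(ka)\,\sinh(\kappa b),

    where q is the superlattice wave vector. Sweeping E and retaining the energies for which the right-hand side lies in [-1, 1] builds the subband point by point; Bastard's boundary conditions with different well and barrier masses modify only the coefficient of the sin-sinh term.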

  7. Quantification of rapid Myosin regulatory light chain phosphorylation using high-throughput in-cell Western assays: comparison to Western immunoblots.

    Directory of Open Access Journals (Sweden)

    Hector N Aguilar

    2010-04-01

    Full Text Available Quantification of phospho-proteins (PPs) is crucial when studying cellular signaling pathways. Western immunoblotting (WB) is commonly used for the measurement of relative levels of signaling intermediates in experimental samples. However, WB is in general a labour-intensive and low-throughput technique. Because of variability in protein yield and phospho-signal preservation during protein harvesting, and potential loss of antigen during protein transfer, WB provides only semi-quantitative data. By comparison, the "in-cell western" (ICW) technique has high-throughput capacity and requires less extensive sample preparation. Thus, we compared the ICW technique to WB for measuring phosphorylated myosin regulatory light chain (PMLC20) in primary cultures of uterine myocytes to assess their relative specificity, sensitivity, precision, and quantification of biologically relevant responses. ICWs are cell-based microplate assays for quantification of protein targets in their cellular context. ICWs utilize a two-channel infrared (IR) scanner (Odyssey®) to quantify signals arising from near-infrared (NIR) fluorophores conjugated to secondary antibodies. One channel is dedicated to measuring the protein of interest and the second is used for data normalization of the signal in each well of the microplate. Using uterine myocytes, we assessed oxytocin (OT)-stimulated MLC20 phosphorylation measured by ICW and WB, both using NIR fluorescence. ICW and WB data were comparable regarding signal linearity, signal specificity, and time course of phosphorylation response to OT. ICW and WB yield comparable biological data. The advantages of ICW over WB are its high-throughput capacity, improved precision, and reduced sample preparation requirements. ICW might provide better sensitivity and precision with low-quantity samples or for protocols requiring large numbers of samples. These features make the ICW technique an excellent tool for the study of phosphorylation endpoints

  8. Software Defined Networking Demands on Software Technologies

    DEFF Research Database (Denmark)

    Galinac Grbac, T.; Caba, Cosmin Marius; Soler, José

    2015-01-01

    Software Defined Networking (SDN) is a networking approach based on a centralized control plane architecture with standardised interfaces between control and data planes. SDN enables fast configuration and reconfiguration of the network to enhance resource utilization and service performances....... This new approach enables a more dynamic and flexible network, which may adapt to user needs and application requirements. To this end, systemized solutions must be implemented in network software, aiming to provide secure network services that meet the required service performance levels. In this paper......, we review this new approach to networking from an architectural point of view, and identify and discuss some critical quality issues that require new developments in software technologies. These issues we discuss along with use case scenarios. Here in this paper we aim to identify challenges...

  9. Quantification bias caused by plasmid DNA conformation in quantitative real-time PCR assay.

    Science.gov (United States)

    Lin, Chih-Hui; Chen, Yu-Chieh; Pan, Tzu-Ming

    2011-01-01

    Quantitative real-time PCR (qPCR) is the gold standard for the quantification of specific nucleic acid sequences. However, a serious concern has been revealed in a recent report: supercoiled plasmid standards cause significant over-estimation in qPCR quantification. In this study, we investigated the effect of plasmid DNA conformation on the quantification of DNA and the efficiency of qPCR. Our results suggest that plasmid DNA conformation has significant impact on the accuracy of absolute quantification by qPCR. DNA standard curves shifted significantly among plasmid standards with different DNA conformations. Moreover, the choice of DNA measurement method and plasmid DNA conformation may also contribute to the measurement error of DNA standard curves. Due to the multiple effects of plasmid DNA conformation on the accuracy of qPCR, efforts should be made to assure the highest consistency of plasmid standards for qPCR. Thus, we suggest that the conformation, preparation, quantification, purification, handling, and storage of standard plasmid DNA should be described and defined in the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) to assure the reproducibility and accuracy of qPCR absolute quantification.

  10. Quantification of biopharmaceuticals and biomarkers in complex biological matrices: a comparison of liquid chromatography coupled to tandem mass spectrometry and ligand binding assays

    NARCIS (Netherlands)

    Bults, Peter; van de Merbel, Nico C; Bischoff, Rainer

    2015-01-01

    The quantification of proteins (biopharmaceuticals or biomarkers) in complex biological samples such as blood plasma requires exquisite sensitivity and selectivity, as all biological matrices contain myriads of proteins that are all made of the same 20 proteinogenic amino acids, notwithstanding

  11. The distribution of time for Clark's flow and risk assessment for the activities of pert network structure

    Directory of Open Access Journals (Sweden)

    Letić Duško

    2009-01-01

    Full Text Available This paper presents ways of quantifying flow times that can be used for planning and other stochastic processes, employing Clark's method, the central limit theorem and Monte Carlo simulation. The results of theoretical research on superposed flow time quantification for complex activity and event flows in a PERT network for project management are also presented. By extending Clark's research, we have made a generalization of flow models for parallel and ordinal activities and events, and specifically for their critical and subcritical paths. This can prevent planning errors and decrease the project realization risk. The software solution is based on Clark's equations and Monte Carlo simulation. The numerical experiment is conducted using Mathcad Professional.
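
    Clark's equations referred to above give the mean and variance of the maximum of two (possibly correlated) normally distributed path durations meeting at a network node; applied recursively, or replaced by plain Monte Carlo sampling, they yield the project completion-time distribution. A sketch of the standard form of those moment formulas (example numbers invented, paths assumed independent):

        from math import sqrt
        from scipy.stats import norm

        def clark_max(mu1, var1, mu2, var2, rho=0.0):
            """Clark (1961) moment approximation of max(X1, X2) for jointly normal
            path durations; returns (mean, variance) of the maximum."""
            a = sqrt(var1 + var2 - 2.0 * rho * sqrt(var1 * var2))
            alpha = (mu1 - mu2) / a
            phi, Phi = norm.pdf(alpha), norm.cdf(alpha)
            mean = mu1 * Phi + mu2 * (1.0 - Phi) + a * phi
            second = (mu1**2 + var1) * Phi + (mu2**2 + var2) * (1.0 - Phi) + (mu1 + mu2) * a * phi
            return mean, second - mean**2

        # Two parallel paths merging at a PERT node: durations 10 +/- 2 and 9 +/- 3 time units
        print(clark_max(10.0, 4.0, 9.0, 9.0))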

  12. Experimental design for TBT quantification by isotope dilution SPE-GC-ICP-MS under the European water framework directive.

    Science.gov (United States)

    Alasonati, Enrica; Fabbri, Barbara; Fettig, Ina; Yardin, Catherine; Del Castillo Busto, Maria Estela; Richter, Janine; Philipp, Rosemarie; Fisicaro, Paola

    2015-03-01

    In Europe the maximum allowable concentration for tributyltin (TBT) compounds in surface water has been regulated by the water framework directive (WFD) and daughter directive that impose a limit of 0.2 ng L(-1) in whole water (as tributyltin cation). Despite the large number of different methodologies for the quantification of organotin species developed in the last two decades, standardised analytical methods at required concentration level do not exist. TBT quantification at picogram level requires efficient and accurate sample preparation and preconcentration, and maximum care to avoid blank contamination. To meet the WFD requirement, a method for the quantification of TBT in mineral water at environmental quality standard (EQS) level, based on solid phase extraction (SPE), was developed and optimised. The quantification was done using species-specific isotope dilution (SSID) followed by gas chromatography (GC) coupled to inductively coupled plasma mass spectrometry (ICP-MS). The analytical process was optimised using a design of experiment (DOE) based on a factorial fractionary plan. The DOE allowed to evaluate 3 qualitative factors (type of stationary phase and eluent, phase mass and eluent volume, pH and analyte ethylation procedure) for a total of 13 levels studied, and a sample volume in the range of 250-1000 mL. Four different models fitting the results were defined and evaluated with statistic tools: one of them was selected and optimised to find the best procedural conditions. C18 phase was found to be the best stationary phase for SPE experiments. The 4 solvents tested with C18, the pH and ethylation conditions, the mass of the phases, the volume of the eluents and the sample volume can all be optimal, but depending on their respective combination. For that reason, the equation of the model conceived in this work is a useful decisional tool for the planning of experiments, because it can be applied to predict the TBT mass fraction recovery when the

  13. Self-Awareness in Computer Networks

    Directory of Open Access Journals (Sweden)

    Ariane Keller

    2014-01-01

    Full Text Available The Internet architecture works well for a wide variety of communication scenarios. However, its flexibility is limited because it was initially designed to provide communication links between a few static nodes in a homogeneous network and did not attempt to solve the challenges of today's dynamic network environments. Although the Internet has evolved into a global system of interconnected computer networks, which links together billions of heterogeneous compute nodes, its static architecture has remained more or less the same. Nowadays the diversity in networked devices, communication requirements, and network conditions varies heavily, which makes it difficult for a static set of protocols to provide the required functionality. Therefore, we propose a self-aware network architecture in which protocol stacks can be built dynamically. These protocol stacks can be optimized continuously during communication according to the current requirements. For this network architecture we propose an FPGA-based execution environment called EmbedNet that allows for a dynamic mapping of network protocols to either hardware or software. We show that our architecture can reduce the communication overhead significantly by adapting the protocol stack, and that the dynamic hardware/software mapping of protocols considerably reduces the CPU load introduced by packet processing.

  14. Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.

    2009-01-01

    This paper focuses on a generic model for service network design, which includes asset positioning and utilization through constraints on asset availability at terminals. We denote these relations as "design-balance constraints" and focus on the design-balanced capacitated multicommodity network design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation. Results on a wide range of network design problem instances from the literature indicate the proposed method behaves very well in terms of computational efficiency and solution quality.
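
    To illustrate the tabu search mechanics mentioned in the abstract (neighbourhood moves, tabu tenure, aspiration), here is a generic skeleton over binary "open/close arc" design decisions for a toy objective. It is not the paper's arc-based formulation or cost model; the fixed costs and feasibility penalty are stand-ins.

```python
# Generic tabu-search skeleton for a toy network design problem; illustrative only.
import random

random.seed(1)
N_ARCS = 12
FIXED_COST = [random.randint(5, 20) for _ in range(N_ARCS)]

def cost(design):
    """Toy objective: fixed cost of open arcs plus a penalty if fewer than
    four arcs are open (a stand-in for flow feasibility/design balance)."""
    open_arcs = sum(design)
    penalty = 100 * max(0, 4 - open_arcs)
    return sum(f for f, d in zip(FIXED_COST, design) if d) + penalty

def tabu_search(iters=200, tenure=5):
    current = [1] * N_ARCS
    best, best_cost = current[:], cost(current)
    tabu = {}  # arc index -> iteration until which flipping it is tabu
    for it in range(iters):
        candidates = []
        for arc in range(N_ARCS):
            neigh = current[:]
            neigh[arc] ^= 1                                   # flip one design variable
            c = cost(neigh)
            admissible = tabu.get(arc, -1) < it or c < best_cost  # aspiration criterion
            if admissible:
                candidates.append((c, arc, neigh))
        c, arc, current = min(candidates)                     # best admissible move
        tabu[arc] = it + tenure
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

design, value = tabu_search()
print("best design:", design, "cost:", value)
```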

  15. Future Wireless Network: MyNET Platform and End-to-End Network Slicing

    OpenAIRE

    Zhang, Hang

    2016-01-01

    Future wireless networks are facing new challenges. These new challenges require new solutions and strategies of the network deployment, management, and operation. Many driving factors are decisive in the re-definition and re-design of the future wireless network architecture. In the previously published paper "5G Wireless Network - MyNET and SONAC", MyNET and SONAC, a future network architecture, are described. This paper elaborates MyNET platform with more details. The design principles of ...

  16. Learning OpenStack networking (Neutron)

    CERN Document Server

    Denton, James

    2014-01-01

    If you are an OpenStack-based cloud operator with experience in OpenStack Compute and nova-network but are new to Neutron networking, then this book is for you. Some networking experience is recommended, and a physical network infrastructure is required to provide connectivity to instances and other network resources configured in the book.

  17. Cascading Failures and Recovery in Networks of Networks

    Science.gov (United States)

    Havlin, Shlomo

    Network science has focused on the properties of a single isolated network that does not interact with or depend on other networks. In reality, many real networks, such as power grids, transportation and communication infrastructures, interact with and depend on other networks. I will present a framework for studying the vulnerability and the recovery of networks of interdependent networks. In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This is also the case when some nodes, like certain locations, play a role in two networks (multiplex). This may happen recursively and can lead to a cascade of failures and to a sudden fragmentation of the system. I will present analytical solutions for the critical threshold and the giant component of a network of n interdependent networks. I will show that the general theory has many novel features that are not present in classical network theory. When recovery of components is possible, global spontaneous recovery of the networks and hysteresis phenomena occur, and the theory suggests an optimal repairing strategy for a system of systems. I will also show that interdependent networks embedded in space are significantly more vulnerable than non-embedded networks. In particular, small localized attacks may lead to cascading failures and catastrophic consequences. Thus, analyzing data of real networks of networks is highly required to understand system vulnerability. DTRA, ONR, Israel Science Foundation.
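
    The cascade mechanism described in the abstract can be reproduced numerically on a small scale. The sketch below couples two Erdos-Renyi networks through one-to-one dependency links and iterates the mutual giant component after a random attack; it illustrates the cascade only and is not the analytical percolation framework presented in the talk. Network sizes and the attack fraction are arbitrary choices.

```python
# Numerical sketch of cascading failures in two interdependent random networks.
import random
import networkx as nx

random.seed(0)
N, k = 1000, 4.0
A = nx.gnp_random_graph(N, k / N, seed=1)
B = nx.gnp_random_graph(N, k / N, seed=2)

# Initial random attack: keep a fraction p of the nodes (same labels in A and B).
p = 0.6
alive = set(random.sample(range(N), int(p * N)))

def giant_component(G, alive_nodes):
    H = G.subgraph(alive_nodes)
    if H.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(H), key=len)

# Iterate until the mutually connected giant component stabilizes:
# a node stays functional only if it lies in the giant component of both networks.
while True:
    new_alive = giant_component(A, alive) & giant_component(B, alive)
    if new_alive == alive:
        break
    alive = new_alive

print(f"surviving fraction of mutually connected nodes: {len(alive) / N:.3f}")
```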

  18. Next Generation Social Networks

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Skouby, Knud Erik

    2008-01-01

    different online networks for communities of people who share interests or individuals who present themselves through user-produced content is what makes up the social networking of today. The purpose of this paper is to discuss perceived user requirements for the next generation social networks. The paper

  19. Network-Centric Applications and Tactical Networks

    National Research Council Canada - National Science Library

    Krout, Timothy; Durbano, Steven; Shearer, Ruth

    2003-01-01

    .... Command and control applications will always require communications capabilities. There are numerous examples of command and control applications that have been developed without adequate attention to the realities of tactical networks...

  20. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places higher demands on quantification methods using mass spectrometry technology. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and builds a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve the accuracy of quantification with better dynamic range.
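
    freeQuant's exact algorithms are not reproduced here; as a point of reference, the sketch below computes a standard spectral-count abundance measure (NSAF, the normalized spectral abundance factor), which likewise combines spectral counts with protein length. The protein names, counts and lengths are hypothetical.

```python
# NSAF sketch: (SpC / length) normalized so that all proteins in a sample sum to 1.
def nsaf(spectral_counts, lengths):
    saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
    total = sum(saf.values())
    return {p: v / total for p, v in saf.items()}

counts  = {"ATP5A1": 120, "NDUFS1": 45, "CYCS": 30}    # MS/MS spectral counts (hypothetical)
lengths = {"ATP5A1": 553, "NDUFS1": 727, "CYCS": 105}  # protein lengths in residues

for protein, value in nsaf(counts, lengths).items():
    print(f"{protein}: NSAF = {value:.3f}")
```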

  1. Optical storage networking

    Science.gov (United States)

    Mohr, Ulrich

    2001-11-01

    For efficient business continuance and backup of mission-critical data an inter-site storage network is required. Where traditional telecommunications costs are prohibitive for all but the largest organizations, there is an opportunity for regional carriers to deliver an innovative storage service. This session reveals how a combination of optical networking and protocol-aware SAN gateways can provide an extended storage networking platform with the lowest cost of ownership and the highest possible degree of reliability, security and availability. Companies of every size, with mainframe and open-systems environments, can afford to use this integrated service. Three major applications are explained: channel extension, Network Attached Storage (NAS), and Storage Area Networks (SAN), along with how optical networks address their specific requirements. One advantage of DWDM is the ability for protocols such as ESCON, Fibre Channel, ATM and Gigabit Ethernet to be transported natively and simultaneously across a single fiber pair, and the ability to multiplex many individual fiber pairs over a single pair, thereby reducing fiber cost and recovering fiber pairs already in use. An optical storage network enables a new class of service providers, Storage Service Providers (SSPs), aiming to deliver value to the enterprise by managing storage, backup, replication and restoration as an outsourced service.

  2. Analysis and Reduction of Complex Networks Under Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Knio, Omar M

    2014-04-09

    This is a collaborative proposal that aims at developing new methods for the analysis and reduction of complex multiscale networks under uncertainty. The approach is based on combining methods of computational singular perturbation (CSP) and probabilistic uncertainty quantification. In deterministic settings, CSP yields asymptotic approximations of reduced-dimensionality “slow manifolds” on which a multiscale dynamical system evolves. Introducing uncertainty raises fundamentally new issues, particularly concerning its impact on the topology of slow manifolds, and means to represent and quantify associated variability. To address these challenges, this project uses polynomial chaos (PC) methods to reformulate uncertain network models, and to analyze them using CSP in probabilistic terms. Specific objectives include (1) developing effective algorithms that can be used to illuminate fundamental and unexplored connections among model reduction, multiscale behavior, and uncertainty, and (2) demonstrating the performance of these algorithms through applications to model problems.
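
    The proposal couples CSP-based reduction with polynomial chaos (PC) representations of uncertainty. The sketch below shows only the PC ingredient in its simplest non-intrusive form: a Hermite-polynomial surrogate of a scalar output of a placeholder model, fitted by least-squares regression, with the mean and variance read off the PC coefficients. The forward model and truncation degree are assumptions, and none of the project's CSP machinery is represented.

```python
# Minimal non-intrusive polynomial chaos sketch with a standard normal germ.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):
    return np.exp(0.3 * xi) + 0.1 * xi**2   # placeholder forward model

deg = 4
xi = rng.standard_normal(2000)              # samples of the germ xi ~ N(0, 1)
Psi = hermevander(xi, deg)                  # probabilists' Hermite basis He_0..He_deg
coeffs, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

# For He_k with a standard normal germ: E[He_k^2] = k!
norms = np.array([math.factorial(k) for k in range(deg + 1)])
mean = coeffs[0]
var = np.sum(coeffs[1:] ** 2 * norms[1:])
print(f"PC mean ~ {mean:.4f}, PC variance ~ {var:.4f}")
```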

  3. Quantification of risk considering external events on the change of allowed outage time and the preventive maintenance during power operation

    Energy Technology Data Exchange (ETDEWEB)

    Kang, D. J.; Kim, K. Y.; Yang, J. E

    2001-03-01

    In this study, for the major safety systems of Ulchin Units 3/4, we quantify the risk of changing the AOT and of performing PM during power operation, to identify the effects on the results of the external events PSA when nuclear power plant changes such as allowed outage time are requested. The systems for which the risks of changing the allowed outage time are quantified are the High Pressure Safety Injection System (HPSIS), the Containment Spray System (CSS), and the Emergency Diesel Generator (EDG). The systems for which the risks of PM during power operation are quantified are the Low Pressure Safety Injection System (LPSIS), CSS, EDG, and the Essential Service Water System (ESWS). The following conclusions can be obtained from this study: 1) The increase of core damage frequency (ΔCDF) for the change of AOT and the conditional core damage probability (CCDP) for the on-line PM of each system are quantified differently according to whether only internal events or only external events are considered. 2) It is expected that the quantification of risk including internal and external events is advantageous for the licensee of an NPP if the regulatory acceptance criteria for technical specification changes are set up in relative terms. However, it is expected to be disadvantageous for the licensee if the acceptance criteria are set up in absolute terms. 3) It is expected that conducting the quantification of only a fire event is sufficient when quantification of the external events PSA model is required for plant changes of Korea Standard NPPs. 4) It is expected that quantification of the increase of core damage frequency and of the incremental conditional core damage probability for technical specification changes is not needed if the quantification results considering only internal events are below the regulatory acceptance criteria and the external events PSA results are not greatly affected by the system availability. However, it is expected that the quantification of risk considering external events

  4. Quantification of risk considering external events on the change of allowed outage time and the preventive maintenance during power operation

    International Nuclear Information System (INIS)

    Kang, D. J.; Kim, K. Y.; Yang, J. E.

    2001-03-01

    In this study, for the major safety systems of Ulchin Units 3/4, we quantify the risk of changing the AOT and of performing PM during power operation, to identify the effects on the results of the external events PSA when nuclear power plant changes such as allowed outage time are requested. The systems for which the risks of changing the allowed outage time are quantified are the High Pressure Safety Injection System (HPSIS), the Containment Spray System (CSS), and the Emergency Diesel Generator (EDG). The systems for which the risks of PM during power operation are quantified are the Low Pressure Safety Injection System (LPSIS), CSS, EDG, and the Essential Service Water System (ESWS). The following conclusions can be obtained from this study: 1) The increase of core damage frequency (ΔCDF) for the change of AOT and the conditional core damage probability (CCDP) for the on-line PM of each system are quantified differently according to whether only internal events or only external events are considered. 2) It is expected that the quantification of risk including internal and external events is advantageous for the licensee of an NPP if the regulatory acceptance criteria for technical specification changes are set up in relative terms. However, it is expected to be disadvantageous for the licensee if the acceptance criteria are set up in absolute terms. 3) It is expected that conducting the quantification of only a fire event is sufficient when quantification of the external events PSA model is required for plant changes of Korea Standard NPPs. 4) It is expected that quantification of the increase of core damage frequency and of the incremental conditional core damage probability for technical specification changes is not needed if the quantification results considering only internal events are below the regulatory acceptance criteria and the external events PSA results are not greatly affected by the system availability. However, it is expected that the quantification of risk considering external events on

  5. A method to quantify infectious airborne pathogens at concentrations below the threshold of quantification by culture

    Science.gov (United States)

    Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.

    2013-01-01

    In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
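
    The continuous-stirred tank reactor balance named in the abstract has a simple closed form for a constant source: dC/dt = S/V - (Q/V)C, so C(t) = (S/Q)(1 - exp(-Qt/V)), and the inhaled dose is the breathing rate times the time integral of C(t). The sketch below evaluates this under entirely hypothetical source, airflow, volume and ventilation values; it is an illustration of the model class, not the validated DAT calculation from the paper.

```python
# Well-mixed (CSTR) aerosol concentration and cumulative inhaled dose; values are hypothetical.
import numpy as np

S = 1.0e4      # source strength (infectious units released per minute)
Q = 0.5        # chamber airflow (m^3 per minute)
V = 1.2        # chamber volume (m^3)
breath = 0.008 # host minute ventilation (m^3 per minute)

t = np.linspace(0.0, 30.0, 3001)            # exposure time (minutes)
C = (S / Q) * (1.0 - np.exp(-Q * t / V))    # airborne concentration (units/m^3)

dose = breath * np.trapz(C, t)              # cumulative inhaled dose (units)
print(f"steady-state concentration: {S / Q:.0f} units/m^3")
print(f"inhaled dose after 30 min : {dose:.0f} units")
```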

  6. Cross-layer design in optical networks

    CERN Document Server

    Brandt-Pearce, Maïté; Demeester, Piet; Saradhi, Chava

    2013-01-01

    Optical networks have become an integral part of the communications infrastructure needed to support society’s demand for high-speed connectivity.  Cross-Layer Design in Optical Networks addresses topics in optical network design and analysis with a focus on physical-layer impairment awareness and network layer service requirements, essential for the implementation and management of robust scalable networks.  The cross-layer treatment includes bottom-up impacts of the physical and lambda layers, such as dispersion, noise, nonlinearity, crosstalk, dense wavelength packing, and wavelength line rates, as well as top-down approaches to handle physical-layer impairments and service requirements.

  7. FRANX. Application for analysis and quantification of the APS fire

    International Nuclear Information System (INIS)

    Sánchez, A.; Osorio, F.; Ontoso, N.

    2014-01-01

    The FRANX application has been developed by EPRI within the Risk and Reliability User Group in order to facilitate the process of quantification and updating of the fire APS (it also covers floods and earthquakes). With the application, fire scenarios are quantified at the plant, integrating the tasks performed during the fire APS. This paper describes the main features of the program that allow the quantification of a fire APS. (Author)

  8. Quantification of heterogeneity as a biomarker in tumor imaging: a systematic review.

    Directory of Open Access Journals (Sweden)

    Lejla Alic

    Full Text Available BACKGROUND: Many techniques are proposed for the quantification of tumor heterogeneity as an imaging biomarker for differentiation between tumor types, tumor grading, response monitoring and outcome prediction. However, in clinical practice these methods are barely used. This study evaluates the reported performance of the described methods and identifies barriers to their implementation in clinical practice. METHODOLOGY: The Ovid, Embase, and Cochrane Central databases were searched up to 20 September 2013. Heterogeneity analysis methods were classified into four categories, i.e., non-spatial methods (NSM), spatial grey level methods (SGLM), fractal analysis (FA) methods, and filters and transforms (F&T). The performance of the different methods was compared. PRINCIPAL FINDINGS: Of the 7351 potentially relevant publications, 209 were included. Of these studies, 58% reported the use of NSM, 49% SGLM, 10% FA, and 28% F&T. Differentiation between tumor types, tumor grading and/or outcome prediction was the goal in 87% of the studies. Overall, the reported area under the curve (AUC) ranged from 0.5 to 1 (median 0.87). No relation was found between the performance and the quantification methods used, or between the performance and the imaging modality. A negative correlation was found between the tumor-feature ratio and the AUC, which is presumably caused by overfitting in small datasets. Cross-validation was reported in 63% of the classification studies. Retrospective analyses were conducted in 57% of the studies without a clear description. CONCLUSIONS: In a research setting, heterogeneity quantification methods can differentiate between tumor types, grade tumors, and predict outcome and monitor treatment effects. To translate these methods to clinical practice, more prospective studies are required that use external datasets for validation: these datasets should be made available to the community to facilitate the development of new and improved
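
    For readers unfamiliar with the method categories, the sketch below computes one generic non-spatial metric (grey-level histogram entropy) and one spatial grey-level metric (contrast from a co-occurrence matrix) on a toy image. It is a generic illustration of the two categories, not a specific method from the review.

```python
# Toy heterogeneity metrics: histogram entropy (NSM) and GLCM contrast (SGLM).
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(64, 64))      # toy tumour ROI with 8 grey levels

# Non-spatial: Shannon entropy of the grey-level histogram
p = np.bincount(img.ravel(), minlength=8) / img.size
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))

# Spatial: grey-level co-occurrence matrix for a one-pixel offset to the right
glcm = np.zeros((8, 8))
for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
    glcm[a, b] += 1
glcm /= glcm.sum()
i, j = np.indices(glcm.shape)
contrast = np.sum(glcm * (i - j) ** 2)

print(f"histogram entropy: {entropy:.3f} bits")
print(f"GLCM contrast    : {contrast:.3f}")
```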

  9. Quantification of aortic regurgitation by magnetic resonance velocity mapping

    DEFF Research Database (Denmark)

    Søndergaard, Lise; Lindvig, K; Hildebrandt, P

    1993-01-01

    The use of magnetic resonance (MR) velocity mapping in the quantification of aortic valvular blood flow was examined in 10 patients with angiographically verified aortic regurgitation. MR velocity mapping succeeded in identifying and quantifying the regurgitation in all patients...

  10. Review of complex networks application in hydroclimatic extremes with an implementation to characterize spatio-temporal drought propagation in continental USA

    Science.gov (United States)

    Konapala, Goutam; Mishra, Ashok

    2017-12-01

    The quantification of spatio-temporal hydroclimatic extreme events is a key task in water resources planning, disaster mitigation, and preparing a climate-resilient society. However, quantification of these extreme events has always been a great challenge, which is further compounded by climate variability and change. Recently, complex network theory has been applied in the earth science community to investigate spatial connections among hydrologic fluxes (e.g., rainfall and streamflow) in the water cycle. However, there are limited applications of complex network theory for investigating hydroclimatic extreme events. This article attempts to provide an overview of complex networks and extreme events, the event synchronization method, the construction of networks, their statistical significance, and the associated network evaluation metrics. For illustration purposes, we apply the complex network approach to study the spatio-temporal evolution of droughts in the Continental USA (CONUS). A different drought threshold leads to a new drought event as well as different socio-economic implications. Therefore, it would be interesting to explore the role of thresholds on the spatio-temporal evolution of drought through network analysis. In this study, the long-term (1900-2016) Palmer drought severity index (PDSI) was selected for spatio-temporal drought analysis using three network-based metrics (i.e., strength, direction and distance). The results indicate that drought events propagate differently at different thresholds associated with the initiation of drought events. The direction metric indicated that the onset of mild drought events usually propagates in a more spatially clustered and uniform manner compared to onsets of moderate droughts. The distance metric shows that drought events propagate over longer distances in the western part compared to the eastern part of CONUS. We believe that the network-aided metrics utilized in this study can be an important tool in advancing our knowledge on drought
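
    The network links in such studies are typically weighted by event synchronization between pairs of sites. The sketch below counts synchronized drought onsets between two sites within a fixed lag window; it is a simplified, fixed-lag variant of event synchronization (the full method uses a dynamic lag), and the onset times are hypothetical.

```python
# Simplified event-synchronization count between two drought-onset series.
def event_sync(t_i, t_j, tau=3):
    """Return (total synchronized pairs, directed count i->j, directed count j->i)."""
    c_ij = c_ji = 0
    for a in t_i:
        for b in t_j:
            if 0 < b - a <= tau:
                c_ij += 1          # event at site i precedes event at site j
            elif 0 < a - b <= tau:
                c_ji += 1
            elif a == b:
                c_ij += 0.5
                c_ji += 0.5
    return c_ij + c_ji, c_ij, c_ji

onsets_i = [5, 40, 77, 120, 160]   # months with drought onset at site i (hypothetical)
onsets_j = [7, 42, 90, 121, 200]   # months with drought onset at site j (hypothetical)

total, ij, ji = event_sync(onsets_i, onsets_j)
print(f"synchronized pairs: {total}, i->j: {ij}, j->i: {ji}")
```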

  11. Quantification by aberration corrected (S)TEM of boundaries formed by symmetry breaking phase transformations

    Energy Technology Data Exchange (ETDEWEB)

    Schryvers, D., E-mail: nick.schryvers@uantwerpen.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Salje, E.K.H. [Department of Earth Sciences, University of Cambridge, Cambridge CB2 3EQ (United Kingdom); Nishida, M. [Department of Engineering Sciences for Electronics and Materials, Faculty of Engineering Sciences, Kyushu University, Kasuga, Fukuoka 816-8580 (Japan); De Backer, A. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Idrissi, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Institute of Mechanics, Materials and Civil Engineering, Université Catholique de Louvain, Place Sainte Barbe, 2, B-1348, Louvain-la-Neuve (Belgium); Van Aert, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2017-05-15

    The present contribution gives a review of recent quantification work of atom displacements, atom site occupations and level of crystallinity in various systems and based on aberration corrected HR(S)TEM images. Depending on the case studied, picometer range precisions for individual distances can be obtained, boundary widths at the unit cell level determined or statistical evolutions of fractions of the ordered areas calculated. In all of these cases, these quantitative measures imply new routes for the applications of the respective materials. - Highlights: • Quantification of picometer displacements at ferroelastic twin boundary in CaTiO3. • Quantification of kinks in meandering ferroelectric domain wall in LiNbO3. • Quantification of column occupation in anti-phase boundary in Co-Pt. • Quantification of atom displacements at twin boundary in Ni-Ti B19′ martensite.

  12. Validation of an HPLC method for quantification of total quercetin in Calendula officinalis extracts

    International Nuclear Information System (INIS)

    Muñoz Muñoz, John Alexander; Morgan Machado, Jorge Enrique; Trujillo González, Mary

    2015-01-01

    Introduction: Calendula officinalis extracts are used as natural raw material in a wide range of pharmaceutical and cosmetic preparations; however, there are no official methods for quality control of these extracts. Objective: to validate an HPLC-based analytical method for the quantification of total quercetin in glycolic and hydroalcoholic extracts of Calendula officinalis. Methods: to quantify the total quercetin content in the matrices, it was necessary to hydrolyze the flavonoid glycosides under optimal conditions. The chromatographic separation was performed on a C-18 SiliaChrom 4.6x150 mm, 5 µm column, fitted with a SiliaChrom 5 µm C-18 4.6x10 mm precolumn, with UV detection at 370 nm. Gradient elution was performed with a mobile phase consisting of methanol (MeOH) and phosphoric acid (H3PO4) (0.08 % w/v). The quantification was performed by the external standard method, by comparison with a quercetin reference standard. Results: the selectivity of the method was studied against extract components and degradation products under acid/basic hydrolysis, oxidation and light exposure conditions; no signals interfered with the quercetin quantification. It was statistically shown that the method is linear from 1.0 to 5.0 mg/mL. Intermediate precision expressed as a coefficient of variation was 1.8 and 1.74 %, and the recovery percentage was 102.15 and 101.32 %, for the glycolic and hydroalcoholic extracts, respectively. Conclusions: the suggested methodology meets the quality parameters required for quantifying total quercetin, which makes it a useful tool for quality control of C. officinalis extracts. (author)
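
    The external-standard step reduces to a linear calibration of peak area against standard concentration over the stated 1.0-5.0 mg/mL range, followed by interpolation of the sample response. The sketch below shows that arithmetic with hypothetical peak areas; it is not the validated calibration data from the study.

```python
# External-standard quantification in outline: linear calibration, then interpolation.
import numpy as np

std_conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])             # standard concentrations (mg/mL)
std_area = np.array([152.0, 301.0, 455.0, 602.0, 748.0])   # hypothetical peak areas at 370 nm

slope, intercept = np.polyfit(std_conc, std_area, 1)
r = np.corrcoef(std_conc, std_area)[0, 1]

sample_area = 388.0                                        # hypothetical hydrolyzed-extract response
sample_conc = (sample_area - intercept) / slope
print(f"calibration: area = {slope:.1f}*conc + {intercept:.1f}  (r = {r:.4f})")
print(f"total quercetin in extract: {sample_conc:.2f} mg/mL")
```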

  13. Wrist sensor-based tremor severity quantification in Parkinson's disease using convolutional neural network.

    Science.gov (United States)

    Kim, Han Byul; Lee, Woong Woo; Kim, Aryun; Lee, Hong Ji; Park, Hye Young; Jeon, Hyo Seon; Kim, Sang Kyong; Jeon, Beomseok; Park, Kwang S

    2018-04-01

    Tremor is a commonly observed symptom in patients with Parkinson's disease (PD), and accurate measurement of tremor severity is essential in prescribing appropriate treatment to relieve its symptoms. We propose a tremor assessment system based on the use of a convolutional neural network (CNN) to differentiate the severity of symptoms as measured in data collected from a wearable device. Tremor signals were recorded from 92 PD patients using a custom-developed device (SNUMAP) equipped with an accelerometer and gyroscope mounted on a wrist module. Neurologists assessed the tremor symptoms on the Unified Parkinson's Disease Rating Scale (UPDRS) from simultaneously recorded video footage. The measured data were transformed into the frequency domain and used to construct a two-dimensional image for training the network, and the CNN model was trained by convolving tremor signal images with kernels. The proposed CNN architecture was compared to previously studied machine learning algorithms and found to outperform them (accuracy = 0.85, linear weighted kappa = 0.85). More precise monitoring of PD tremor symptoms in daily life could be possible using our proposed method. Copyright © 2018 Elsevier Ltd. All rights reserved.
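
    The general structure of such a classifier can be sketched as below: a small 2-D CNN over frequency-domain "images" (one channel each for accelerometer- and gyroscope-derived spectrograms) with five output classes for UPDRS tremor scores 0-4. The choice of PyTorch, the input shape and the layer sizes are assumptions; the paper's exact architecture is not reproduced here.

```python
# Minimal CNN sketch for severity classification of 2-channel spectrogram images.
import torch
import torch.nn as nn

class TremorCNN(nn.Module):
    def __init__(self, n_classes: int = 5):          # UPDRS tremor scores 0-4
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):                             # x: (batch, 2, 64, 64)
        return self.classifier(self.features(x))

model = TremorCNN()
dummy = torch.randn(8, 2, 64, 64)                     # a batch of spectrogram images
print(model(dummy).shape)                             # torch.Size([8, 5])
```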

  14. A User Driven Dynamic Circuit Network Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Guok, Chin; Robertson, David; Chaniotakis, Evangelos; Thompson, Mary; Johnston, William; Tierney, Brian

    2008-10-01

    The requirements for network predictability are becoming increasingly critical to the DoE science community where resources are widely distributed and collaborations are world-wide. To accommodate these emerging requirements, the Energy Sciences Network has established a Science Data Network to provide user driven guaranteed bandwidth allocations. In this paper we outline the design, implementation, and secure coordinated use of such a network, as well as some lessons learned.

  15. The use of self-quantification systems for personal health information: big data management activities and prospects.

    Science.gov (United States)

    Almalki, Manal; Gray, Kathleen; Sanchez, Fernando Martin

    2015-01-01

    Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). PHI-SQS model describes two types of activities that individuals go through during their journey of health self-managed practice, which are 'self-quantification' and 'self-activation'. In this paper, we aimed to examine thoroughly the first type of activity in PHI-SQS which is 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife), that collect three key health data types (Environmental exposure, Physiological patterns, Genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Self-quantification in personal health maintenance

  16. Clinical applications of MS-based protein quantification.

    Science.gov (United States)

    Sabbagh, Bassel; Mindt, Sonani; Neumaier, Michael; Findeisen, Peter

    2016-04-01

    Mass spectrometry-based assays are increasingly important in clinical laboratory medicine and nowadays are already commonly used in several areas of routine diagnostics. These include therapeutic drug monitoring, toxicology, endocrinology, pediatrics, and microbiology. Accordingly, some of the most common analyses are therapeutic drug monitoring of immunosuppressants, vitamin D, steroids, newborn screening, and bacterial identification. However, MS-based quantification of peptides and proteins for routine diagnostic use is rather rare up to now despite excellent analytical specificity and good sensitivity. Here, we want to give an overview over current fit-for-purpose assays for MS-based protein quantification. Advantages as well as challenges of this approach will be discussed with focus on feasibility for routine diagnostic use. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Requirements of the integration of renewable energy into network charge regulation. Proposals for the further development of the network charge system. Final report; Anforderungen der Integration der erneuerbaren Energien an die Netzentgeltregulierung. Vorschlaege zur Weiterentwicklung des Netzentgeltsystems. Endbericht

    Energy Technology Data Exchange (ETDEWEB)

    Friedrichsen, Nele; Klobasa, Marian; Marwitz, Simon [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Hilpert, Johannes; Sailer, Frank [Stiftung Umweltenergierecht, Wuerzburg (Germany)

    2016-11-15

    In this project we analyzed options to advance the network tariff system to support the German energy transition. A power system with high shares of renewables requires more flexibility of supply and demand than the traditional system based on centralized, fossil power plants. Further, the power networks need to be adjusted and expanded. The transformation should aim at system efficiency, i.e., look at both generation and network development. Network tariffs allocate the network cost to network users. They should also provide incentives, e.g. to reduce peak load in periods of network congestion. Inappropriate network tariffs can hinder the provision of flexibility and thereby become a barrier to the system integration of renewables. Against this background, this report presents a systematic review of the German network tariff system and a discussion of several options to adapt the network tariff system in order to support the energy transition. The following aspects are analyzed: an adjustment of the privileges for industrial users, to increase potential network benefits and reduce barriers to more market-oriented behaviour; the payments for avoided network charges to distributed generation, which no longer reflect cost reality in distribution networks; uniform transmission network tariffs as an option for a more appropriate allocation of the costs associated with the energy transition; increased standing fees in low voltage networks as an option to increase the contribution of users with self-generation to network financing; and generator tariffs, to allocate a share of network cost to generators and provide incentives for network-oriented location choice and/or feed-in.

  18. Quantitative Nuclear Medicine Imaging: Concepts, Requirements and Methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-01-15

    more consistent compensation for physical effects and imaging system limitations. On these grounds, quantitative imaging is now a broad field of work for the scientific community, and its current translation to the clinical environment can be undertaken with confidence, for better and more accurate diagnostic and therapeutic applications using consistent and well validated protocols. This publication complements previous efforts of the IAEA related to activity measurement and quantification. The quantitative measurement of tissues and other biological samples is addressed in Technical Reports Series No. 454. The quality control requirements of current PET and SPECT imaging equipment are addressed in IAEA Human Health Series No. 1 and No. 6, respectively. This report does not cover the fields addressed by these publications.

  19. A Conserved Circular Network of Coregulated Lipids Modulates Innate Immune Responses.

    Science.gov (United States)

    Köberlin, Marielle S; Snijder, Berend; Heinz, Leonhard X; Baumann, Christoph L; Fauster, Astrid; Vladimer, Gregory I; Gavin, Anne-Claude; Superti-Furga, Giulio

    2015-07-02

    Lipid composition affects the biophysical properties of membranes that provide a platform for receptor-mediated cellular signaling. To study the regulatory role of membrane lipid composition, we combined genetic perturbations of sphingolipid metabolism with the quantification of diverse steps in Toll-like receptor (TLR) signaling and mass spectrometry-based lipidomics. Membrane lipid composition was broadly affected by these perturbations, revealing a circular network of coregulated sphingolipids and glycerophospholipids. This evolutionarily conserved network architecture simultaneously reflected membrane lipid metabolism, subcellular localization, and adaptation mechanisms. Integration of the diverse TLR-induced inflammatory phenotypes with changes in lipid abundance assigned distinct functional roles to individual lipid species organized across the network. This functional annotation accurately predicted the inflammatory response of cells derived from patients suffering from lipid storage disorders, based solely on their altered membrane lipid composition. The analytical strategy described here empowers the understanding of higher-level organization of membrane lipid function in diverse biological systems. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking

    International Nuclear Information System (INIS)

    Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R.; Welch, L.

    1990-05-01

    This paper discusses: the state of computer networking within nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office

  1. Managerial Challenges Within Networks - Emphasizing the Paradox of Network Participation

    DEFF Research Database (Denmark)

    Jakobsen, Morten

    2003-01-01

    Flexibility and access to numerous resources are essential benefits associated with network participation. An important aspect of managing the network participation of a company is to maintain a dynamic portfolio of partners, and thereby keep up the strategic opportunities for development. However, maintaining the dynamics within a network seems to be a complex challenge. There is a risk that the network ends up in The Paradox of Network Participation. The desired renewal and flexibility are not utilised because the involved parties preserve the existing network structure consisting of the same ... and thereby sort out the paradox of network participation. Trust and information are mechanisms employed to absorb uncertainty. The relationship between trust and the requirement for information depends on the maturity of the relationship. When trust becomes too important as an uncertainty absorption mechanism ...

  2. Managerial challenges within networks: emphasizing the paradox of network participation

    DEFF Research Database (Denmark)

    Jakobsen, Morten

    Flexibility and access to numerous resources are essential benefits associated with network participation. An important aspect of managing the network participation of a company is to maintain a dynamic portfolio of partners, and thereby keep up the strategic opportunities for development. However, maintaining the dynamics within a network seems to be a complex challenge. There is a risk that the network ends up in The Paradox of Network Participation. The desired renewal and flexibility are not utilised because the involved parties preserve the existing network structure consisting of the same ... and thereby sort out the paradox of network participation. Trust and information are mechanisms employed to absorb uncertainty. The relationship between trust and the requirement for information depends on the maturity of the relationship. When trust becomes too important as an uncertainty absorption mechanism ...

  3. Complex Empiricism and the Quantification of Uncertainty in Paleoclimate Reconstructions

    Science.gov (United States)

    Brumble, K. C.

    2014-12-01

    Because the global climate cannot be observed directly, and because of vast and noisy data sets, climate science is a rich field to study how computational statistics informs what it means to do empirical science. Traditionally held virtues of empirical science and empirical methods like reproducibility, independence, and straightforward observation are complicated by representational choices involved in statistical modeling and data handling. Examining how climate reconstructions instantiate complicated empirical relationships between model, data, and predictions reveals that the path from data to prediction does not match traditional conceptions of empirical inference either. Rather, the empirical inferences involved are "complex" in that they require articulation of a good deal of statistical processing wherein assumptions are adopted and representational decisions made, often in the face of substantial uncertainties. Proxy reconstructions are both statistical and paleoclimate science activities aimed at using a variety of proxies to reconstruct past climate behavior. Paleoclimate proxy reconstructions also involve complex data handling and statistical refinement, leading to the current emphasis in the field on the quantification of uncertainty in reconstructions. In this presentation I explore how the processing needed for the correlation of diverse, large, and messy data sets necessitate the explicit quantification of the uncertainties stemming from wrangling proxies into manageable suites. I also address how semi-empirical pseudo-proxy methods allow for the exploration of signal detection in data sets, and as intermediary steps for statistical experimentation.

  4. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  5. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  6. Quantification Model for Estimating Temperature Field Distributions of Apple Fruit

    OpenAIRE

    Zhang, Min; Yang, Le; Zhao, Huizhong; Zhang, Leijie; Zhong, Zhiyou; Liu, Yanling; Chen, Jianhua

    2009-01-01

    A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation of apple fruit at different points. It took into account the heat exchange of a representative elemental volume, metabolic heat and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature dis...
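
    The energy balance described in the abstract can be illustrated with a much-simplified one-dimensional explicit finite-difference scheme that includes a volumetric metabolic heat source. The geometry, property values and boundary treatment below are rough stand-ins, not the paper's model or data.

```python
# 1-D explicit finite-difference sketch of transient conduction with metabolic heat.
import numpy as np

L, nx = 0.08, 41                 # fruit "thickness" (m) and number of grid points
alpha = 1.4e-7                   # thermal diffusivity (m^2/s), illustrative
rho_c = 3.6e6                    # volumetric heat capacity (J/m^3/K), illustrative
q_met = 20.0                     # metabolic heat generation (W/m^3), illustrative
T_air = 2.0                      # cooling-air temperature (deg C)

dx = L / (nx - 1)
dt = 0.4 * dx * dx / alpha       # satisfies the explicit stability limit dt < dx^2/(2*alpha)
T = np.full(nx, 20.0)            # initial fruit temperature (deg C)

for _ in range(int(3600 / dt)):  # simulate one hour of cooling
    T_new = T.copy()
    T_new[1:-1] = (T[1:-1]
                   + alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
                   + q_met * dt / rho_c)
    T_new[0] = T_new[-1] = T_air  # surfaces held at the air temperature
    T = T_new

print(f"centre temperature after 1 h: {T[nx // 2]:.2f} deg C")
```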

  7. Network information provision to potential generators: Appendices

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-07-01

    This Code of Practice (CoP) has been prepared to outline the standard of information that Distribution Network Operators (DNOs) should be required to produce in relation to the provision of network maps, schematic diagrams and specific network data. Network information from DNOs may be required by generators (and other customers) in order to assess the potential opportunities available for the connection of new generation plant. Seven Year Statements are published annually by the Transmission Licensees operating in Great Britain, i.e. The National Grid Company, Scottish Power and Scottish Hydro Electric, and contain all the network information relating to each transmission system, e.g. Generation Capacities, System Parameters and Plant Fault Levels. A similar arrangement for DNOs has been outlined in the Electricity Distribution Licence published by Ofgem. Under Condition 25 of the licence, 'The Long Term Development Statement', distribution licence holders are required to make available historic and planned network data. By providing sufficient network information, competition in generation will be improved. At the time of writing, any party interested in assessing distribution network information needs to make contact with the appropriate DNO, identifying the correct department and person. Written applications are then sent to that person, describing the type of network information that is required. Information required from embedded generators by DNOs is specified in detail in both of The Distribution Codes of England and Wales, and Scotland. However, there are no guidelines or details of network information to be provided by DNOs. This Code of Practice is designed to balance this situation and help DNOs, prospective generators and other applicants for information to achieve satisfaction by clarifying expectations. (Author)

  8. An Analysis for the Use of Research and Education Networks and Commercial Network Vendors in Support of Space Based Mission Critical and Non-Critical Networking

    Science.gov (United States)

    Bradford, Robert N.

    2002-01-01

    Currently, and in the past, dedicated communication circuits and "network services" with very stringent performance requirements are being used to support manned and unmanned mission critical ground operations at GSFC, JSC, MSFC, KSC and other NASA facilities. Because of the evolution of network technology, it is time to investigate using other approaches to providing mission services for space ground operations. The current NASA approach is not in keeping with the evolution of network technologies. In the past decade various research and education networks dedicated to scientific and educational endeavors have emerged, as well as commercial networking providers, that employ advanced networking technologies. These technologies have significantly changed networking in recent years. Significant advances in network routing techniques, various topologies and equipment have made commercial networks very stable and virtually error free. Advances in Dense Wave Division Multiplexing will provide tremendous amounts of bandwidth for the future. The question is: Do these networks, which are controlled and managed centrally, provide a level of service that equals the stringent NASA performance requirements? If they do, what are the implication(s) of using them for critical space based ground operations as they are, without adding high cost contractual performance requirements? A second question is the feasibility of applying the emerging grid technology in space operations. Is it feasible to develop a Space Operations Grid and/or a Space Science Grid? Since these networks' connectivity is substantial, both nationally and internationally, development of these sorts of grids may be feasible. The concept of research and education networks has evolved to the international community as well. Currently there are international RENs connecting the US in Chicago to and from Europe, South America, Asia and the Pacific rim, Russia and Canada. And most countries in these areas have their

  9. Initial water quantification results using neutron computed tomography

    Science.gov (United States)

    Heller, A. K.; Shi, L.; Brenizer, J. S.; Mench, M. M.

    2009-06-01

    Neutron computed tomography is an important imaging tool in the field of non-destructive testing and in fundamental research for many engineering applications. Contrary to X-rays, neutrons can be attenuated by some light materials, such as hydrogen, but can penetrate many heavy materials. Thus, neutron computed tomography is useful in obtaining important three-dimensional information about a sample's interior structure and material properties that other traditional methods cannot provide. The neutron computed tomography system at the Pennsylvania State University's Radiation Science and Engineering Center is being utilized to develop a water quantification technique for investigation of water distribution in fuel cells under normal conditions. A hollow aluminum cylinder test sample filled with a known volume of water was constructed for purposes of testing the quantification technique. Transmission images of the test sample at different angles were easily acquired through the synthesis of a dedicated image acquisition computer driving a rotary table controller and an in-house developed synchronization software package. After data acquisition, Octopus (version 8.2) and VGStudio Max (version 1.2) were used to perform cross-sectional and three-dimensional reconstructions of the sample, respectively. The initial reconstructions and water quantification results are presented.
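
    At its simplest, quantification of water from a neutron transmission image follows the Beer-Lambert law, I = I0 exp(-mu t): the water thickness along a ray is t = -ln(I/I0)/mu, and summing thickness over pixels yields a volume. The sketch below applies this relation to simulated images; the attenuation coefficient and pixel size are illustrative values, not calibration constants from the reported system.

```python
# Beer-Lambert water quantification sketch on simulated transmission images.
import numpy as np

mu_water = 3.5          # effective attenuation coefficient of water (1/cm), illustrative
pixel_area = 0.01**2    # pixel area (cm^2), i.e. 100 um pixels, illustrative

rng = np.random.default_rng(0)
I0 = np.full((50, 50), 1000.0)                 # dry/open-beam reference image
thickness_true = 0.05 * rng.random((50, 50))   # true water thickness per pixel (cm)
I = I0 * np.exp(-mu_water * thickness_true)    # simulated wet image

thickness = -np.log(I / I0) / mu_water         # recovered thickness map (cm)
volume_cm3 = float(np.sum(thickness) * pixel_area)
print(f"estimated water volume: {volume_cm3 * 1000:.2f} microliters")
```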

  10. Satellite ATM Networks: Architectures and Guidelines Developed

    Science.gov (United States)

    vonDeak, Thomas C.; Yegendu, Ferit

    1999-01-01

    An important element of satellite-supported asynchronous transfer mode (ATM) networking will involve support for the routing and rerouting of active connections. Work published under the auspices of the Telecommunications Industry Association (http://www.tiaonline.org), describes basic architectures and routing protocol issues for satellite ATM (SATATM) networks. The architectures and issues identified will serve as a basis for further development of technical specifications for these SATATM networks. Three ATM network architectures for bent pipe satellites and three ATM network architectures for satellites with onboard ATM switches were developed. The architectures differ from one another in terms of required level of mobility, supported data rates, supported terrestrial interfaces, and onboard processing and switching requirements. The documentation addresses low-, middle-, and geosynchronous-Earth-orbit satellite configurations. The satellite environment may require real-time routing to support the mobility of end devices and nodes of the ATM network itself. This requires the network to be able to reroute active circuits in real time. In addition to supporting mobility, rerouting can also be used to (1) optimize network routing, (2) respond to changing quality-of-service requirements, and (3) provide a fault tolerance mechanism. Traffic management and control functions are necessary in ATM to ensure that the quality-of-service requirements associated with each connection are not violated and also to provide flow and congestion control functions. Functions related to traffic management were identified and described. Most of these traffic management functions will be supported by on-ground ATM switches, but in a hybrid terrestrial-satellite ATM network, some of the traffic management functions may have to be supported by the onboard satellite ATM switch. Future work is planned to examine the tradeoffs of placing traffic management functions onboard a satellite as

  11. Temperature dependence of postmortem MR quantification for soft tissue discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Zech, Wolf-Dieter; Schwendener, Nicole; Jackowski, Christian [University of Bern, From the Institute of Forensic Medicine, Bern (Switzerland); Persson, Anders; Warntjes, Marcel J. [University of Linkoeping, The Center for Medical Image Science and Visualization (CMIV), Linkoeping (Sweden)

    2015-08-15

    To investigate and correct the temperature dependence of postmortem MR quantification used for soft tissue characterization and differentiation in thoraco-abdominal organs. Thirty-five postmortem short axis cardiac 3-T MR examinations were quantified using a quantification sequence. Liver, spleen, left ventricular myocardium, pectoralis muscle and subcutaneous fat were analysed in cardiac short axis images to obtain mean T1, T2 and PD tissue values. The core body temperature was measured using a rectally inserted thermometer. The tissue-specific quantitative values were related to the body core temperature. Equations to correct for temperature differences were generated. In a 3D plot comprising the combined data of T1, T2 and PD, different organs/tissues could be well differentiated from each other. The quantitative values were influenced by the temperature. T1 in particular exhibited strong temperature dependence. The correction of quantitative values to a temperature of 37 °C resulted in better tissue discrimination. Postmortem MR quantification is feasible for soft tissue discrimination and characterization of thoraco-abdominal organs. This provides a base for computer-aided diagnosis and detection of tissue lesions. The temperature dependence of the T1 values challenges postmortem MR quantification. Equations to correct for the temperature dependence are provided. (orig.)

  12. Temperature dependence of postmortem MR quantification for soft tissue discrimination

    International Nuclear Information System (INIS)

    Zech, Wolf-Dieter; Schwendener, Nicole; Jackowski, Christian; Persson, Anders; Warntjes, Marcel J.

    2015-01-01

    To investigate and correct the temperature dependence of postmortem MR quantification used for soft tissue characterization and differentiation in thoraco-abdominal organs. Thirty-five postmortem short axis cardiac 3-T MR examinations were quantified using a quantification sequence. Liver, spleen, left ventricular myocardium, pectoralis muscle and subcutaneous fat were analysed in cardiac short axis images to obtain mean T1, T2 and PD tissue values. The core body temperature was measured using a rectally inserted thermometer. The tissue-specific quantitative values were related to the body core temperature. Equations to correct for temperature differences were generated. In a 3D plot comprising the combined data of T1, T2 and PD, different organs/tissues could be well differentiated from each other. The quantitative values were influenced by the temperature. T1 in particular exhibited strong temperature dependence. The correction of quantitative values to a temperature of 37 °C resulted in better tissue discrimination. Postmortem MR quantification is feasible for soft tissue discrimination and characterization of thoraco-abdominal organs. This provides a base for computer-aided diagnosis and detection of tissue lesions. The temperature dependence of the T1 values challenges postmortem MR quantification. Equations to correct for the temperature dependence are provided. (orig.)
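
    The published correction equations are not reproduced here; the sketch below only shows the general shape of such a correction: fit a linear temperature dependence for one quantitative value (T1 of one tissue) by regression, then shift measurements to their expected value at 37 °C. All numbers are hypothetical.

```python
# Generic linear temperature correction for a quantitative MR value (hypothetical data).
import numpy as np

temp_c = np.array([5.0, 10.0, 15.0, 20.0, 25.0])         # measured core temperature (deg C)
t1_ms  = np.array([720.0, 780.0, 845.0, 905.0, 970.0])   # measured myocardial T1 (ms)

slope, intercept = np.polyfit(temp_c, t1_ms, 1)           # T1 = slope * T + intercept

def t1_at_37(t1_measured, temperature_c):
    """Shift a measured T1 to its expected value at 37 deg C using the fitted slope."""
    return t1_measured + slope * (37.0 - temperature_c)

print(f"fitted temperature coefficient: {slope:.1f} ms per deg C")
print(f"T1 of 800 ms measured at 12 deg C -> {t1_at_37(800.0, 12.0):.0f} ms at 37 deg C")
```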

  13. Quantification is Neither Necessary Nor Sufficient for Measurement

    International Nuclear Information System (INIS)

    Mari, Luca; Maul, Andrew; Torres Irribarra, David; Wilson, Mark

    2013-01-01

    Being an infrastructural, widespread activity, measurement is laden with stereotypes. Some of these concern the role of measurement in the relation between quality and quantity. In particular, it is sometimes argued or assumed that quantification is necessary for measurement; it is also sometimes argued or assumed that quantification is sufficient for or synonymous with measurement. To assess the validity of these positions the concepts of measurement and quantitative evaluation should be independently defined and their relationship analyzed. We contend that the defining characteristic of measurement should be the structure of the process, not a feature of its results. Under this perspective, quantitative evaluation is neither sufficient nor necessary for measurement

  14. Detection and quantification of proteins and cells by use of elemental mass spectrometry: progress and challenges.

    Science.gov (United States)

    Yan, Xiaowen; Yang, Limin; Wang, Qiuquan

    2013-07-01

    Much progress has been made in identification of the proteins in proteomes, and quantification of these proteins has attracted much interest. In addition to popular tandem mass spectrometric methods based on soft ionization, inductively coupled plasma mass spectrometry (ICPMS), a typical example of mass spectrometry based on hard ionization, usually used for analysis of elements, has unique advantages in absolute quantification of proteins by determination of an element with a definite stoichiometry in a protein or attached to the protein. In this Trends article, we briefly describe state-of-the-art ICPMS-based methods for quantification of proteins, emphasizing protein-labeling and element-tagging strategies developed on the basis of chemically selective reactions and/or biospecific interactions. Recent progress from protein to cell quantification by use of ICPMS is also discussed, and the possibilities and challenges of ICPMS-based protein quantification for universal, selective, or targeted quantification of proteins and cells in a biological sample are also discussed critically. We believe ICPMS-based protein quantification will become ever more important in targeted quantitative proteomics and bioanalysis in the near future.
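
    The core arithmetic behind element-based absolute protein quantification is a stoichiometric conversion: if an element (e.g., sulfur from Cys/Met residues, or a metal tag) occurs in the protein with a known number of atoms, the ICP-MS element concentration converts directly into a protein concentration. The sketch below shows that conversion with illustrative numbers that are not taken from the article.

```python
# Stoichiometric conversion from element mass concentration to protein molarity.
def protein_conc_nM(element_ng_per_mL, atoms_per_protein, element_molar_mass_g_mol):
    """Convert an element mass concentration (ng/mL) into protein molarity (nM)."""
    element_nM = element_ng_per_mL * 1000.0 / element_molar_mass_g_mol  # ng/mL -> nmol/L
    return element_nM / atoms_per_protein

# Example: 12.8 ng/mL of sulfur measured for a protein assumed to carry 8 S atoms.
print(f"{protein_conc_nM(12.8, 8, 32.06):.1f} nM")
```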

  15. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.
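
    The core of the method is a least-squares fit of an analytical spectrum model to the measured spectrum. The sketch below illustrates that general idea with scipy and a toy two-peak model; the model form, parameters and synthetic data are placeholders, not POEMA's physical model.

```python
# Minimal sketch of quantification by parameter optimization: fit an
# analytical spectrum model to measured counts by least squares. The
# two-Gaussian-plus-linear-background model is a toy stand-in, NOT the
# physical model implemented in POEMA.
import numpy as np
from scipy.optimize import least_squares

energy = np.linspace(0.5, 10.0, 500)                      # keV channels

def model(p, e):
    a1, mu1, s1, a2, mu2, s2, b0, b1 = p
    return (a1 * np.exp(-0.5 * ((e - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((e - mu2) / s2) ** 2)
            + b0 + b1 * e)

rng = np.random.default_rng(0)
true_p = [500, 1.74, 0.07, 300, 6.40, 0.10, 20, -1.0]     # synthetic "experiment"
measured = rng.poisson(np.clip(model(true_p, energy), 0, None))

def residuals(p):
    return model(p, energy) - measured

fit = least_squares(residuals, x0=[400, 1.7, 0.1, 200, 6.5, 0.15, 10, 0.0])
print("optimized parameters:", np.round(fit.x, 3))
```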

  16. Design requirements of communication architecture of SMART safety system

    International Nuclear Information System (INIS)

    Park, H. Y.; Kim, D. H.; Sin, Y. C.; Lee, J. Y.

    2001-01-01

    To develop the communication network architecture of the safety system of SMART, evaluation elements for reliability and performance factors were extracted from commercial networks and classified by importance into required levels. Predictable determinism, a static and fixed architecture, separation and isolation from other systems, high reliability, and verification and validation are introduced as the essential requirements of the safety system communication network. Based on the suggested requirements, optical cable, star topology, synchronous transmission, point-to-point physical links, connection-oriented logical links, and MAC (medium access control) with fixed allocation were selected as the design elements. The proposed architecture will be applied as the basic communication network architecture of the SMART safety system

  17. COMPLEX NETWORK SIMULATION OF FOREST NETWORK SPATIAL PATTERN IN PEARL RIVER DELTA

    Directory of Open Access Journals (Sweden)

    Y. Zeng

    2017-09-01

    Full Text Available Forest network construction uses a method and model with the scale-free features of complex network theory, based on random graph theory and dynamic network nodes that show a power-law distribution. The model is suitable for the consistent recovery of the Pearl River Delta, a large ecological landscape subject to ecological disturbance. The latest forest patches are available through remote sensing and GIS spatial data. A standard scale-free network node distribution model calculates the power-law distribution parameter for forest patch areas; the existing forest polygons, defined as nodes, are used to compute the decay exponent of the network's degree distribution. The parameters of the forest network are then extracted and transferred to real-world GIS models, and connections are generated automatically between nearby nodes by minimizing the cost of ecological corridors under a least-cost rule. Based on the scale-free node distribution requirements, a comparatively small number of large aggregation points are selected as the main nodes of the future forest planning network and compared with the existing node sequence. With this approach, forest ecological projects avoid the fragmented and scattered patterns of the past, and the planting costs required by the previous regular forest networks can be reduced. For ecological restoration in the tropical and subtropical areas of south China, it provides an effective method for guiding and demonstrating forest-entering-city projects and, together with other ecological networks (water, climate, etc., for establishing a networking standard and base datum.
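
    As an illustration of the scale-free modelling step described above, the sketch below generates a scale-free graph with networkx and estimates the power-law exponent of its degree distribution by a simple log-log fit; the Barabási–Albert generator and all parameter values are stand-ins, not the authors' GIS-based workflow.

```python
# Minimal sketch of the scale-free modelling step: build a scale-free graph
# (a Barabasi-Albert graph as a stand-in for the forest-patch network) and
# estimate the power-law exponent of its degree distribution with a crude
# log-log fit. Illustrative only, not the authors' GIS workflow.
import numpy as np
import networkx as nx

g = nx.barabasi_albert_graph(n=2000, m=2, seed=42)   # nodes ~ forest patches
degrees = np.array([d for _, d in g.degree()])

# Empirical degree distribution P(k)
values, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()

# Fit P(k) ~ k^(-gamma) on a log-log scale
mask = values >= 2
slope, intercept = np.polyfit(np.log(values[mask]), np.log(pk[mask]), 1)
print(f"estimated power-law exponent gamma ~ {-slope:.2f}")
```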

  18. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    Science.gov (United States)

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  19. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  20. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
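
    As a minimal illustration of the simplest screening approach mentioned above (OAT, one at a time), the sketch below perturbs each parameter of a toy model around its nominal value and records the effect on the output; the model function and parameter values are placeholders, not any specific environmental model.

```python
# Minimal sketch of one-at-a-time (OAT) screening, one of the simple
# sensitivity methods listed above. The model function is a toy stand-in
# for an expensive environmental model run.
import numpy as np

def model(params):
    # toy placeholder for an environmental model
    k1, k2, k3 = params
    return k1 ** 2 + 0.5 * k2 + 0.01 * k3

nominal = np.array([1.0, 2.0, 3.0])
perturbation = 0.10                      # perturb each parameter by +10 %
base = model(nominal)

for i, name in enumerate(["k1", "k2", "k3"]):
    p = nominal.copy()
    p[i] *= 1.0 + perturbation
    effect = model(p) - base             # OAT effect of this parameter
    print(f"{name}: OAT effect = {effect:+.4f}")
```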

  1. 1H NMR quantification in very dilute toxin solutions: application to anatoxin-a analysis.

    Science.gov (United States)

    Dagnino, Denise; Schripsema, Jan

    2005-08-01

    A complete procedure is described for the extraction, detection and quantification of anatoxin-a in biological samples. Anatoxin-a is extracted from biomass by a routine acid base extraction. The extract is analysed by GC-MS, without the need of derivatization, with a detection limit of 0.5 ng. A method was developed for the accurate quantification of anatoxin-a in the standard solution to be used for the calibration of the GC analysis. 1H NMR allowed the accurate quantification of microgram quantities of anatoxin-a. The accurate quantification of compounds in standard solutions is rarely discussed, but for compounds like anatoxin-a (toxins with prices in the range of a million dollar a gram), of which generally only milligram quantities or less are available, this factor in the quantitative analysis is certainly not trivial. The method that was developed can easily be adapted for the accurate quantification of other toxins in very dilute solutions.

  2. MPLS for metropolitan area networks

    CERN Document Server

    Tan, Nam-Kee

    2004-01-01

    METROPOLITAN AREA NETWORKS AND MPLS: Requirements of Metropolitan Area Network Services; Metropolitan Area Network Overview; The Bandwidth Demand; The Metro Service Provider's Business Approaches; The Emerging Metro Customer Expectations and Needs; Some Prevailing Metro Service Opportunities; Service Aspects and Requirements; Roles of MPLS in Metropolitan Area Networks; MPLS Primer; MPLS Applications. TRAFFIC ENGINEERING ASPECTS OF METROPOLITAN AREA NETWORKS: Traffic Engineering Concepts; Network Congestion; Hyper Aggregation Problem; Easing Congestion; Network Control; Tactical versus Strategic Traffic Engineering; IP/ATM Overl

  3. Better sales networks.

    Science.gov (United States)

    Ustüner, Tuba; Godes, David

    2006-01-01

    Anyone in sales will tell you that social networks are critical. The more contacts you have, the more leads you'll generate, and, ultimately, the more sales you'll make. But that's a vast oversimplification. Different configurations of networks produce different results, and the salesperson who develops a nuanced understanding of social networks will outshine competitors. The salesperson's job changes over the course of the selling process. Different abilities are required in each stage of the sale: identifying prospects, gaining buy-in from potential customers, creating solutions, and closing the deal. Success in the first stage, for instance, depends on the salesperson acquiring precise and timely information about opportunities from contacts in the marketplace. Closing the deal requires the salesperson to mobilize contacts from prior sales to act as references. Managers often view sales networks only in terms of direct contacts. But someone who knows lots of people doesn't necessarily have an effective network because networks often pay off most handsomely through indirect contacts. Moreover, the density of the connections in a network is important. Do a salesperson's contacts know all the same people, or are their associates widely dispersed? Sparse networks are better, for example, at generating unique information. Managers can use three levers--sales force structure, compensation, and skills development--to encourage salespeople to adopt a network-based view and make the best possible use of social webs. For example, the sales force can be restructured to decouple lead generation from other tasks because some people are very good at building diverse ties but not so good at maintaining other kinds of networks. Companies that take steps of this kind to help their sales teams build better networks will reap tremendous advantages.

  4. Shared protection based virtual network mapping in space division multiplexing optical networks

    Science.gov (United States)

    Zhang, Huibin; Wang, Wei; Zhao, Yongli; Zhang, Jie

    2018-05-01

    Space Division Multiplexing (SDM) has been introduced to improve the capacity of optical networks. In SDM optical networks, there are multiple cores/modes in each fiber link, and spectrum resources are multiplexed in both the frequency and the core/mode dimensions. Enabled by network virtualization technology, one SDM optical network substrate can be shared by several virtual network operators. As with point-to-point connection services, virtual networks (VNs) also need a certain level of survivability to guard against network failures. Based on customers' heterogeneous requirements for the survivability of their virtual networks, this paper studies the shared protection based VN mapping problem and proposes a Minimum Free Frequency Slots (MFFS) mapping algorithm to improve spectrum efficiency. Simulation results show that the proposed algorithm can optimize SDM optical networks significantly in terms of blocking probability and spectrum utilization.

  5. Friendly network robotics; Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. This research assumes an android robot as an ultimate robot and a future robot system utilizing computer network technology. A robot intended to take over human daily work activities in factories or under extreme environments is required to work in usual human work environments, so a humanoid robot with size, shape and functions similar to those of a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with human beings, can walk on two legs by autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with human beings, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R{sup 3}) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  6. In-network adaptation of SHVC video in software-defined networks

    Science.gov (United States)

    Awobuluyi, Olatunde; Nightingale, James; Wang, Qi; Alcaraz Calero, Jose Maria; Grecos, Christos

    2016-04-01

    Software Defined Networks (SDNs), when combined with Network Function Virtualization (NFV), represent a paradigm shift in how future networks will behave and be managed. SDNs are expected to provide the underpinning technologies for future innovations such as 5G mobile networks and the Internet of Everything. The SDN architecture offers features that facilitate an abstracted and centralized global network view in which packet forwarding or dropping decisions are based on application flows. Software Defined Networks facilitate a wide range of network management tasks, including the adaptation of real-time video streams as they traverse the network. SHVC, the scalable extension to the recent H.265 standard, is a new video encoding standard that supports ultra-high definition video streams with spatial resolutions of up to 7680×4320 and frame rates of 60 fps or more. The massive increase in bandwidth required to deliver these U-HD video streams dwarfs the bandwidth requirements of current high definition (HD) video. Such large bandwidth increases pose very significant challenges for network operators. In this paper we go substantially beyond the limited number of existing implementations and proposals for video streaming in SDNs, all of which have primarily focused on traffic engineering solutions such as load balancing. By implementing and empirically evaluating an SDN-enabled Media Adaptation Network Entity (MANE) we provide a valuable empirical insight into the benefits and limitations of SDN-enabled video adaptation for real-time video applications. The SDN-MANE is the video adaptation component of our Video Quality Assurance Manager (VQAM) SDN control plane application, which also includes an SDN monitoring component to acquire network metrics and a decision-making engine using algorithms to determine the optimum adaptation strategy for any real-time video application flow given the current network conditions. Our proposed VQAM application has been implemented and

  7. Requirement analysis and architecture of data communication system for integral reactor

    International Nuclear Information System (INIS)

    Jeong, K. I.; Kwon, H. J.; Park, J. H.; Park, H. Y.; Koo, I. S.

    2005-05-01

    When digitalizing the Instrumentation and Control (I and C) systems in Nuclear Power Plants (NPP), a communication network is required for exchanging the digitalized data between I and C equipment in an NPP. A requirements analysis and an analysis of design elements and techniques are required for the design of a communication network. Through the requirements analysis of code and regulation documents such as NUREG/CR-6082, section 7.9 of NUREG 0800, IEEE Standard 7-4.3.2 and IEEE Standard 603, the extracted requirements can be used as a design basis and design concept for a detailed design of a communication network in the I and C system of an integral reactor. Design elements and techniques such as physical topology, protocol, transmission media and interconnection devices should be considered when designing a communication network. Each design element and technique should be analyzed and evaluated as a portion of the integrated communication network design. In this report, the basic design requirements related to the design of a communication network are investigated using the code and regulation documents, and an analysis of the design elements and techniques is performed. Based on these investigations and analyses, an overall architecture including the safety communication network and the non-safety communication network is proposed for an integral reactor

  8. Requirements management: A CSR's perspective

    Science.gov (United States)

    Thompson, Joanie

    1991-01-01

    The following subject areas are covered: customer service overview of network service request processing; Customer Service Representative (CSR) responsibility matrix; extract from a sample Memorandum of Understanding; Network Service Request Form and its instructions; sample notification of receipt; and requirements management in the NASA Science Internet.

  9. A quantitative method for groundwater surveillance monitoring network design at the Hanford Site

    International Nuclear Information System (INIS)

    Meyer, P.D.

    1993-12-01

    As part of the Environmental Surveillance Program at the Hanford Site, mandated by the US Department of Energy, hundreds of groundwater wells are sampled each year, with each sample typically analyzed for a variety of constituents. The groundwater sampling program must satisfy several broad objectives. These objectives include an integrated assessment of the condition of groundwater and the identification and quantification of existing, emerging, or potential groundwater problems. Several quantitative network design objectives are proposed and a mathematical optimization model is developed from these objectives. The model attempts to find minimum cost network alternatives that maximize the amount of information generated by the network. Information is measured both by the rate of change with respect to time of the contaminant concentration and the uncertainty in contaminant concentration. In an application to tritium monitoring at the Hanford Site, both information measures were derived from historical data using time series analysis

  10. Absolute Quantification of Toxicological Biomarkers via Mass Spectrometry.

    Science.gov (United States)

    Lau, Thomas Y K; Collins, Ben C; Stone, Peter; Tang, Ning; Gallagher, William M; Pennington, Stephen R

    2017-01-01

    With the advent of "-omics" technologies there has been an explosion of data generation in the field of toxicology, as well as many others. As new candidate biomarkers of toxicity are being regularly discovered, the next challenge is to validate these observations in a targeted manner. Traditionally, these validation experiments have been conducted using antibody-based technologies such as Western blotting, ELISA, and immunohistochemistry. However, this often produces a significant bottleneck as the time, cost, and development of successful antibodies are often far outpaced by the generation of targets of interest. In response to this, there recently have been several developments in the use of triple quadrupole (QQQ) mass spectrometry (MS) as a platform to provide quantification of proteins. This technology does not require antibodies; it is typically less expensive and quicker to develop assays and has the opportunity for more accessible multiplexing. The speed of these experiments combined with their flexibility and ability to multiplex assays makes the technique a valuable strategy to validate biomarker discovery.

  11. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    Science.gov (United States)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
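
    As a toy illustration of the Monte Carlo approach described above, the sketch below perturbs synthetic radiances with instrument noise, applies a simple least-squares retrieval, and summarizes the spread of the retrieved estimates; the linear forward model, noise level and state vector are placeholders, not those of OCO-2 or any real retrieval algorithm.

```python
# Minimal sketch of Monte Carlo uncertainty quantification for a retrieval:
# repeatedly perturb synthetic radiances with noise, apply a toy
# least-squares retrieval, and summarize the spread of the retrieved state.
# The linear forward model is a placeholder, not a real instrument model.
import numpy as np

rng = np.random.default_rng(1)
K = np.array([[1.0, 0.5],
              [0.2, 1.5],
              [0.8, 0.3]])              # toy forward model (Jacobian)
x_true = np.array([2.0, 1.0])           # "true" geophysical state
noise_sd = 0.05

retrieved = []
for _ in range(5000):
    y = K @ x_true + rng.normal(0.0, noise_sd, size=3)   # noisy radiances
    x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)        # toy retrieval
    retrieved.append(x_hat)

retrieved = np.array(retrieved)
print("mean retrieved state:", retrieved.mean(axis=0))
print("retrieval std dev   :", retrieved.std(axis=0))
```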

  12. Experimental Study on OSNR Requirements for Spectrum-Flexible Optical Networks

    DEFF Research Database (Denmark)

    Borkowski, Robert; Karinou, Fotini; Angelou, Marianna

    2012-01-01

    The flexibility and elasticity of the spectrum is an important topic today. As the capacity of deployed fiber-optic systems is becoming scarce, it is vital to shift towards solutions ensuring higher spectral efficiency. Working in this direction, we report an extensive experimental study on adaptive allocation of superchannels in a spectrum-flexible heterogeneous optical network. In total, three superchannels were transmitted. Two 5-subcarrier 14-GHz-spaced, 14 Gbaud, polarization-division-multiplexed (PDM) quadrature-phase-shift-keyed (QPSK) superchannels were separated by a spectral gap ... to maintain a 1×10−3 bit error rate of the central BOI subcarrier. The results provide a rule of thumb that can be exploited in resource allocation mechanisms of future spectrum-flexible optical networks.

  13. Comparison of Colorimetric Assays with Quantitative Amino Acid Analysis for Protein Quantification of Generalized Modules for Membrane Antigens (GMMA)

    OpenAIRE

    Rossi, Omar; Maggiore, Luana; Necchi, Francesca; Koeberling, Oliver; MacLennan, Calman A.; Saul, Allan; Gerke, Christiane

    2014-01-01

    Genetically induced outer membrane particles from Gram-negative bacteria, called Generalized Modules for Membrane Antigens (GMMA), are being investigated as vaccines. Rapid methods are required for estimating the protein content for in-process assays during production. Since GMMA are complex biological structures containing lipid and polysaccharide as well as protein, protein determinations are not necessarily straightforward. We compared protein quantification by Bradford, Lowry, and Non-Int...

  14. Linkage reliability in local area network

    International Nuclear Information System (INIS)

    Buissson, J.; Sanchis, P.

    1984-11-01

    The local area networks for industrial applications, e.g. in nuclear power plants, differ from their counterparts intended for office use in that they are required to meet more stringent requirements in terms of reliability, security and availability. The designers of such networks take full advantage of the office-oriented developments (more specifically the integrated circuits) and increase their performance capabilities with respect to the industrial requirements [fr]

  15. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    ... alternative for the quantification of the disease syndromes in regards to this crop. The result of these ... comparison of treatments such as cultivars or control measures and ... Vascular discoloration and stem necrosis.

  16. Lamb Wave Damage Quantification Using GA-Based LS-SVM

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-06-01

    Full Text Available Lamb waves have been reported to be an efficient tool for non-destructive evaluations (NDE for various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge, due to the complex underlying mechanism of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least square support vector machine (LS-SVM and a genetic algorithm (GA. Three damage sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe changes of Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, the GA-based LS-SVM model using the proposed three damage sensitive features was implemented to evaluate the crack size. The GA method was adopted to optimize the model parameters. The results of GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases of different loading and manufacturer were also included to further verify the robustness of the proposed method for crack quantification.

  17. Lamb Wave Damage Quantification Using GA-Based LS-SVM.

    Science.gov (United States)

    Sun, Fuqiang; Wang, Ning; He, Jingjing; Guan, Xuefei; Yang, Jinsong

    2017-06-12

    Lamb waves have been reported to be an efficient tool for non-destructive evaluations (NDE) for various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge, due to the complex underlying mechanism of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least square support vector machine (LS-SVM) and a genetic algorithm (GA). Three damage sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe changes of Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, the GA-based LS-SVM model using the proposed three damage sensitive features was implemented to evaluate the crack size. The GA method was adopted to optimize the model parameters. The results of GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases of different loading and manufacturer were also included to further verify the robustness of the proposed method for crack quantification.
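
    As an illustration of the general approach (a kernel regression model whose hyperparameters are tuned by an evolutionary search), the sketch below uses scikit-learn's SVR as a stand-in for the LS-SVM, synthetic damage-sensitive features in place of the experimental data, and a simplified evolutionary loop (selection plus Gaussian mutation) in place of the full GA.

```python
# Minimal sketch of evolutionary hyperparameter tuning for an SVM regressor
# mapping damage-sensitive features to crack size. sklearn's SVR stands in
# for the paper's LS-SVM, the data are synthetic, and a simple
# selection-plus-mutation loop stands in for the full GA.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# synthetic [normalized amplitude, phase change, correlation coefficient]
X = rng.uniform(0.0, 1.0, size=(80, 3))
crack_size = 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.05, 80)

def fitness(log_c, log_gamma):
    """Cross-validated R^2 of an SVR with the given hyperparameters."""
    model = SVR(C=10.0 ** log_c, gamma=10.0 ** log_gamma)
    return cross_val_score(model, X, crack_size, cv=5, scoring="r2").mean()

pop = rng.uniform(low=[-2.0, -4.0], high=[3.0, 1.0], size=(20, 2))  # (log C, log gamma)
for generation in range(15):
    scores = np.array([fitness(c, g) for c, g in pop])
    parents = pop[np.argsort(scores)][-10:]                 # keep the best half
    children = parents[rng.integers(0, 10, size=10)].copy()
    children += rng.normal(0.0, 0.2, size=children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(c, g) for c, g in pop])]
print(f"best C ~ {10 ** best[0]:.3g}, best gamma ~ {10 ** best[1]:.3g}")
```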

  18. [DNA quantification of blood samples pre-treated with pyramidon].

    Science.gov (United States)

    Zhu, Chuan-Hong; Zheng, Dao-Li; Ni, Rao-Zhi; Wang, Hai-Sheng; Ning, Ping; Fang, Hui; Liu, Yan

    2014-06-01

    To study DNA quantification and STR typing of samples pre-treated with pyramidon. The blood samples of ten unrelated individuals were anticoagulated in EDTA. The blood stains were made on filter paper. The samples were divided into six experimental groups according to the storage time after pre-treatment with pyramidon: 30 min, 1 h, 3 h, 6 h, 12 h and 24 h. DNA was extracted by three methods: magnetic bead-based extraction, the QIAcube DNA purification method and the Chelex-100 method. The DNA was quantified by fluorescent quantitative PCR. STR typing was detected by PCR-STR fluorescent technology. With the same DNA extraction method, the sample DNA decreased gradually with time after pre-treatment with pyramidon. For the same storage time, the DNA quantification results of the different extraction methods showed significant differences. Sixteen-locus STR typing was detected in 90.56% of the samples. Pyramidon pre-treatment could cause DNA degradation, but effective STR typing can be achieved within 24 h. The magnetic bead-based extraction is the best method for STR profiling and DNA extraction.

  19. Optical CDMA components requirements

    Science.gov (United States)

    Chan, James K.

    1998-08-01

    Optical CDMA is a complementary multiple access technology to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC or supports a large number of high-speed users in LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by inherent CDMA coding). However, optical CDMA technology is less mature in comparison to WDMA. The component requirements are also different from those of WDMA. We have demonstrated a video transport/switching system over a distance of 40 km using discrete optical components in our laboratory. We are currently pursuing PIC implementation. In this paper, we will describe the optical CDMA concept/features, the demonstration system, and the requirements of some critical optical components such as broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and fixed/programmable mask.

  20. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    Directory of Open Access Journals (Sweden)

    Kevin McCluskey

    2017-08-01

    Full Text Available The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD. The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol.

  1. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from 2014 PIV challenge. Thorough sensitivity analysis was performed to assess the relative impact of the various parameters to the overall uncertainty. The results suggest that in absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
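
    As a simplified illustration of the final combination step, the sketch below adds independent uncertainty contributions in quadrature for one reconstructed velocity component; the sensitivity coefficients and magnitudes are placeholders, not the derivatives of the actual stereo reconstruction equations or the paper's full propagation framework.

```python
# Minimal sketch of combining independent uncertainty components in
# quadrature for one reconstructed velocity component. The sensitivity
# coefficients are illustrative placeholders, not the actual derivatives of
# the stereo reconstruction equations used in the paper.
import numpy as np

sigma_u_cam1 = 0.08      # planar uncertainty, camera 1 (px/frame)
sigma_u_cam2 = 0.06      # planar uncertainty, camera 2 (px/frame)
sigma_angle = 0.002      # viewing-angle (calibration) uncertainty (rad)

# Hypothetical sensitivities of the reconstructed component to each input
du_dcam1, du_dcam2, du_dangle = 0.7, 0.7, 5.0

sigma_u = np.sqrt((du_dcam1 * sigma_u_cam1) ** 2
                  + (du_dcam2 * sigma_u_cam2) ** 2
                  + (du_dangle * sigma_angle) ** 2)
print(f"combined uncertainty in reconstructed u: {sigma_u:.3f} px/frame")
```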

  2. Absolute and direct microRNA quantification using DNA-gold nanoparticle probes.

    Science.gov (United States)

    Degliangeli, Federica; Kshirsagar, Prakash; Brunetti, Virgilio; Pompa, Pier Paolo; Fiammengo, Roberto

    2014-02-12

    DNA-gold nanoparticle probes are implemented in a simple strategy for direct microRNA (miRNA) quantification. Fluorescently labeled DNA-probe strands are immobilized on PEGylated gold nanoparticles (AuNPs). In the presence of target miRNA, DNA-RNA heteroduplexes are formed and become substrate for the endonuclease DSN (duplex-specific nuclease). Enzymatic hydrolysis of the DNA strands yields a fluorescence signal due to diffusion of the fluorophores away from the gold surface. We show that the molecular design of our DNA-AuNP probes, with the DNA strands immobilized on top of the PEG-based passivation layer, results in nearly unaltered enzymatic activity toward immobilized heteroduplexes compared to substrates free in solution. The assay, developed in a real-time format, allows absolute quantification of as little as 0.2 fmol of miR-203. We also show the application of the assay for direct quantification of cancer-related miR-203 and miR-21 in samples of extracted total RNA from cell cultures. The possibility of direct and absolute quantification may significantly advance the use of microRNAs as biomarkers in the clinical praxis.

  3. Activity quantification of phantom using dual-head SPECT with two-view planar image

    International Nuclear Information System (INIS)

    Guo Leiming; Chen Tao; Sun Xiaoguang; Huang Gang

    2005-01-01

    The absorbed radiation dose from an internally deposited radionuclide is a major factor in assessing risk and therapeutic utility in nuclear medicine diagnosis or treatment. The quantification of absolute activity in vivo is a necessary procedure for estimating the absorbed dose of an organ or tissue. To understand the accuracy in the determination of organ activity, experiments on 99mTc activity quantification were made for a body phantom using dual-head SPECT with the two-view counting technique. Accuracy in the activity quantification is credible and is not affected by the depth of the source organ in vivo. When the diameter of the radiation source is ≤2 cm, the most accurate activity quantification result can be obtained on the basis of establishing the system calibration factor and the transmission factor. The use of Buijs's method is preferable, especially at very low source-to-background activity concentration ratios. (authors)
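
    For background, a commonly used form of the conjugate-view (two-view) activity estimate is shown below; the notation is generic and not necessarily that of the cited study.

```latex
% Conjugate-view (two-view) activity estimate, generic textbook form.
% I_A, I_P : anterior and posterior planar counts
% T        : body transmission factor along the projection line, T = e^{-\mu L}
% C        : system calibration factor (counts per unit activity)
A \;=\; \frac{1}{C}\,\sqrt{\frac{I_A\, I_P}{T}}
```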

  4. Single-shot secure quantum network coding on butterfly network with free public communication

    Science.gov (United States)

    Owari, Masaki; Kato, Go; Hayashi, Masahito

    2018-01-01

    Quantum network coding on the butterfly network has been studied as a typical example of a quantum multiple-cast network. We propose a secure quantum network code for the butterfly network with free public classical communication in the multiple unicast setting under restricted eavesdropper's power. This protocol certainly transmits quantum states when there is no attack. We also show the secrecy with shared randomness as an additional resource when the eavesdropper wiretaps one of the channels in the butterfly network and also obtains the information sent through the public classical communication. Our protocol does not require a verification process, which ensures single-shot security.

  5. Towards a Diagnostic Instrument to Identify Improvement Opportunities for Quality Controlled Logistics in Agrifood Supply Chain Networks

    Directory of Open Access Journals (Sweden)

    Jack G.A.J. van der Vorst

    2011-10-01

    Full Text Available Western-European consumers have become not only more demanding on product availability in retail outlets but also on other food attributes such as quality, integrity, and safety. When (re)designing food supply-chain networks, from a logistics point of view, one has to consider these demands next to traditional efficiency and responsiveness requirements. The concept 'quality controlled logistics' (QCL) hypothesizes that if product quality in each step of the supply chain can be predicted in advance, goods flows can be controlled in a pro-active manner and better chain designs can be established resulting in higher product availability, constant quality, and less product losses. The paper discusses opportunities of using real-time product quality information for improvement of the design and management of 'AgriFood Supply Chain Networks', and presents a preliminary diagnostic instrument for assessment of 'critical quality' and 'logistics control' points in the supply chain network. Results of a tomato-chain case illustrate the added value of the QCL concept for identifying improvement opportunities in the supply chain as to increase both product availability and quality. Future research aims for the further development of the diagnostic instrument and the quantification of costs and benefits of QCL scenarios.

  6. Improved Strategies and Optimization of Calibration Models for Real-time PCR Absolute Quantification

    Science.gov (United States)

    Real-time PCR absolute quantification applications rely on the use of standard curves to make estimates of DNA target concentrations in unknown samples. Traditional absolute quantification approaches dictate that a standard curve must accompany each experimental run. However, t...

  7. Quantification of uranyl in presence of citric acid

    International Nuclear Information System (INIS)

    Garcia G, N.; Barrera D, C.E.; Ordonez R, E.

    2007-01-01

    To determine the influence that the organic matter of the soil has on uranyl sorption on some solids, it is necessary to have a detection and quantification technique for uranyl that is reliable and sufficiently quick in obtaining results. Therefore, in this work it is proposed to carry out the uranyl quantification in the presence of citric acid by modifying the UV-Vis radiation-induced fluorescence technique. Since the uranyl ion is very sensitive to the medium that contains it (speciation, pH, ionic strength, etc.), it was necessary to develop an analysis technique that enhances the fluorescence of the uranyl ion while avoiding the quenching produced by the organic acids. (Author)

  8. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    Science.gov (United States)

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator to the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system extracts and segments the renal tissue structures based on colour information and structural assumptions of the tissue structures. The regions in the biopsy representing the interstitial fibrosis are deduced through the elimination of non-interstitial fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. A ground truth image dataset has been manually prepared by consulting an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists have demonstrated a good correlation in quantification result between the automated system and the pathologists' visual evaluation. Experiments investigating the variability in pathologists also proved the automated quantification error rate to be on par with the average intra-observer variability in pathologists' quantification.
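
    As a toy illustration of the final quantification step (expressing segmented fibrosis as a percentage of the biopsy area), the sketch below applies arbitrary colour thresholds to a stand-in image; the thresholds and masks are placeholders, not the segmentation rules of the published system.

```python
# Minimal sketch of the quantification step: estimate interstitial fibrosis
# as a percentage of the biopsy area from per-pixel masks. The colour
# thresholds are arbitrary placeholders, not the published segmentation rules.
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(512, 512, 3), dtype=np.uint8)  # stand-in image

# Hypothetical masks: tissue = not background-white; fibrosis = "bluish" pixels
tissue_mask = rgb.mean(axis=2) < 230
fibrosis_mask = tissue_mask & (rgb[:, :, 2].astype(int) - rgb[:, :, 0] > 30)

fibrosis_pct = 100.0 * fibrosis_mask.sum() / max(tissue_mask.sum(), 1)
print(f"interstitial fibrosis: {fibrosis_pct:.1f}% of biopsy area")
```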

  9. Comparison of colorimetric assays with quantitative amino acid analysis for protein quantification of Generalized Modules for Membrane Antigens (GMMA).

    Science.gov (United States)

    Rossi, Omar; Maggiore, Luana; Necchi, Francesca; Koeberling, Oliver; MacLennan, Calman A; Saul, Allan; Gerke, Christiane

    2015-01-01

    Genetically induced outer membrane particles from Gram-negative bacteria, called Generalized Modules for Membrane Antigens (GMMA), are being investigated as vaccines. Rapid methods are required for estimating the protein content for in-process assays during production. Since GMMA are complex biological structures containing lipid and polysaccharide as well as protein, protein determinations are not necessarily straightforward. We compared protein quantification by Bradford, Lowry, and Non-Interfering assays using bovine serum albumin (BSA) as standard with quantitative amino acid (AA) analysis, the most accurate currently available method for protein quantification. The Lowry assay has the lowest inter- and intra-assay variation and gives the best linearity between protein amount and absorbance. In all three assays, the color yield (optical density per mass of protein) of GMMA was markedly different from that of BSA with a ratio of approximately 4 for the Bradford assay, and highly variable between different GMMA; and approximately 0.7 for the Lowry and Non-Interfering assays, highlighting the need for calibrating the standard used in the colorimetric assay against GMMA quantified by AA analysis. In terms of a combination of ease, reproducibility, and proportionality of protein measurement, and comparability between samples, the Lowry assay was superior to Bradford and Non-Interfering assays for GMMA quantification.

  10. Function-Oriented Networking and On-Demand Routing System in Network Using Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Young-Bo Sim

    2017-11-01

    Full Text Available In this paper, we proposed and developed Function-Oriented Networking (FON), a platform for network users. Its philosophy differs from that of OpenFlow, the Software-Defined Networking technology aimed at network managers. Unlike the existing OpenFlow and Network Functions Virtualization (NFV), which do not directly reflect the needs of network users, FON can immediately reflect the demands of the network users in the network. It allows the network user to determine the network policy directly, so policies can be applied more precisely than those applied by the network manager. This is expected to increase the satisfaction of the service users when the network users try to provide new services. We developed a FON function that performs on-demand routing for low-delay-required services. We analyzed the characteristics of the Ant Colony Optimization (ACO) algorithm and found that the algorithm is suitable for low-delay-required services. It was also the first in the world to implement routing software using the ACO algorithm in a real Ethernet network. In order to improve the routing performance, several variants of the ACO algorithm were developed to enable faster path-search routing and path recovery. The relationship between the network performance indices and the ACO routing parameters is derived, and the results are compared and analyzed. Through this, it was possible to develop the ACO algorithm.
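
    As a generic illustration of ACO-based path selection (not the FON implementation), the sketch below runs a small ant colony over a toy weighted graph to find a low-delay path between two nodes; the topology, link delays and ACO parameters are all illustrative.

```python
# Minimal sketch of Ant Colony Optimization for path selection between two
# nodes of a small weighted graph. Topology, delays and parameters are
# illustrative, not those of the FON implementation.
import random

graph = {                      # node -> {neighbour: link delay}
    "A": {"B": 2.0, "C": 5.0},
    "B": {"C": 1.0, "D": 4.0},
    "C": {"D": 1.0},
    "D": {},
}
pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
alpha, beta, rho, n_ants, n_iters = 1.0, 2.0, 0.3, 20, 50

def walk(src, dst):
    """One ant's probabilistic walk from src to dst (no revisits)."""
    path, node = [src], src
    while node != dst:
        options = [v for v in graph[node] if v not in path]
        if not options:
            return None
        weights = [pheromone[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                   for v in options]
        node = random.choices(options, weights=weights)[0]
        path.append(node)
    return path

def cost(path):
    return sum(graph[u][v] for u, v in zip(path, path[1:]))

best = None
for _ in range(n_iters):
    tours = [p for p in (walk("A", "D") for _ in range(n_ants)) if p]
    for key in pheromone:                         # evaporation
        pheromone[key] *= 1.0 - rho
    for p in tours:                               # deposit, proportional to quality
        for u, v in zip(p, p[1:]):
            pheromone[(u, v)] += 1.0 / cost(p)
    if tours:
        candidate = min(tours, key=cost)
        if best is None or cost(candidate) < cost(best):
            best = candidate

print("best path:", best, "total delay:", cost(best))
```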

  11. On the Confounding Effect of Temperature on Chemical Shift-Encoded Fat Quantification

    Science.gov (United States)

    Hernando, Diego; Sharma, Samir D.; Kramer, Harald; Reeder, Scott B.

    2014-01-01

    Purpose: To characterize the confounding effect of temperature on chemical shift-encoded (CSE) fat quantification. Methods: The proton resonance frequency of water, unlike triglycerides, depends on temperature. This leads to a temperature dependence of the spectral models of fat (relative to water) that are commonly used by CSE-MRI methods. Simulation analysis was performed for 1.5 Tesla CSE fat–water signals at various temperatures and echo time combinations. Oil–water phantoms were constructed and scanned at temperatures between 0 and 40°C using spectroscopy and CSE imaging at three echo time combinations. An explanted human liver, rejected for transplantation due to steatosis, was scanned using spectroscopy and CSE imaging. Fat–water reconstructions were performed using four different techniques: magnitude and complex fitting, with standard or temperature-corrected signal modeling. Results: In all experiments, magnitude fitting with standard signal modeling resulted in large fat quantification errors. Errors were largest for echo time combinations near TEinit ≈ 1.3 ms, ΔTE ≈ 2.2 ms. Errors in fat quantification caused by temperature-related frequency shifts were smaller with complex fitting, and were avoided using a temperature-corrected signal model. Conclusion: Temperature is a confounding factor for fat quantification. If not accounted for, it can result in large errors in fat quantifications in phantom and ex vivo acquisitions. PMID:24123362

  12. An approach to efficient mobility management in intelligent networks

    Science.gov (United States)

    Murthy, K. M. S.

    1995-01-01

    Providing personal communication systems supporting full mobility requires intelligent networks for tracking mobile users and facilitating outgoing and incoming calls over different physical and network environments. In realizing the intelligent network functionalities, databases play a major role. Currently proposed network architectures envision using the SS7-based signaling network for linking these DBs and also for interconnecting DBs with switches. If the network has to support ubiquitous, seamless mobile services, then it additionally has to support mobile application parts, viz., mobile origination calls, mobile destination calls, mobile location updates and inter-switch handovers. These functions will generate a significant amount of data and require the data to be transferred between databases (HLR, VLR) and switches (MSCs) very efficiently. In the future, the users (fixed or mobile) may use and communicate with sophisticated CPEs (e.g. multimedia, multipoint and multisession calls) which may require complex signaling functions. This will generate voluminous service handling data and require efficient transfer of these messages between databases and switches. Consequently, the network providers would be able to add new services and capabilities to their networks incrementally, quickly and cost-effectively.

  13. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic, because computer accuracy is limited. Inaccuracy can be due to different aspects. For example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summation of many very different numbers, etc. The basic objective of this paper is to find a procedure which eliminates errors made by a PC when calculations close to an error limit are executed. A highly reliable system is represented by the use of a directed acyclic graph which is composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is based on merits of a high-performance language for technical computing, MATLAB. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of a lot of very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm for exact summation of such numbers is designed in the paper. The summation procedure uses benefits from a special number system with the base represented by the value 2^32. The computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from references are performed to emphasize the merits of the methodology.
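
    As a simple illustration of the floating-point summation pitfall the paper addresses, the sketch below compares naive accumulation with Python's math.fsum; this is only an analogue of the problem, not the paper's base-2^32 exact-summation algorithm.

```python
# Illustration of the summation pitfall: adding many numbers of very
# different magnitude in ordinary floating point loses precision.
# math.fsum (correctly rounded summation) is shown only as a simple
# analogue of the paper's base-2^32 exact-summation scheme.
import math

values = [1.0e16] + [1.0e-3] * 1_000_000 + [-1.0e16]

naive = 0.0
for v in values:
    naive += v                      # small terms are lost against 1e16

exact = math.fsum(values)           # correctly rounded sum

print(f"naive sum : {naive!r}")     # typically far from 1000.0
print(f"fsum      : {exact!r}")     # approximately 1000.0
```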

  14. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.

  15. Networking at NASA. Johnson Space Center

    Science.gov (United States)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  16. Quantification of taurine in energy drinks using ¹H NMR.

    Science.gov (United States)

    Hohmann, Monika; Felbinger, Christine; Christoph, Norbert; Wachter, Helmut; Wiest, Johannes; Holzgrabe, Ulrike

    2014-05-01

    The consumption of so-called energy drinks is increasing, especially among adolescents. These beverages commonly contain considerable amounts of the amino sulfonic acid taurine, which is related to a multitude of various physiological effects. The customary method to control the legal limit of taurine in energy drinks is LC-UV/vis with postcolumn derivatization using ninhydrin. In this paper we describe the quantification of taurine in energy drinks by ¹H NMR as an alternative to existing methods of quantification. Variation of pH values revealed the separation of a distinct taurine signal in ¹H NMR spectra, which was applied for integration and quantification. Quantification was performed using external calibration (R² > 0.9999; linearity verified by Mandel's fitting test with a 95% confidence level) and PULCON. Taurine concentrations in 20 different energy drinks were analyzed both by ¹H NMR and LC-UV/vis. The deviation between ¹H NMR and LC-UV/vis results was always below the expanded measurement uncertainty of 12.2% for the LC-UV/vis method (95% confidence level) and at worst 10.4%. Due to the high accordance with the LC-UV/vis data and adequate recovery rates (ranging between 97.1% and 108.2%), ¹H NMR measurement presents a suitable method to quantify taurine in energy drinks. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Initial water quantification results using neutron computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.K. [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States)], E-mail: axh174@psu.edu; Shi, L.; Brenizer, J.S.; Mench, M.M. [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States)

    2009-06-21

    Neutron computed tomography is an important imaging tool in the field of non-destructive testing and in fundamental research for many engineering applications. In contrast to X-rays, neutrons are attenuated by some light materials, such as hydrogen, but penetrate many heavy materials. Thus, neutron computed tomography is useful in obtaining important three-dimensional information about a sample's interior structure and material properties that other traditional methods cannot provide. The neutron computed tomography system at Pennsylvania State University's Radiation Science and Engineering Center is being utilized to develop a water quantification technique for investigation of water distribution in fuel cells under normal conditions. A hollow aluminum cylinder test sample filled with a known volume of water was constructed for purposes of testing the quantification technique. Transmission images of the test sample at different angles were easily acquired using a dedicated image acquisition computer driving a rotary table controller together with an in-house developed synchronization software package. After data acquisition, Octopus (version 8.2) and VGStudio Max (version 1.2) were used to perform cross-sectional and three-dimensional reconstructions of the sample, respectively. The initial reconstructions and water quantification results are presented.
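
    A minimal sketch of how water thickness is commonly derived from neutron transmission data via the Beer-Lambert law; the attenuation coefficient and pixel values are assumed for illustration and are not the facility's calibration.

```python
import numpy as np

# Beer-Lambert attenuation: I = I0 * exp(-mu_w * t_w), so the water thickness
# along a ray follows from the ratio of "wet" to "dry" transmission images.
MU_WATER = 3.5  # assumed effective attenuation coefficient for water, 1/cm

def water_thickness(transmission_wet, transmission_dry, mu=MU_WATER):
    """Per-pixel water thickness (cm) from wet/dry neutron transmission images."""
    ratio = np.clip(transmission_wet / transmission_dry, 1e-6, 1.0)
    return -np.log(ratio) / mu

# Toy example: one pixel whose transmission drops from 0.80 (dry) to 0.55 (wet).
t = water_thickness(np.array([0.55]), np.array([0.80]))
print(f"water thickness ~ {t[0]:.3f} cm")

# Summing thickness * pixel area over the whole image yields a water volume
# that can be checked against the known volume in the calibration cylinder.
```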

  18. Decision peptide-driven: a free software tool for accurate protein quantification using gel electrophoresis and matrix assisted laser desorption ionization time of flight mass spectrometry.

    Science.gov (United States)

    Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L

    2010-09-15

    The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse ¹⁸O-labeling; and (4) matrix assisted laser desorption ionization time of flight (MALDI) mass spectrometry analysis. The DPD software compares the MALDI results of the direct and inverse ¹⁸O-labeling experiments and quickly identifies those peptides with paralleled losses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming, since all comparisons must be done manually. The DPD software shortens and simplifies the search for the peptides that must be used for quantification from a week to just a few minutes. To do so, it takes several MALDI spectra as input and, in an automatic mode, aids the researcher (i) in comparing data from direct and inverse ¹⁸O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) in using those peptides as internal standards for subsequent accurate protein quantification using ¹⁸O-labeling. In this work the DPD software is presented and illustrated with the quantification of the protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
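
    The ratio-comparison step described above can be sketched in a few lines: peptides are kept only when the direct and inverse 18O-labeling ratios agree within a tolerance. The peptide masses, intensities and tolerance below are invented; this is not the DPD code.

```python
# Schematic selection of peptides with "paralleled losses": the direct-labeling
# ratio and the label-swapped inverse ratio should agree within a tolerance.
direct = {   # peptide m/z -> (light intensity, heavy intensity), direct experiment
    1045.5: (8.2e4, 4.1e4),
    1278.7: (5.0e4, 5.2e4),
    1533.8: (9.1e4, 2.0e4),
}
inverse = {  # same peptides, labels swapped
    1045.5: (4.0e4, 8.3e4),
    1278.7: (5.1e4, 4.9e4),
    1533.8: (6.5e4, 6.4e4),
}

def paralleled(direct, inverse, tolerance=0.15):
    selected = []
    for mz in direct:
        d_light, d_heavy = direct[mz]
        i_light, i_heavy = inverse[mz]
        r_direct = d_light / d_heavy    # 16O/18O in the direct experiment
        r_inverse = i_heavy / i_light   # inverted back for comparison
        if abs(r_direct - r_inverse) / r_direct <= tolerance:
            selected.append(mz)
    return selected

print("peptides usable as internal standards:", paralleled(direct, inverse))
```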

  19. AN/VRC 118 Mid-Tier Networking Vehicular Radio (MNVR) and Joint Enterprise Network Manager (JENM) Early Fielding Report

    Science.gov (United States)

    2017-01-18

    requirements. The Army intends to conduct the MNVR Initial Operational Test and Evaluation (IOT&E) with the new radio in FY21 to support a fielding decision... improve the commander's ability to conduct mission command over the MNVR WNW mid-tier network. Network Usage During the 2016 MNVR Operational... its reliability requirement in a loaded network simulating full brigade usage. Based on the results of developmental test, the Army made

  20. Pinning control of complex networked systems synchronization, consensus and flocking of networked systems via pinning

    CERN Document Server

    Su, Housheng

    2013-01-01

    Synchronization, consensus and flocking are ubiquitous requirements in networked systems. Pinning Control of Complex Networked Systems investigates these requirements by using the pinning control strategy, which aims to control a whole dynamical network with a huge number of nodes by imposing controllers on only a fraction of the nodes. As the direct control of every node in a dynamical network with a huge number of nodes might be impossible or unnecessary, the pinning control strategy is very important for the synchronization of complex dynamical networks. The research on pinning control strategies in consensus and flocking of multi-agent systems can not only help us better understand the mechanisms of natural collective phenomena, but also benefit applications in mobile sensor/robot networks. This book offers a valuable resource for researchers and engineers working in the fields of control theory and control engineering.   Housheng Su is an Associate Professor at the Department of Contro...

  1. Virtualized Network Control (VNC)

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Thomas [Univ. of Southern California, Los Angeles, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-01-31

    The focus of this project was on the development of a "Network Service Plane" as an abstraction model for the control and provisioning of multi-layer networks. The primary motivation for this work was the requirements of next generation networked applications, which will need to access advanced networking as a first-class resource at the same level as compute and storage resources. A new class of "Intelligent Network Services" was defined in order to facilitate the integration of advanced network services into application-specific workflows. This new class of network services is intended to enable real-time interaction between the application co-scheduling algorithms and the network for the purposes of workflow planning, real-time resource availability identification, scheduling, and provisioning actions.

  2. eAMI: A Qualitative Quantification of Periodic Breathing Based on Amplitude of Oscillations

    Science.gov (United States)

    Fernandez Tellez, Helio; Pattyn, Nathalie; Mairesse, Olivier; Dolenc-Groselj, Leja; Eiken, Ola; Mekjavic, Igor B.; Migeotte, P. F.; Macdonald-Nethercott, Eoin; Meeusen, Romain; Neyt, Xavier

    2015-01-01

    Study Objectives: Periodic breathing is sleep disordered breathing characterized by instability in the respiratory pattern that exhibits an oscillatory behavior. Periodic breathing is associated with increased mortality, and it is observed in a variety of situations, such as acute hypoxia, chronic heart failure, and damage to respiratory centers. The standard quantification for the diagnosis of sleep related breathing disorders is the apnea-hypopnea index (AHI), which measures the proportion of apneic/hypopneic events during polysomnography. Determining the AHI is labor-intensive and requires the simultaneous recording of airflow and oxygen saturation. In this paper, we propose an automated, simple, and novel methodology for the detection and qualification of periodic breathing: the estimated amplitude modulation index (eAMI). Patients or Participants: Antarctic cohort (3,800 meters): 13 normal individuals. Clinical cohort: 39 different patients suffering from diverse sleep-related pathologies. Measurements and Results: When tested in a population with high levels of periodic breathing (Antarctic cohort), eAMI was closely correlated with AHI (r = 0.95, P […]). Citation: Fernandez Tellez H, Pattyn N, Mairesse O, Dolenc-Groselj L, Eiken O, Mekjavic IB, Migeotte PF, Macdonald-Nethercott E, Meeusen R, Neyt X. eAMI: a qualitative quantification of periodic breathing based on amplitude of oscillations. SLEEP 2015;38(3):381–389. PMID:25581914

  3. Evaluation of selected recurrence measures in discriminating pre-ictal and inter-ictal periods from epileptic EEG data

    International Nuclear Information System (INIS)

    Ngamga, Eulalie Joelle; Bialonski, Stephan; Marwan, Norbert; Kurths, Jürgen; Geier, Christian; Lehnertz, Klaus

    2016-01-01

    We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database. - Highlights: • Recurrence-based analysis of brain dynamics in human epilepsy. • Comparison of recurrence quantification and recurrence network measures. • Statistically significant precursory structures in three out of five patients. • High congruence among measures in characterizing brain dynamics.
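
    For readers unfamiliar with recurrence quantification analysis, the sketch below builds a recurrence matrix for a toy delay-embedded signal and reports two simple measures; the signal, embedding parameters and threshold are illustrative assumptions, and the paper's EEG pipeline and recurrence-network measures are not reproduced.

```python
import numpy as np

# Toy recurrence quantification: recurrence matrix of a delay-embedded signal.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.05)
signal = np.sin(t) + 0.1 * rng.normal(size=t.size)

# Time-delay embedding (dimension 3, delay 10 samples).
m, tau = 3, 10
n = signal.size - (m - 1) * tau
X = np.column_stack([signal[i * tau:i * tau + n] for i in range(m)])

# Recurrence matrix: pairs of states closer than a threshold epsilon recur.
eps = 0.3
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
R = (dists < eps).astype(int)

recurrence_rate = R.mean()
# Crude determinism proxy: fraction of recurrent points whose diagonal
# successor also recurs (standard DET uses full diagonal-line histograms).
diag_pairs = (R[:-1, :-1] & R[1:, 1:]).sum()
determinism_proxy = diag_pairs / max(R.sum(), 1)
print(f"recurrence rate = {recurrence_rate:.3f}, "
      f"determinism proxy = {determinism_proxy:.3f}")
```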

  4. Evaluation of selected recurrence measures in discriminating pre-ictal and inter-ictal periods from epileptic EEG data

    Energy Technology Data Exchange (ETDEWEB)

    Ngamga, Eulalie Joelle [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Bialonski, Stephan [Max-Planck-Institute for the Physics of Complex Systems, Nöthnitzer Straße 38, 01187 Dresden (Germany); Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Department of Physics, Humboldt University Berlin, 12489 Berlin (Germany); Institute for Complex Systems and Mathematical Biology, University of Aberdeen, Aberdeen AB24 3UE (United Kingdom); Geier, Christian [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14–16, 53115 Bonn (Germany); Lehnertz, Klaus [Department of Epileptology, University of Bonn, Sigmund-Freud-Straße 25, 53105 Bonn (Germany); Helmholtz Institute for Radiation and Nuclear Physics, University of Bonn, Nussallee 14–16, 53115 Bonn (Germany); Interdisciplinary Center for Complex Systems, University of Bonn, Brühler Straße 7, 53175 Bonn (Germany)

    2016-04-01

    We investigate the suitability of selected measures of complexity based on recurrence quantification analysis and recurrence networks for an identification of pre-seizure states in multi-day, multi-channel, invasive electroencephalographic recordings from five epilepsy patients. We employ several statistical techniques to avoid spurious findings due to various influencing factors and due to multiple comparisons and observe precursory structures in three patients. Our findings indicate a high congruence among measures in identifying seizure precursors and emphasize the current notion of seizure generation in large-scale epileptic networks. A final judgment of the suitability for field studies, however, requires evaluation on a larger database. - Highlights: • Recurrence-based analysis of brain dynamics in human epilepsy. • Comparison of recurrence quantification and recurrence network measures. • Statistically significant precursory structures in three out of five patients. • High congruence among measures in characterizing brain dynamics.

  5. Maximizing lifetime of wireless sensor networks using genetic approach

    DEFF Research Database (Denmark)

    Wagh, Sanjeev; Prasad, Ramjee

    2014-01-01

    The wireless sensor networks are designed to install smart network applications or networks for emergency solutions, where human interaction is not possible. The nodes in wireless sensor networks have to self-organize according to the users' requirements through monitoring their environments. As the sensor...-objective parameters are considered to solve the problem using a genetic algorithm of the evolutionary approach.

  6. Software Defined Networking to support IP address mobility in future LTE network

    NARCIS (Netherlands)

    Karimzadeh Motallebi Azar, Morteza; Valtulina, Luca; van den Berg, Hans Leo; Pras, Aiko; Liebsch, Marco; Taleb, Tarik

    2017-01-01

    The existing LTE network architecture does not scale well to increasing demands due to its highly centralized and hierarchical composition. In this paper we discuss the major modifications required in the current LTE network to realize a decentralized LTE architecture. Next, we develop two IP

  7. An external standard method for quantification of human cytomegalovirus by PCR

    International Nuclear Information System (INIS)

    Rongsen, Shen; Liren, Ma; Fengqi, Zhou; Qingliang, Luo

    1997-01-01

    An external standard method for PCR quantification of HCMV is reported. [α-32P]dATP was used as a tracer. The 32P-labelled specific amplification product was separated by agarose gel electrophoresis. A gel piece containing the specific product band was excised and counted in a plastic scintillation counter. The distribution of [α-32P]dATP in the electrophoretic gel plate and the effect of separation between the 32P-labelled specific product and free [α-32P]dATP were examined. A standard curve for quantification of HCMV by PCR was established and detection results for quality-control templates are presented. The external standard method and the electrophoresis separation effect were evaluated. The results showed that the method can be used for relative quantification of HCMV. (author)
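
    The standard-curve idea can be sketched as a log-log linear fit of counts against template copy number, followed by inversion for an unknown sample; the counts and copy numbers below are hypothetical.

```python
import numpy as np

# Hypothetical external standard curve: 32P counts of the specific product
# versus log10 of the HCMV template copy number used in the PCR.
log_copies = np.log10(np.array([1e2, 1e3, 1e4, 1e5, 1e6]))
counts = np.array([850.0, 2400.0, 7100.0, 21000.0, 60000.0])

# Assume an approximately linear relation in log-log space for a fixed cycle
# number, and fit log10(counts) against log10(copies).
slope, intercept = np.polyfit(log_copies, np.log10(counts), 1)

def copies_from_counts(sample_counts):
    """Relative quantification of an unknown sample from its counts."""
    return 10 ** ((np.log10(sample_counts) - intercept) / slope)

print(f"estimated HCMV template copies: {copies_from_counts(5000.0):.0f}")
```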

  8. Routing architecture and security for airborne networks

    Science.gov (United States)

    Deng, Hongmei; Xie, Peng; Li, Jason; Xu, Roger; Levy, Renato

    2009-05-01

    Airborne networks are envisioned to provide interconnectivity for terrestrial and space networks by interconnecting highly mobile airborne platforms. A number of military applications are expected to be used by the operator, and all these applications require proper routing security support to establish correct routes between communicating platforms in a timely manner. As airborne networks are somewhat different from traditional wired and wireless networks (e.g., Internet, LAN, WLAN, MANET), the security mechanisms valid in those networks are not fully applicable to airborne networks. Designing an efficient security scheme to protect airborne networks is therefore confronted with new requirements. In this paper, we first identify a candidate routing architecture, which works as an underlying structure for our proposed security scheme. We then investigate the vulnerabilities and attack models against routing protocols in airborne networks. Based on these studies, we propose an integrated security solution to address routing security issues in airborne networks.

  9. FlowCam: Quantification and Classification of Phytoplankton by Imaging Flow Cytometry.

    Science.gov (United States)

    Poulton, Nicole J

    2016-01-01

    The ability to enumerate, classify, and determine the biomass of phytoplankton from environmental samples is essential for determining ecosystem function and phytoplankton's role in the aquatic community and microbial food web. Traditional micro-phytoplankton quantification methods using microscopic techniques require preservation and are slow, tedious and very laborious. The availability of more automated imaging microscopy platforms has revolutionized the way particles and cells are detected within their natural environment. The ability to examine cells unaltered and without preservation is key to providing more accurate cell concentration estimates and overall phytoplankton biomass. The FlowCam® is an imaging cytometry tool that was originally developed for use in aquatic sciences and provides a more rapid and unbiased method for enumerating and classifying phytoplankton within diverse aquatic environments.

  10. Operational risk quantification and modelling within Romanian insurance industry

    Directory of Open Access Journals (Sweden)

    Tudor Răzvan

    2017-07-01

    Full Text Available This paper aims at covering and describing the shortcomings of various models used to quantify and model operational risk within the insurance industry, with a particular focus on Romanian specific regulation: Norm 6/2015 concerning the operational risk issued by IT systems. While most of the local insurers are focusing on implementing the standard model to compute the operational risk solvency capital requirement, the local regulator has issued a norm that requires identifying and assessing IT-based operational risks from an ISO 27001 perspective. The challenges raised by the correlations assumed in the standard model are substantially increased by this new regulation, which requires only the identification and quantification of the IT operational risks. The solvency capital requirement stipulated by the implementation of Solvency II does not recommend a model or formula for integrating the newly identified risks into the operational risk capital requirements. In this context we assess the academic and practitioners' understanding of the frequency-severity approach, Bayesian estimation techniques, scenario analysis and risk accounting based on risk units, and how they could support the modelling of operational risks that are IT based. Developing an internal model only for the operational risk capital requirement has proved, so far, costly and not necessarily beneficial for the local insurers. As the IT component will play a key role in the future of the insurance industry, the result of this analysis provides a specific approach to operational risk modelling that can be implemented in the context of Solvency II, in the particular situation when (internal or external) operational risk databases are scarce or not available.
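
    One of the approaches mentioned above, the frequency-severity approach, is often illustrated with a compound Poisson-lognormal Monte Carlo simulation and a 99.5% quantile of the simulated annual loss; the sketch below uses invented parameters and is not calibrated to any insurer or to Norm 6/2015.

```python
import numpy as np

# Frequency-severity simulation of an IT operational risk class:
# yearly loss = sum of N lognormal severities, N ~ Poisson(lambda).
rng = np.random.default_rng(42)
lam = 4.0              # expected number of IT incidents per year (assumed)
mu, sigma = 10.0, 1.2  # lognormal severity parameters, log-EUR (assumed)
years = 100_000

annual_losses = np.empty(years)
for i in range(years):
    n = rng.poisson(lam)
    annual_losses[i] = rng.lognormal(mu, sigma, n).sum() if n else 0.0

# Solvency II style capital indication: 99.5% Value-at-Risk of the annual
# loss in excess of its expectation.
var_995 = np.quantile(annual_losses, 0.995)
scr_indication = var_995 - annual_losses.mean()
print(f"99.5% VaR ~ {var_995:,.0f}, capital indication ~ {scr_indication:,.0f}")
```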

  11. How can social network analysis contribute to social behavior research in applied ethology?

    Science.gov (United States)

    Makagon, Maja M; McCowan, Brenda; Mench, Joy A

    2012-05-01

    Social network analysis is increasingly used by behavioral ecologists and primatologists to describe the patterns and quality of interactions among individuals. We provide an overview of this methodology, with examples illustrating how it can be used to study social behavior in applied contexts. Like most kinds of social interaction analyses, social network analysis provides information about direct relationships (e.g. dominant-subordinate relationships). However, it also generates a more global model of social organization that determines how individual patterns of social interaction relate to individual and group characteristics. A particular strength of this approach is that it provides standardized mathematical methods for calculating metrics of sociality across levels of social organization, from the population and group levels to the individual level. At the group level these metrics can be used to track changes in social network structures over time, evaluate the effect of the environment on social network structure, or compare social structures across groups, populations or species. At the individual level, the metrics allow quantification of the heterogeneity of social experience within groups and identification of individuals who may play especially important roles in maintaining social stability or information flow throughout the network.
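
    A minimal sketch of the kind of group- and individual-level metrics referred to above, using networkx on an invented interaction network; the metric choices are illustrative, not those of the cited study.

```python
import networkx as nx

# Toy interaction network: nodes are animals, edge weights count observed
# affiliative interactions (all values invented for illustration).
interactions = [
    ("hen1", "hen2", 12), ("hen1", "hen3", 5), ("hen2", "hen3", 7),
    ("hen3", "hen4", 2), ("hen4", "hen5", 9), ("hen2", "hen5", 1),
]
G = nx.Graph()
G.add_weighted_edges_from(interactions)

# Group-level metrics: overall cohesion of the network.
print("density:", nx.density(G))
print("mean clustering:", nx.average_clustering(G, weight="weight"))

# Individual-level metrics: which animals occupy central roles?
strength = dict(G.degree(weight="weight"))   # total interaction rate per animal
betweenness = nx.betweenness_centrality(G)   # unweighted bridging role
for animal in sorted(G.nodes):
    print(animal, "strength:", strength[animal],
          "betweenness:", round(betweenness[animal], 3))
```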

  12. Analysis of Network Parameters Influencing Performance of Hybrid Multimedia Networks

    Directory of Open Access Journals (Sweden)

    Dominik Kovac

    2013-10-01

    Full Text Available Multimedia networks is an emerging subject that currently attracts the attention of research and industrial communities. This environment provides new entertainment services and business opportunities merged with all well-known network services like VoIP calls or file transfers. Such a heterogeneous system has to be able satisfy all network and end-user requirements which are increasing constantly. Therefore the simulation tools enabling deep analysis in order to find the key performance indicators and factors which influence the overall quality for specific network service the most are highly needed. This paper provides a study on the network parameters like communication technology, routing protocol, QoS mechanism, etc. and their effect on the performance of hybrid multimedia network. The analysis was performed in OPNET Modeler environment and the most interesting results are discussed at the end of this paper

  13. Recent advances on failure and recovery in networks of networks

    International Nuclear Information System (INIS)

    Shekhtman, Louis M.; Danziger, Michael M.; Havlin, Shlomo

    2016-01-01

    Until recently, network science has focused on the properties of single isolated networks that do not interact or depend on other networks. However it has now been recognized that many real networks, such as power grids, transportation systems, and communication infrastructures, interact with and depend on other networks. Here, we present a review of the framework developed in recent years for studying the vulnerability and recovery of networks composed of interdependent networks. In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This is also the case when some nodes, for example certain people, play a role in two networks, i.e., in a multiplex. Dependency relations may act recursively and can lead to cascades of failures culminating in sudden fragmentation of the system. We review the analytical solutions for the critical threshold and the giant component of a network of n interdependent networks. The general theory and behavior of interdependent networks has many novel features that are not present in classical network theory. Interdependent networks embedded in space are significantly more vulnerable compared to non-embedded networks. In particular, small localized attacks may lead to cascading failures and catastrophic consequences. Finally, when recovery of components is possible, global spontaneous recovery of the networks and hysteresis phenomena occur. The theory developed for this process points to an optimal repairing strategy for a network of networks. Understanding realistic effects present in networks of networks is required in order to move towards determining system vulnerability.
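
    The cascading-failure mechanism described above can be illustrated with a toy simulation of two interdependent random networks, iterating the mutual giant component until no further nodes fail; network sizes, mean degree and attack fraction are assumptions for illustration.

```python
import random
import networkx as nx

# Minimal cascade on two interdependent Erdos-Renyi networks: node i in A
# depends on node i in B and vice versa. After an initial attack on A,
# nodes outside the giant component of their own network fail, failures
# propagate to dependent partners, and the process iterates to a fixed point.
N, k_mean, p_attack = 1000, 4.0, 0.3
random.seed(1)
A = nx.gnp_random_graph(N, k_mean / N, seed=1)
B = nx.gnp_random_graph(N, k_mean / N, seed=2)

alive = set(A.nodes) - set(random.sample(list(A.nodes), int(p_attack * N)))

def giant_component(G, nodes):
    H = G.subgraph(nodes)
    if H.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(H), key=len)

while True:
    alive_a = giant_component(A, alive)    # functional nodes in A
    alive_b = giant_component(B, alive_a)  # their surviving partners in B
    if alive_b == alive:
        break
    alive = alive_b

print("fraction in the mutual giant component:", len(alive) / N)
```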

  14. In situ Biofilm Quantification in Bioelectrochemical Systems by using Optical Coherence Tomography.

    Science.gov (United States)

    Molenaar, Sam D; Sleutels, Tom; Pereira, Joao; Iorio, Matteo; Borsje, Casper; Zamudio, Julian A; Fabregat-Santiago, Francisco; Buisman, Cees J N; Ter Heijne, Annemiek

    2018-04-25

    Detailed studies of microbial growth in bioelectrochemical systems (BESs) are required for their suitable design and operation. Here, we report the use of optical coherence tomography (OCT) as a tool for in situ and noninvasive quantification of biofilm growth on electrodes (bioanodes). An experimental platform is designed and described in which transparent electrodes are used to allow real-time, 3D biofilm imaging. The accuracy and precision of the developed method is assessed by relating the OCT results to well-established standards for biofilm quantification (chemical oxygen demand (COD) and total N content) and show high correspondence to these standards. Biofilm thickness observed by OCT ranged between 3 and 90 μm for experimental durations ranging from 1 to 24 days. This translated to growth yields between 38 and 42 mg COD-biomass per g COD-acetate at an anode potential of -0.35 V versus Ag/AgCl. Time-lapse observations of an experimental run performed in duplicate show high reproducibility in obtained microbial growth yield by the developed method. As such, we identify OCT as a powerful tool for conducting in-depth characterizations of microbial growth dynamics in BESs. Additionally, the presented platform allows concomitant application of this method with various optical and electrochemical techniques. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  15. Quantification of viable spray-dried potential probiotic lactobacilli using real-time PCR

    Directory of Open Access Journals (Sweden)

    Radulović Zorica

    2012-01-01

    Full Text Available The basic requirement for probiotic bacteria to be able to exert their expected positive effects is that they are alive. Therefore, appropriate quantification methods are crucial. Bacterial quantification based on nucleic acid detection is increasingly used. Spray-drying (SD) is one of the possibilities to improve the survival of probiotic bacteria against negative environmental effects. The aim of this study was to investigate the survival of spray-dried Lactobacillus plantarum 564 and Lactobacillus paracasei Z-8, and to investigate the impact of SD on some probiotic properties of both tested strains. Besides the plate count technique, the aim was to examine the possibility of using propidium monoazide (PMA) in combination with real-time polymerase chain reaction (PCR) for determining the spray-dried tested strains. The number of intact cells of Lb. plantarum 564 and Lb. paracasei Z-8 was determined by real-time PCR with PMA, and it was similar to the number of investigated strains obtained by the plate count method. Spray-dried Lb. plantarum 564 and Lb. paracasei Z-8 demonstrated very good probiotic ability. It may be concluded that the PMA real-time PCR determination of the viability of probiotic bacteria could complement the plate count method, and SD may be a cost-effective way to produce large quantities of some probiotic cultures. [Project of the Ministry of Science of the Republic of Serbia, No. 046010]

  16. An Inter-Networking Mechanism with Stepwise Synchronization for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Masayuki Murata

    2011-08-01

    Full Text Available To realize the ambient information society, multiple wireless networks deployed in the region and devices carried by users are required to cooperate with each other. Since duty cycles and operational frequencies are different among networks, we need a mechanism to allow networks to efficiently exchange messages. For this purpose, we propose a novel inter-networking mechanism where two networks are synchronized with each other in a moderate manner, which we call stepwise synchronization. With our proposal, to bridge the gap between intrinsic operational frequencies, nodes near the border of networks adjust their operational frequencies in a stepwise fashion based on the pulse-coupled oscillator model as a fundamental theory of synchronization. Through simulation experiments, we show that the communication delay and the energy consumption of border nodes are reduced, which enables wireless sensor networks to communicate longer with each other.
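
    The stepwise adjustment idea can be sketched with a toy update rule in which border nodes nudge their operational frequencies toward each other by a bounded amount per round; the update rule and constants below are illustrative assumptions, not the paper's pulse-coupled oscillator dynamics.

```python
# Stepwise adjustment of operational frequencies at border nodes: each border
# node moves its frequency toward the neighbouring network's value by at most
# a bounded step per round, bridging the gap between intrinsic duty cycles.
f_net_a, f_net_b = 1.0, 4.0   # intrinsic duty-cycle frequencies, Hz (assumed)
border_a, border_b = f_net_a, f_net_b
max_step = 0.25               # largest allowed change per round, Hz (assumed)

for round_no in range(1, 21):
    target = (border_a + border_b) / 2.0          # meet in the middle
    border_a += max(-max_step, min(max_step, target - border_a))
    border_b += max(-max_step, min(max_step, target - border_b))
    if abs(border_a - border_b) < 1e-3:
        print(f"border nodes synchronized after {round_no} rounds "
              f"at ~{border_a:.2f} Hz")
        break
```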

  17. The Analysis of SARDANA HPON Networks Using the HPON Network Configurator

    Directory of Open Access Journals (Sweden)

    Rastislav Roka

    2013-01-01

    Full Text Available NG-PON systems present optical access infrastructures to support various applications of many service providers. In the near future, we can expect NG-PON technologies with different motivations for the development of HPON networks. The HPON is a hybrid passive optical network that utilizes both TDM and WDM multiplexing principles together on the physical layer. The HPON network utilizes similar or slightly revised topologies as TDM-PON architectures. In this second paper, requirements for SARDANA HPON networks are introduced. A main part of the paper is dedicated to the presentation of the HPON network configurator, which allows configuring and analyzing the SARDANA HPON characteristics from the viewpoint of various specific network parameters. Finally, a short introduction to the comparison of the SARDANA and SUCCESS HPON networks based on simulation results is presented.

  18. The Analysis of SUCCESS HPON Networks Using the HPON Network Configurator

    Directory of Open Access Journals (Sweden)

    Rastislav Roka

    2013-01-01

    Full Text Available NG-PON systems present optical access infrastructures to support various applications of many service providers. In the near future, we can expect NG-PON technologies with different motivations for the development of HPON networks. The HPON is a hybrid passive optical network that utilizes both TDM and WDM multiplexing principles together on the physical layer. The HPON network utilizes similar or slightly revised topologies as TDM-PON architectures. In this first paper, design requirements for SUCCESS HPON networks are introduced. A main part of the paper is dedicated to the presentation of the HPON network configurator, which allows configuring and analyzing the SUCCESS HPON characteristics from the viewpoint of various specific network parameters. Finally, a short introduction to the comparison of the SUCCESS and SARDANA HPON networks based on simulation results is presented.

  19. NET: a new framework for the vectorization and examination of network data.

    Science.gov (United States)

    Lasser, Jana; Katifori, Eleni

    2017-01-01

    The analysis of complex networks both in general and in particular as pertaining to real biological systems has been the focus of intense scientific attention in the past and present. In this paper we introduce two tools that provide fast and efficient means for the processing and quantification of biological networks like Drosophila tracheoles or leaf venation patterns: the Network Extraction Tool (NET) to extract data and the Graph-edit-GUI (GeGUI) to visualize and modify networks. NET is especially designed for high-throughput semi-automated analysis of biological datasets containing digital images of networks. The framework starts with the segmentation of the image and then proceeds to vectorization using methodologies from optical character recognition. After a series of steps to clean and improve the quality of the extracted data, the framework produces a graph in which the network is represented only by its nodes and neighborhood relations. The final output contains information about the adjacency matrix of the graph, the width of the edges and the positions of the nodes in space. NET also provides tools for statistical analysis of the network properties, such as the number of nodes or total network length. Other, more complex metrics can be calculated by importing the vectorized network to specialized network analysis packages. GeGUI is designed to facilitate manual correction of non-planar networks as these may contain artifacts or spurious junctions due to branches crossing each other. It is tailored for but not limited to the processing of networks from microscopy images of Drosophila tracheoles. The networks extracted by NET closely approximate the network depicted in the original image. NET is fast, yields reproducible results and is able to capture the full geometry of the network, including curved branches. Additionally GeGUI allows easy handling and visualization of the networks.
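
    Once a network has been vectorized into node positions and width-annotated edges, statistics such as total network length follow directly; the sketch below illustrates this on an invented graph with networkx and is not part of the NET/GeGUI code.

```python
import math
import networkx as nx

# A vectorized network in the style of the output described above: node
# positions in the image plane plus edges annotated with a width (invented).
positions = {0: (0.0, 0.0), 1: (3.0, 4.0), 2: (6.0, 4.0), 3: (3.0, 8.0)}
edges = [(0, 1, 2.0), (1, 2, 1.5), (1, 3, 1.0)]   # (u, v, width)

G = nx.Graph()
for u, v, width in edges:
    length = math.dist(positions[u], positions[v])
    G.add_edge(u, v, width=width, length=length)

total_length = sum(d["length"] for _, _, d in G.edges(data=True))
mean_width = sum(d["width"] for _, _, d in G.edges(data=True)) / G.number_of_edges()
branch_points = [n for n in G.nodes if G.degree(n) >= 3]

print("nodes:", G.number_of_nodes(), "total length:", total_length)
print("mean edge width:", mean_width, "branch points:", branch_points)
```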

  20. Report of the Federal Internetworking Requirements Panel

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-05-31

    The Federal Internetworking Requirements Panel (FIRP) was established by the National Institute of Standards and Technology (NIST) to reassess Federal requirements for open systems networks and to recommend policy on the Government's use of networking standards. The Panel was chartered to recommend actions which the Federal Government can take to address the short and long-term issues of interworking and convergence of networking protocols--particularly the Internet Protocol Suite (IPS) and Open Systems Interconnection (OSI) protocol suite and, when appropriate, proprietary protocols. The Panel was created at the request of the Office of Management and Budget in collaboration with the Federal Networking Council and the Federal Information Resources Management Policy Council. The Panel's membership and charter are contained in an appendix to this report.

  1. Uncertainty quantification in computational fluid dynamics and aircraft engines

    CERN Document Server

    Montomoli, Francesco; D'Ammaro, Antonio; Massini, Michela; Salvadori, Simone

    2015-01-01

    This book introduces novel design techniques developed to increase the safety of aircraft engines. The authors demonstrate how the application of uncertainty methods can overcome problems in the accurate prediction of engine lift, caused by manufacturing error. This in turn ameliorates the difficulty of achieving required safety margins imposed by limits in current design and manufacturing methods. This text shows that even state-of-the-art computational fluid dynamics (CFD) are not able to predict the same performance measured in experiments; CFD methods assume idealised geometries but ideal geometries do not exist, cannot be manufactured and their performance differs from real-world ones. By applying geometrical variations of a few microns, the agreement with experiments improves dramatically, but unfortunately the manufacturing errors in engines or in experiments are unknown. In order to overcome this limitation, uncertainty quantification considers the probability density functions of manufacturing errors...

  2. MR Spectroscopy: Real-Time Quantification of in-vivo MR Spectroscopic data

    OpenAIRE

    Massé, Kunal

    2009-01-01

    In the last two decades, magnetic resonance spectroscopy (MRS) has had an increasing success in biomedical research. This technique has the faculty of discerning several metabolites in human tissue non-invasively and thus offers a multitude of medical applications. In clinical routine, quantification plays a key role in the evaluation of the different chemical elements. The quantification of metabolites characterizing specific pathologies helps physicians establish the patient's diagnosis. E...

  3. Airborne particle monitoring with urban closed-circuit television camera networks and a chromatic technique

    International Nuclear Information System (INIS)

    Kolupula, Y R; Jones, G R; Deakin, A G; Spencer, J W; Aceves-Fernandez, M A

    2010-01-01

    An economic approach for the preliminary assessment of 2–10 µm sized (PM10) airborne particle levels in urban areas is described. It uses existing urban closed-circuit television (CCTV) surveillance camera networks in combination with particle accumulating units and chromatic quantification of polychromatic light scattered by the captured particles. Methods for accommodating extraneous light effects are discussed and test results obtained from real urban sites are presented to illustrate the potential of the approach

  4. Class network routing

    Science.gov (United States)

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.

  5. Dense wavelength division multiplexing devices for metropolitan-area datacom and telecom networks

    Science.gov (United States)

    DeCusatis, Casimer M.; Priest, David G.

    2000-12-01

    Large data processing environments in use today can require multi-gigabyte or terabyte capacity in the data communication infrastructure; these requirements are being driven by storage area networks with access to petabyte databases, new architectures for parallel processing which require high-bandwidth optical links, and rapidly growing network applications such as electronic commerce over the Internet or virtual private networks. These datacom applications require high availability, fault tolerance, security, and the capacity to recover from any single point of failure without relying on traditional SONET-based networking. These requirements, coupled with fiber exhaust in metropolitan areas, are driving the introduction of dense optical wavelength division multiplexing (DWDM) in data communication systems, particularly for large enterprise servers or mainframes. In this paper, we examine the technical requirements for emerging next-generation DWDM systems. Protocols for storage area networks and computer architectures such as Parallel Sysplex are presented, including their fiber bandwidth requirements. We then describe two commercially available DWDM solutions, a first-generation 10-channel system and a recently announced next-generation 32-channel system. Technical requirements, network management and security, fault-tolerant network designs, new network topologies enabled by DWDM, and the role of time division multiplexing in the network are all discussed. Finally, we present a description of testing conducted on these networks and future directions for this technology.

  6. Local AREA networks in advanced nuclear reactors

    International Nuclear Information System (INIS)

    Bicknell, J.; Keats, A.B.

    1984-01-01

    The report assesses Local Area Network Communications with a view to their application in advanced nuclear reactor control and protection systems. Attention is focussed on commercially available techniques and systems for achieving the high reliability and availability required. A basis for evaluating network characteristics in terms of broadband or baseband type, medium, topology, node structure and access method is established. The reliability and availability of networks is then discussed. Several commercial networks are briefly assessed and a distinction made between general purpose networks and those suitable for process control. The communications requirements of nuclear reactor control and protection systems are compared with the facilities provided by current technology

  7. Optimal transport on supply-demand networks.

    Science.gov (United States)

    Chen, Yu-Han; Wang, Bing-Hong; Zhao, Li-Chao; Zhou, Changsong; Zhou, Tao

    2010-06-01

    In the literature, transport networks are usually treated as homogeneous networks, that is, every node has the same function, simultaneously providing and requiring resources. However, some real networks, such as power grids and supply chain networks, show a far different scenario in which nodes are classified into two categories: supply nodes provide some kind of service, while demand nodes require it. In this paper, we propose a general transport model for these supply-demand networks, associated with a criterion to quantify their transport capacities. In a supply-demand network with heterogeneous degree distribution, the transport capacity strongly depends on the locations of the supply nodes. We therefore design a simulated annealing algorithm to find a near-optimal configuration of supply nodes, which remarkably enhances the transport capacity compared with a random configuration and outperforms the degree target algorithm, the betweenness target algorithm, and the greedy method. This work provides a starting point for systematically analyzing and optimizing transport dynamics on supply-demand networks.
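
    A schematic version of the optimization described above: simulated annealing over which nodes act as supply nodes, with total hop distance from demand nodes to their nearest supply node used as a simple stand-in for the paper's transport-capacity criterion; the graph model, cooling schedule and parameters are assumptions.

```python
import math
import random
import networkx as nx

# Simulated annealing over supply-node locations on a scale-free network.
random.seed(3)
G = nx.barabasi_albert_graph(200, 2, seed=3)
n_supply = 10

def cost(supply):
    # Total hop distance from every demand node to its nearest supply node.
    dist = nx.multi_source_dijkstra_path_length(G, supply)
    return sum(d for node, d in dist.items() if node not in supply)

supply = set(random.sample(list(G.nodes), n_supply))
best, best_cost = set(supply), cost(supply)
T = 10.0
for step in range(2000):
    # Propose swapping one supply node for a random non-supply node.
    out_node = random.choice(sorted(supply))
    in_node = random.choice([n for n in G.nodes if n not in supply])
    candidate = (supply - {out_node}) | {in_node}
    delta = cost(candidate) - cost(supply)
    if delta < 0 or random.random() < math.exp(-delta / T):
        supply = candidate
    if cost(supply) < best_cost:
        best, best_cost = set(supply), cost(supply)
    T *= 0.995   # geometric cooling schedule

print("near-optimal supply nodes:", sorted(best), "cost:", best_cost)
```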

  8. Handoff Between a Wireless Local Area Network (WLAN) and a Wide Area Network (UMTS)

    Directory of Open Access Journals (Sweden)

    J. Sánchez–García

    2009-04-01

    Full Text Available With the appearance of wireless data networks with variable coverage, bandwidth and handoff strategies, in addition to the growing need of mobile nodes to freely roam among these networks, the support of an interoperable handoff strategy for hybrid wireless data networks is a requirement that needs to be addressed. The current trend in wireless data networks is to offer multimedia access to mobile users by employing the wireless local area network (WLAN) standard IEEE 802.11 while the user is located indoors; on the other hand, 3rd generation wireless networks (WAN) are being deployed to provide coverage while the user is located outdoors. As a result, the mobile node will require a handoff mechanism to allow the user to roam between WLAN and WAN environments; up to this date several strategies have been proposed in the literature (Sattari et al., 2004; HyoJin, 2007), however, none of these has been standardized to date. To support this interoperability, the mobile node must be equipped with configurable wireless interfaces to support the handoff between the WLAN and the WAN networks. In this work a new algorithm is proposed to allow a mobile node to roam between a wireless local area network (IEEE 802.11) and a WAN base station (UMTS), while employing IP mobility support. The algorithm is implemented in simulation, using the Network Simulator 2.

  9. Functional and nonfunctional testing of ATM networks

    Science.gov (United States)

    Ricardo, Manuel; Ferreira, M. E. P.; Guimaraes, Francisco E.; Mamede, J.; Henriques, M.; da Silva, Jorge A.; Carrapatoso, E.

    1995-02-01

    ATM networks will support new multimedia services that will require new protocols; these services and protocols will need different test strategies and tools. In this paper, the concepts of functional and non-functional testers of ATM networks are discussed, a multimedia service and its requirements are presented, and finally a summary description of an ATM network and of the test tool used to validate it is given.

  10. Neural networks applied to characterize blends containing refined and extra virgin olive oils.

    Science.gov (United States)

    Aroca-Santos, Regina; Cancilla, John C; Pariente, Enrique S; Torrecilla, José S

    2016-12-01

    The identification and quantification of binary blends of refined olive oil with four different extra virgin olive oil (EVOO) varietals (Picual, Cornicabra, Hojiblanca and Arbequina) was carried out with a simple method based on combining visible spectroscopy and non-linear artificial neural networks (ANNs). The data obtained from the spectroscopic analysis was treated and prepared to be used as independent variables for a multilayer perceptron (MLP) model. The model was able to perfectly classify the EVOO varietal (100% identification rate), whereas the error for the quantification of EVOO in the mixtures containing between 0% and 20% of refined olive oil, in terms of the mean prediction error (MPE), was 2.14%. These results turn visible spectroscopy and MLP models into a trustworthy, user-friendly, low-cost technique which can be implemented on-line to characterize olive oil mixtures containing refined olive oil and EVOOs. Copyright © 2016 Elsevier B.V. All rights reserved.
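
    A rough sketch of the classification-plus-quantification setup described above, using scikit-learn MLPs on synthetic stand-in spectra; the spectra, network sizes and data split are invented, so the results will not match the reported figures.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for visible spectra of EVOO/refined-oil blends:
# 4 varietals x 21 blend levels (0-20% refined oil), 40 wavelengths each.
rng = np.random.default_rng(0)
varietals = ["Picual", "Cornicabra", "Hojiblanca", "Arbequina"]
X, y_class, y_blend = [], [], []
for v_idx, _ in enumerate(varietals):
    base = np.sin(np.linspace(0, np.pi, 40) + 0.4 * v_idx)  # varietal signature
    for refined_pct in np.linspace(0, 20, 21):
        spectrum = base * (1 - refined_pct / 100) + rng.normal(0, 0.01, 40)
        X.append(spectrum)
        y_class.append(v_idx)
        y_blend.append(refined_pct)
X, y_class, y_blend = np.array(X), np.array(y_class), np.array(y_blend)

Xtr, Xte, ctr, cte, btr, bte = train_test_split(
    X, y_class, y_blend, test_size=0.3, random_state=0)

# One MLP identifies the varietal, another estimates the refined-oil fraction.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                    random_state=0).fit(Xtr, ctr)
reg = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                   random_state=0).fit(Xtr, btr)

print("varietal identification rate:", clf.score(Xte, cte))
mpe = np.mean(np.abs(reg.predict(Xte) - bte))
print("mean absolute error in refined-oil %:", round(mpe, 2))
```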

  11. Underage Children and Social Networking

    Science.gov (United States)

    Weeden, Shalynn; Cooke, Bethany; McVey, Michael

    2013-01-01

    Despite minimum age requirements for joining popular social networking services such as Facebook, many students misrepresent their real ages and join as active participants in the networks. This descriptive study examines the use of social networking services (SNSs) by children under the age of 13. The researchers surveyed a sample of 199…

  12. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  13. User Requirements for Technology to Assist Aging in Place: Qualitative Study of Older People and Their Informal Support Networks.

    Science.gov (United States)

    Elers, Phoebe; Hunter, Inga; Whiddett, Dick; Lockhart, Caroline; Guesgen, Hans; Singh, Amardeep

    2018-06-06

    Informal support is essential for enabling many older people to age in place. However, there is limited research examining the information needs of older adults' informal support networks and how these could be met through home monitoring and information and communication technologies. The purpose of this study was to investigate how technologies that connect older adults to their informal and formal support networks could assist aging in place and enhance older adults' health and well-being. Semistructured interviews were conducted with 10 older adults and a total of 31 members of their self-identified informal support networks. They were asked questions about their information needs and how technology could support the older adults to age in place. The interviews were transcribed and thematically analyzed. The analysis identified three overarching themes: (1) the social enablers theme, which outlined how timing, informal support networks, and safety concerns assist the older adults' uptake of technology, (2) the technology concerns theme, which outlined concerns about cost, usability, information security and privacy, and technology superseding face-to-face contact, and (3) the information desired theme, which outlined what information should be collected and transferred and who should make decisions about this. Older adults and their informal support networks may be receptive to technology that monitors older adults within the home if it enables aging in place for longer. However, cost, privacy, security, and usability barriers would need to be considered and the system should be individualizable to older adults' changing needs. The user requirements identified from this study and described in this paper have informed the development of a technology that is currently being prototyped. ©Phoebe Elers, Inga Hunter, Dick Whiddett, Caroline Lockhart, Hans Guesgen, Amardeep Singh. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 06.06.2018.

  14. Swift Quantification of Fenofibrate and Tiemonium methylsulfate Active Ingredients in Solid Drugs Using Particle Induced X-Ray Emission

    International Nuclear Information System (INIS)

    Bejjani, A.; Nsouli, B.; Zahraman, K.; Assi, S.; Younes, Gh.; Yazbi, F.

    2011-01-01

    The quantification of active ingredients (AI) in drugs is a crucial and important step in the drug quality control process. It is usually performed using wet chemical techniques like LC-MS, UV spectrophotometry and other appropriate organic analytical methods. However, if the active ingredient contains specific heteroatoms (F, S, Cl), elemental IBA techniques like PIXE and PIGE, using a small tandem accelerator of 1-2 MV, can be explored for molecular quantification. IBA techniques permit the analysis of the sample in solid form, without any laborious sample preparation. In this work, we demonstrate the ability of the Thick Target PIXE technique for rapid and accurate quantification of both low and high concentrations of active ingredients in different commercial drugs. Fenofibrate, a chlorinated active ingredient present in high amounts in two different commercial drugs, was quantified using the relative approach with an external standard. On the other hand, Tiemonium methylsulfate, which exists in relatively low amounts in commercial drugs, was quantified using the GUPIX simulation code (absolute quantification). The experimental aspects related to the quantification validity (use of external standards, absolute quantification, matrix effects, ...) are presented and discussed. (author)
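
    The relative (external standard) approach mentioned for fenofibrate reduces, in its simplest form, to scaling the standard's known concentration by the ratio of charge-normalized characteristic X-ray yields; the counts, charges and standard composition below are invented, and matrix corrections (handled by codes such as GUPIX) are omitted.

```python
# Relative quantification against an external standard: compare the Cl K-alpha
# yield of the drug tablet with that of a standard of known chlorine content.
cl_counts_standard = 48500.0        # Cl K-alpha counts from the standard (invented)
cl_mass_fraction_standard = 0.085   # known Cl mass fraction of the standard (invented)
charge_standard = 2.0               # integrated beam charge, uC (invented)

cl_counts_sample = 30200.0
charge_sample = 2.0

# Normalize by collected charge, then scale by the standard's Cl content.
yield_standard = cl_counts_standard / charge_standard
yield_sample = cl_counts_sample / charge_sample
cl_mass_fraction_sample = cl_mass_fraction_standard * yield_sample / yield_standard

# Convert elemental Cl to the active ingredient via its stoichiometry:
# fenofibrate (C20H21ClO4) contains one Cl atom, about 9.8% Cl by mass.
fenofibrate_fraction = cl_mass_fraction_sample / 0.098
print(f"estimated fenofibrate mass fraction ~ {fenofibrate_fraction:.2%}")
```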

  15. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although superposition has occupied a central and pivotal place ever since the inception of quantum mechanics a century ago, rigorous and systematic studies of the quantification issue have attracted significant interest only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, under Grant No. 2008DP173182

  16. WIRELESS SENSOR NETWORKS – ARCHITECTURE, SECURITY REQUIREMENTS, SECURITY THREATS AND ITS COUNTERMEASURES

    OpenAIRE

    Ranjit Panigrahi; Kalpana Sharma; M.K. Ghose

    2013-01-01

    Wireless Sensor Network (WSN) has a huge range of applications such as battlefield, surveillance, emergency rescue operation and smart home technology etc. Apart from its inherent constraints such as limited memory and energy resources, when deployed in hostile environmental conditions, the sensor nodes are vulnerable to physical capture and other security constraints. These constraints put security as a major challenge for the researchers in the field of computer networking. T...

  17. Virtualized cognitive network architecture for 5G cellular networks

    KAUST Repository

    Elsawy, Hesham

    2015-07-17

    Cellular networks have preserved an application agnostic and base station (BS) centric architecture for decades. Network functionalities (e.g. user association) are decided and performed regardless of the underlying application (e.g. automation, tactile Internet, online gaming, multimedia). Such an ossified architecture imposes several hurdles against achieving the ambitious metrics of next generation cellular systems. This article first highlights the features and drawbacks of such architectural ossification. Then the article proposes a virtualized and cognitive network architecture, wherein network functionalities are implemented via software instances in the cloud, and the underlying architecture can adapt to the application of interest as well as to changes in channels and traffic conditions. The adaptation is done in terms of the network topology by manipulating connectivities and steering traffic via different paths, so as to attain the applications' requirements and network design objectives. The article presents cognitive strategies to implement some of the classical network functionalities, along with their related implementation challenges. The article further presents a case study illustrating the performance improvement of the proposed architecture as compared to conventional cellular networks, both in terms of outage probability and handover rate.

  18. Co-operative intra-protein structural response due to protein-protein complexation revealed through thermodynamic quantification: study of MDM2-p53 binding.

    Science.gov (United States)

    Samanta, Sudipta; Mukherjee, Sanchita

    2017-10-01

    The p53 protein activation protects the organism from propagation of cells with damaged DNA having oncogenic mutations. In normal cells, the activity of p53 is controlled by interaction with MDM2. The well understood p53-MDM2 interaction facilitates the design of ligands that could potentially disrupt or prevent the complexation, which has emerged as an important objective for cancer therapy. However, thermodynamic quantification of the p53-peptide induced structural changes of the MDM2 protein remains an area to be explored. This study attempts to understand the conformational free energy and entropy costs due to this complex formation from the histograms of dihedral angles generated from molecular dynamics simulations. Residue-specific quantification illustrates that hydrophobic residues of the protein contribute the most to the conformational thermodynamic changes. Thermodynamic quantification of structural changes of the protein reveals that p53 binding provides a source of inter-element cooperativity among the protein secondary structural elements, where the most affected structural elements (α2 and α4), found at the binding site of the protein, affect faraway structural elements (β1 and Loop1) of the protein. The communication perhaps involves water-mediated hydrogen bonded network formation. Further, we infer that for the inhibitory F19A mutation of p53, although Phe19 is important in the recognition process, it makes a less prominent contribution to the stability of the complex. Collectively, this study provides a vivid microscopic understanding of the interaction within the protein complex along with exploring mutation sites, which will contribute further to engineering the protein function and binding affinity.

  20. Noninvasive Quantification of Pancreatic Fat in Humans

    OpenAIRE

    Lingvay, Ildiko; Esser, Victoria; Legendre, Jaime L.; Price, Angela L.; Wertz, Kristen M.; Adams-Huet, Beverley; Zhang, Song; Unger, Roger H.; Szczepaniak, Lidia S.

    2009-01-01

    Objective: To validate magnetic resonance spectroscopy (MRS) as a tool for non-invasive quantification of pancreatic triglyceride (TG) content and to measure the pancreatic TG content in a diverse human population with a wide range of body mass index (BMI) and glucose control.

  1. 15 CFR 990.52 - Injury assessment-quantification.

    Science.gov (United States)

    2010-01-01

    ... (Continued) NATIONAL OCEANIC AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE OIL POLLUTION ACT..., trustees must quantify the degree, and spatial and temporal extent of such injuries relative to baseline. (b) Quantification approaches. Trustees may quantify injuries in terms of: (1) The degree, and...

  2. Future Network Architectures

    DEFF Research Database (Denmark)

    Wessing, Henrik; Bozorgebrahimi, Kurosh; Belter, Bartosz

    2015-01-01

    This study identifies key requirements for NRENs towards future network architectures that become apparent as users become more mobile and have increased expectations in terms of availability of data. In addition, cost saving requirements call for federated use of, in particular, the optical...

  3. Network-assisted crop systems genetics: network inference and integrative analysis.

    Science.gov (United States)

    Lee, Tak; Kim, Hyojin; Lee, Insuk

    2015-04-01

    Although next-generation sequencing (NGS) technology has enabled the decoding of many crop species' genomes, most of the underlying genetic components for economically important crop traits remain to be determined. Network approaches have proven useful for the study of the reference plant, Arabidopsis thaliana, and the success of network-based crop genetics will also require the availability of genome-scale functional networks for crop species. In this review, we discuss how to construct functional networks and elucidate a holistic view of a crop system. The crop gene network can then be used for gene prioritization and for the analysis of resequencing-based genome-wide association study (GWAS) data, the amount of which will grow rapidly in the field of crop science in the coming years. Copyright © 2015 Elsevier Ltd. All rights reserved.
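
    The review does not prescribe a particular prioritization algorithm; as a generic illustration of how a functional network can rank candidate genes, the sketch below scores genes by simple guilt-by-association, i.e. by the summed weights of their links to known trait-associated seed genes. The network, gene names and weights are invented.

      import networkx as nx

      # Hypothetical weighted functional network (edge weight = confidence of functional link).
      G = nx.Graph()
      G.add_weighted_edges_from([
          ("GeneA", "GeneB", 0.9), ("GeneB", "GeneC", 0.4),
          ("GeneC", "GeneD", 0.7), ("GeneA", "GeneE", 0.2),
          ("GeneD", "GeneE", 0.5),
      ])

      seeds = {"GeneA", "GeneD"}   # genes already linked to the trait (e.g. from GWAS)

      def guilt_by_association(graph, seed_genes):
          """Score non-seed genes by the total edge weight connecting them to seeds."""
          scores = {}
          for node in graph.nodes:
              if node in seed_genes:
                  continue
              scores[node] = sum(graph[node][nbr]["weight"]
                                 for nbr in graph.neighbors(node) if nbr in seed_genes)
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      for gene, score in guilt_by_association(G, seeds):
          print(f"{gene}\t{score:.2f}")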

  4. Voltammetric Quantification of Paraquat and Glyphosate in Surface Waters

    Directory of Open Access Journals (Sweden)

    William Roberto Alza-Camacho

    2016-09-01

    Full Text Available The indiscriminate use of pesticides on crops has a negative environmental impact that affects organisms, soil and water resources, which are essential for life. It is therefore necessary to evaluate the residual effect of these substances in water sources. A simple, affordable and accessible electrochemical method for Paraquat and Glyphosate quantification in water was developed. The study was conducted using a Britton-Robinson buffer solution as the supporting electrolyte, a glassy carbon working electrode, Ag/AgCl as the reference electrode, and platinum as the auxiliary electrode. Differential pulse voltammetry (DPV) methods for both compounds were validated. The methods showed linearity with correlation coefficients of 0.9949 and 0.9919, and the limits of detection and quantification were 130 and 190 mg/L for Paraquat and 40 and 50 mg/L for Glyphosate. Comparison with the reference method showed that the electrochemical method provides superior results in the quantification of these analytes. In the samples tested, Paraquat concentrations ranged from 0.011 to 1.572 mg/L and Glyphosate concentrations from 0.201 to 2.777 mg/L, indicating that these compounds are present in water sources and may be causing serious problems for human health.
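
    The detection and quantification limits quoted above are conventionally derived from the calibration curve; one common convention takes LOD = 3.3·σ/S and LOQ = 10·σ/S, with σ the standard deviation of the regression residuals and S the slope. The sketch below applies that convention to invented calibration data, not to the paper's measurements.

      import numpy as np

      # Hypothetical DPV calibration: concentration (mg/L) vs. peak current (uA).
      conc   = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
      signal = np.array([0.02, 0.55, 1.08, 2.11, 4.25, 8.40])

      slope, intercept = np.polyfit(conc, signal, 1)
      residuals = signal - (slope * conc + intercept)
      sigma = residuals.std(ddof=2)          # two fitted parameters

      lod = 3.3 * sigma / slope
      loq = 10.0 * sigma / slope
      r = np.corrcoef(conc, signal)[0, 1]
      print(f"r = {r:.4f}, LOD = {lod:.3f} mg/L, LOQ = {loq:.3f} mg/L")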

  5. HPLC Quantification of astaxanthin and canthaxanthin in Salmonidae eggs.

    Science.gov (United States)

    Tzanova, Milena; Argirova, Mariana; Atanasov, Vasil

    2017-04-01

    Astaxanthin and canthaxanthin are naturally occurring antioxidants referred to as xanthophylls. They are used as food additives in fish farms to improve the organoleptic qualities of salmonid products and to prevent reproductive diseases. This study reports the development and single-laboratory validation of a rapid method for quantification of astaxanthin and canthaxanthin in eggs of rainbow trout (Oncorhynchus mykiss) and brook trout (Salvelinus fontinalis M.). An advantage of the proposed method is the perfect combination of selective extraction of the xanthophylls and analysis of the extract by high-performance liquid chromatography and photodiode array detection. The method validation was carried out in terms of linearity, accuracy, precision, recovery and limits of detection and quantification. The method was applied for simultaneous quantification of the two xanthophylls in eggs of rainbow trout and brook trout after their selective extraction. The results show that astaxanthin accumulations in salmonid fish eggs are larger than those of canthaxanthin. As the levels of these two xanthophylls affect fish fertility, this method can be used to improve the nutritional quality and to minimize the occurrence of the M74 syndrome in fish populations. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Nuclear and mitochondrial DNA quantification of various forensic materials.

    Science.gov (United States)

    Andréasson, H; Nilsson, M; Budowle, B; Lundberg, H; Allen, M

    2006-12-01

    Due to the different types and quality of forensic evidence materials, their DNA content can vary substantially, and particularly low quantities can impact the results of an identification analysis. In this study, the quantity of mitochondrial and nuclear DNA was determined in a variety of materials using a previously described real-time PCR method. DNA quantification in the roots and distal sections of plucked and shed head hairs revealed large variations in DNA content, particularly between the root and the shaft of plucked hairs. Large intra- and inter-individual variations were also found among hairs. In addition, DNA content was estimated in samples collected from fingerprints and accessories. The quantification of DNA on various items also displayed large variations, with some materials containing large amounts of nuclear DNA, while no detectable nuclear DNA and only limited amounts of mitochondrial DNA were seen in others. Using this sensitive real-time PCR quantification assay, a better understanding was obtained of the DNA content and variation in commonly analysed forensic evidence materials, which may guide the forensic scientist towards the best molecular biology approach for analysing various forensic evidence materials.
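
    Real-time PCR quantification of this kind is normally read off a standard curve of Ct against log10 of the DNA input. The sketch below shows that back-calculation with invented calibration values; it is a generic illustration, not the cited assay.

      import numpy as np

      # Hypothetical qPCR standard curve: serial dilutions of known DNA input.
      input_pg = np.array([10000, 1000, 100, 10, 1], dtype=float)   # picograms
      ct       = np.array([18.1, 21.5, 24.9, 28.3, 31.8])           # measured Ct values

      slope, intercept = np.polyfit(np.log10(input_pg), ct, 1)
      efficiency = 10 ** (-1.0 / slope) - 1.0          # ideal slope ~ -3.32 -> ~100 %

      def quantify(ct_sample):
          """Convert a sample Ct back to DNA quantity (pg) via the standard curve."""
          return 10 ** ((ct_sample - intercept) / slope)

      print(f"PCR efficiency: {efficiency:.1%}")
      print(f"Sample at Ct 30.2 -> {quantify(30.2):.2f} pg")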

  7. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
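
    For readers unfamiliar with the setting, the sketch below shows one common way model uncertainty enters an ensemble data assimilation scheme: a stochastic ensemble Kalman filter in which model error is represented by additive perturbations drawn from an assumed covariance Q. The toy linear model, the observation operator and all numbers are illustrative and are not the author's methods.

      import numpy as np

      rng = np.random.default_rng(1)
      n_ens, n_state = 50, 2

      # Toy linear model x_{k+1} = M x_k with assumed additive model error ~ N(0, Q).
      M = np.array([[1.0, 0.1], [0.0, 0.95]])
      Q = 0.05 * np.eye(n_state)                     # model-error covariance (assumed)
      H = np.array([[1.0, 0.0]])                     # observe first state component only
      R = np.array([[0.1]])                          # observation-error covariance

      ens = rng.normal(0.0, 1.0, size=(n_ens, n_state))     # initial ensemble

      def enkf_step(ens, y_obs):
          # Forecast: propagate each member and add a sampled model-error term.
          ens = ens @ M.T + rng.multivariate_normal(np.zeros(n_state), Q, size=n_ens)
          # Analysis: perturbed-observation EnKF update.
          P = np.cov(ens.T)                                  # forecast error covariance
          K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)       # Kalman gain
          y_pert = y_obs + rng.multivariate_normal(np.zeros(1), R, size=n_ens)
          return ens + (y_pert - ens @ H.T) @ K.T

      for y in [0.8, 0.9, 1.1]:                              # synthetic observations
          ens = enkf_step(ens, np.array([y]))
      print("analysis mean:", ens.mean(axis=0))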

  8. Occipital and occipital "plus" epilepsies: A study of involved epileptogenic networks through SEEG quantification.

    Science.gov (United States)

    Marchi, Angela; Bonini, Francesca; Lagarde, Stanislas; McGonigal, Aileen; Gavaret, Martine; Scavarda, Didier; Carron, Romain; Aubert, Sandrine; Villeneuve, Nathalie; Médina Villalon, Samuel; Bénar, Christian; Trebuchon, Agnes; Bartolomei, Fabrice

    2016-09-01

    Compared with temporal or frontal lobe epilepsies, the occipital lobe epilepsies (OLE) remain poorly characterized. In this study, we aimed at classifying the ictal networks involving OLE and investigated clinical features of the OLE network subtypes. We studied 194 seizures from 29 consecutive patients presenting with OLE and investigated by stereoelectroencephalography (SEEG). Epileptogenicity of occipital and extraoccipital regions was quantified according to the 'epileptogenicity index' (EI) method. We found that 79% of patients showed widespread epileptogenic zone organization, involving parietal or temporal regions in addition to the occipital lobe. Two main groups of epileptogenic zone organization within occipital lobe seizures were identified: a pure occipital group and an occipital "plus" group, the latter including two further subgroups, occipitotemporal and occipitoparietal. In 29% of patients, the epileptogenic zone was found to have a bilateral organization. The most epileptogenic structure was the fusiform gyrus (mean EI: 0.53). Surgery was proposed in 18/29 patients, leading to seizure freedom in 55% (Engel Class I). Results suggest that, in patient candidates for surgery, the majority of cases are characterized by complex organization of the EZ, corresponding to the occipital plus group. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. 2D histomorphometric quantification from 3D computerized tomography

    International Nuclear Information System (INIS)

    Lima, Inaya; Oliveira, Luis Fernando de; Lopes, Ricardo T.; Jesus, Edgar Francisco O. de; Alves, Jose Marcos

    2002-01-01

    In the present article, preliminary results are presented showing the application of the three-dimensional computerized microtomography technique (3D-μCT) to bone tissue characterization through histomorphometric quantification based on stereologic concepts. Two samples of human bone were prepared and submitted to the tomographic system, a radiographic system with a microfocus X-ray tube. Through the three steps of acquisition, reconstruction and quantification, it was possible to obtain good results that are coherent with the literature data. The next step is to compare these results with those obtained by the conventional method, that is, conventional histomorphometry. (author)
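
    As an example of the stereology-based indices such reconstructions feed into, the sketch below computes the bone volume fraction (BV/TV) and a crude voxel-face estimate of bone surface from a segmented volume. The synthetic binary volume and the voxel size are stand-ins for real μCT data, and the index names follow common usage rather than this paper.

      import numpy as np

      rng = np.random.default_rng(2)

      # Synthetic segmented micro-CT volume: 1 = bone voxel, 0 = background/marrow.
      volume = (rng.random((64, 64, 64)) > 0.7).astype(np.uint8)
      voxel_size_mm = 0.014                      # assumed isotropic voxel size

      bv = volume.sum()                          # bone voxels
      tv = volume.size                           # total voxels
      bv_tv = bv / tv                            # bone volume fraction, BV/TV

      # Crude surface estimate: count bone-voxel faces exposed to background.
      padded = np.pad(volume, 1)
      faces = 0
      for axis in range(3):
          for shift in (-1, 1):
              neighbour = np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
              faces += np.logical_and(volume == 1, neighbour == 0).sum()
      bs = faces * voxel_size_mm ** 2            # bone surface (mm^2), voxel-face approximation

      print(f"BV/TV = {bv_tv:.3f}, approximate BS = {bs:.1f} mm^2")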

  10. Socially Aware Heterogeneous Wireless Networks.

    Science.gov (United States)

    Kosmides, Pavlos; Adamopoulou, Evgenia; Demestichas, Konstantinos; Theologou, Michael; Anagnostou, Miltiades; Rouskas, Angelos

    2015-06-11

    The development of smart cities has been the epicentre of many researchers' efforts during the past decade. One of the key requirements for smart city networks is mobility, which is why stable, reliable and high-quality wireless communications are needed in order to connect people and devices. Most research efforts so far have used different kinds of wireless and sensor networks, making interoperability rather difficult to accomplish in smart cities. One common solution proposed in the recent literature is the use of software-defined networks (SDNs) to enhance interoperability among the various heterogeneous wireless networks. In addition, SDNs can take advantage of the data retrieved from available sensors and use them as part of the intelligent decision-making process conducted during the resource allocation procedure. In this paper, we propose an architecture combining heterogeneous wireless networks with social networks using SDNs. Specifically, we exploit the information retrieved from location-based social networks regarding users' locations and attempt to predict areas that will be crowded by using specially designed machine learning techniques. By recognizing potentially crowded areas, we can provide mobile operators with recommendations about areas requiring data cell activation or deactivation.
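
    The specially designed techniques of the paper are not reproduced here; as a generic stand-in, the sketch below trains a small classifier on check-in-style features from a location-based social network to flag grid cells likely to become crowded. The feature names, labels and data are fabricated for illustration.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      n_cells = 2000

      # Fabricated per-grid-cell features derived from location-based social network data.
      X = np.column_stack([
          rng.poisson(20, n_cells),          # check-ins in the last hour
          rng.poisson(5, n_cells),           # unique venues mentioned
          rng.integers(0, 24, n_cells),      # hour of day
      ])
      # Fabricated label: "crowded next hour", loosely driven by recent check-ins.
      y = (X[:, 0] + rng.normal(0, 5, n_cells) > 30).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
      print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")

      # Cells predicted as crowded could be reported to the operator as candidates
      # for cell activation; the remainder as candidates for deactivation.
      flags = clf.predict(X_te)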

  11. Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package

    Science.gov (United States)

    Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen

    2016-04-01

    We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
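
    To make the recurrence-network idea concrete without quoting pyunicorn's own API (whose exact call signatures are not reproduced here), the following first-principles sketch builds an ε-recurrence network from a scalar time series: delay-embedded states are the nodes, and two states are linked when their phase-space distance falls below ε. The embedding parameters, threshold and test series are illustrative.

      import numpy as np

      def delay_embed(x, dim=3, tau=5):
          """Time-delay embedding of a scalar series into dim-dimensional state vectors."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

      def recurrence_network(x, dim=3, tau=5, eps=0.5):
          """Adjacency matrix of the epsilon-recurrence network of a time series."""
          states = delay_embed(x, dim, tau)
          dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
          adj = (dists < eps).astype(int)
          np.fill_diagonal(adj, 0)               # no self-loops
          return adj

      # Illustrative series: a noisy sine wave standing in for a climate index.
      t = np.linspace(0, 20 * np.pi, 600)
      x = np.sin(t) + 0.1 * np.random.default_rng(4).normal(size=t.size)

      A = recurrence_network(x, dim=3, tau=5, eps=0.4)
      degree = A.sum(axis=1)
      density = A.sum() / (A.shape[0] * (A.shape[0] - 1))
      print(f"nodes: {A.shape[0]}, edge density: {density:.3f}, mean degree: {degree.mean():.1f}")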

  12. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method, as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which provides potential reasons for poor genotyping results and may indicate which methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving the precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Using networking and communications software in business

    CERN Document Server

    McBride, PK

    2014-01-01

    Using Networking and Communications Software in Business covers the importance of networks in a business firm, the benefits of computer communications within a firm, and the cost-benefit of putting up networks in businesses. The book is divided into six parts. Part I looks into the nature and varieties of networks, networking standards, and network software. Part II discusses the planning of a networked system, which includes analyzing the requirements for the network system, the hardware for the network, and network management. The installation of the network system and the network management

  14. Pricing strategies under heterogeneous service requirements

    NARCIS (Netherlands)

    Mandjes, M.R.H.

    This paper analyzes a communication network, used by customers with heterogeneous service requirements. We investigate priority queueing as a way to establish service differentiation. It is assumed that there is an infinite population of customers, who join the network as long as their utility

  15. Quantification of rice bran oil in oil blends

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, R.; Sharma, H. K.; Sengar, G.

    2012-11-01

    Blends consisting of physically refined rice bran oil (PRBO): sunflower oil (SnF) and PRBO: safflower oil (SAF) in different proportions were analyzed for various physicochemical parameters. The quantification of pure rice bran oil in the blended oils was carried out using different methods, including gas chromatography, HPLC, ultrasonic velocity and methods based on physico-chemical parameters. The physicochemical parameters such as ultrasonic velocity, relative association and acoustic impedance at 2 MHz, iodine value, palmitic acid content and oryzanol content reflected significant changes with increased proportions of PRBO in the blended oils. These parameters were selected as dependent parameters and the % PRBO proportion was selected as the independent parameter. The study revealed that regression equations based on the oryzanol content, palmitic acid composition, ultrasonic velocity, relative association, acoustic impedance, and iodine value can be used for the quantification of rice bran oil in blended oils. The rice bran oil can easily be quantified in the blended oils based on the oryzanol content by HPLC even at a 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level in blended oils, whereas the methods based on ultrasonic velocity, acoustic impedance and relative association showed initial promise in the quantification of rice bran oil. (Author) 23 refs.
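
    In the spirit of the regression equations mentioned above (the paper's actual coefficients are not reproduced), a blend percentage can be back-predicted from measured markers by ordinary least squares; the oryzanol and palmitic-acid calibration values below are invented.

      import numpy as np

      # Invented calibration blends: known % PRBO and two measured markers.
      prbo_pct  = np.array([0, 20, 40, 60, 80, 100], dtype=float)
      oryzanol  = np.array([0.02, 0.31, 0.62, 0.90, 1.22, 1.51])   # % in blend
      palmitic  = np.array([6.1, 8.0, 10.2, 12.1, 14.3, 16.2])     # % of fatty acids

      # Design matrix with intercept; solve prbo_pct ~ b0 + b1*oryzanol + b2*palmitic.
      X = np.column_stack([np.ones_like(oryzanol), oryzanol, palmitic])
      coef, *_ = np.linalg.lstsq(X, prbo_pct, rcond=None)

      def predict_prbo(oryz, palm):
          return coef[0] + coef[1] * oryz + coef[2] * palm

      print(f"coefficients: {np.round(coef, 2)}")
      print(f"unknown blend (oryzanol 0.45 %, palmitic 9.0 %) -> {predict_prbo(0.45, 9.0):.1f} % PRBO")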

  16. Quantification of the Pyrrolizidine Alkaloid Jacobine in Crassocephalum crepidioides by Cation Exchange High-Performance Liquid Chromatography.

    Science.gov (United States)

    Rozhon, Wilfried; Kammermeier, Lukas; Schramm, Sebastian; Towfique, Nayeem; Adebimpe Adedeji, N; Adesola Ajayi, S; Poppenberger, Brigitte

    2018-01-01

    Pyrrolizidine alkaloids (PAs) are secondary plant metabolites with considerable hepatotoxic, tumorigenic and genotoxic potential. For separation, reversed-phase chromatography is commonly used because of its excellent compatibility with detection by mass spectrometry. However, reversed-phase chromatography has a low selectivity for PAs. The objective of this work was to investigate the suitability of cation exchange chromatography for the separation of PAs and to develop a rapid method for quantification of jacobine in Crassocephalum crepidioides that is suitable for the analysis of large sample numbers, as required for mutant screening procedures. We demonstrate that cation exchange chromatography offers excellent selectivity for PAs, allowing their separation from most other plant metabolites. Due to the high selectivity, plant extracts can be analysed directly after simple sample preparation. Detection with UV at 200 nm instead of mass spectrometry can be applied, which makes the method very simple and cost-effective. The recovery rate of the method exceeded 95%, the intra-day and inter-day standard deviations were below 7%, and the limits of detection and quantification were 1 mg/kg and 3 mg/kg, respectively. The developed method is sufficiently sensitive for reproducible detection of jacobine in C. crepidioides. Simple sample preparation and rapid separation allow for quantification of jacobine in plant material in a high-throughput manner. Thus, the method is suitable for genetic screenings and may be applicable to other plant species, for instance Jacobaea maritima. In addition, our results show that C. crepidioides cannot be considered safe for human consumption. Copyright © 2017 John Wiley & Sons, Ltd.
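
    The validation figures quoted (recovery above 95%, intra- and inter-day deviations below 7%) are typically computed from replicate measurements of spiked samples, as in the sketch below; the spike level and all replicate values are fabricated for illustration.

      import numpy as np

      spiked_amount = 50.0                                  # mg/kg jacobine added (assumed)
      # Fabricated replicate measurements on three days (mg/kg recovered).
      replicates = {
          "day1": np.array([48.6, 49.3, 47.9, 48.8, 49.1]),
          "day2": np.array([47.5, 48.9, 48.2, 49.6, 48.0]),
          "day3": np.array([49.0, 47.8, 48.4, 48.9, 49.4]),
      }

      all_values = np.concatenate(list(replicates.values()))
      recovery = all_values.mean() / spiked_amount * 100.0

      # Intra-day RSD: precision within each day; inter-day RSD: precision of the day means.
      intra_rsd = {d: v.std(ddof=1) / v.mean() * 100.0 for d, v in replicates.items()}
      day_means = np.array([v.mean() for v in replicates.values()])
      inter_rsd = day_means.std(ddof=1) / day_means.mean() * 100.0

      print(f"recovery: {recovery:.1f} %")
      print("intra-day RSD (%):", {d: round(r, 2) for d, r in intra_rsd.items()})
      print(f"inter-day RSD: {inter_rsd:.2f} %")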

  17. Requirements and Algorithms for Cooperation of Heterogeneous Radio Access Networks

    DEFF Research Database (Denmark)

    Mihovska, Albena D.; Tragos, Elias; Mino, Emilio

    2009-01-01

    systems. The RRM mechanisms are evaluated for the scenario of intra-RAN and inter-RAN user mobility. The RRM framework incorporates as novelty improved triggering mechanisms, a network-controlled mobility management scheme with policy enforcement on different levels in the RAN architecture, and a distributed...

  18. HPLC for simultaneous quantification of total ceramide, glucosylceramide, and ceramide trihexoside concentrations in plasma

    NARCIS (Netherlands)

    Groener, Johanna E. M.; Poorthuis, Ben J. H. M.; Kuiper, Sijmen; Helmond, Mariette T. J.; Hollak, Carla E. M.; Aerts, Johannes M. F. G.

    2007-01-01

    BACKGROUND: Simple, reproducible assays are needed for the quantification of sphingolipids, ceramide (Cer), and sphingoid bases. We developed an HPLC method for simultaneous quantification of total plasma concentrations of Cer, glucosylceramide (GlcCer), and ceramide trihexoside (CTH). METHODS:

  19. Why Failing Terrorist Groups Persist Revisited: A Social Network Approach to AQIM Network Resilience

    Science.gov (United States)

    2017-12-01

    the approach and methods used in this analysis to organize, analyze, and explore the geospatial, statistical, and social network data...requirements for the degree of MASTER OF SCIENCE IN INFORMATION STRATEGY AND POLITICAL WARFARE from the NAVAL POSTGRADUATE SCHOOL December...research utilizes both descriptive statistics and regression analysis of social network data to explore the changes within the AQIM network 2012

  20. Quantification of coating aging using impedance measurements

    NARCIS (Netherlands)

    Westing, E.P.M. van; Weijde, D.H. van der; Vreijling, M.P.W.; Ferrari, G.M.; Wit, J.H.W. de

    1998-01-01

    This chapter shows the application results of a novel approach to quantify the ageing of organic coatings using impedance measurements. The ageing quantification is based on the typical impedance behaviour of barrier coatings in immersion. This immersion behaviour is used to determine the limiting