WorldWideScience

Sample records for networks requires quantification

  1. Future Home Network Requirements

    DEFF Research Database (Denmark)

    Charbonnier, Benoit; Wessing, Henrik; Lannoo, Bart

    This paper presents the requirements for future Home Area Networks (HAN). Firstly, we discuss the applications and services as well as their requirements. Then, usage scenarios are devised to establish a first specification for the HAN. The main requirements are an increased bandwidth (towards 1...

  2. NP Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Rotman, Lauren [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Tierney, Brian [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)

    2011-08-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. To support SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2011, ESnet and the Office of Nuclear Physics (NP), of the DOE SC, organized a workshop to characterize the networking requirements of the programs funded by NP. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  3. LHCb Online Networking Requirements

    CERN Document Server

    Jost, B

    2003-01-01

    This document describes the networking requirements of the LHCb online installation. It lists quantitative aspects, such as the number of required switch ports, as well as qualitative features of the equipment, such as minimum buffer sizes in switches. The document covers both the data acquisition network and the controls/general-purpose network. While the numbers represent our best current knowledge and are intended to give (in particular) network equipment manufacturers an overview of our needs, this document should not be confused with a market survey questionnaire or a formal tendering document. However, the information contained in this document will be the input to any such document. A preliminary schedule for procurement and installation is also given.

  4. BES Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Biocca, Alan; Carlson, Rich; Chen, Jackie; Cotter, Steve; Tierney, Brian; Dattoria, Vince; Davenport, Jim; Gaenko, Alexander; Kent, Paul; Lamm, Monica; Miller, Stephen; Mundy, Chris; Ndousse, Thomas; Pederson, Mark; Perazzo, Amedeo; Popescu, Razvan; Rouson, Damian; Sekine, Yukiko; Sumpter, Bobby; Dart, Eli; Wang, Cai-Zhuang -Z; Whitelam, Steve; Zurawski, Jason

    2011-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

  5. BES Science Network Requirements

    International Nuclear Information System (INIS)

    Dart, Eli; Tierney, Brian; Biocca, A.; Carlson, R.; Chen, J.; Cotter, S.; Dattoria, V.; Davenport, J.; Gaenko, A.; Kent, P.; Lamm, M.; Miller, S.; Mundy, C.; Ndousse, T.; Pederson, M.; Perazzo, A.; Popescu, R.; Rouson, D.; Sekine, Y.; Sumpter, B.; Wang, C.-Z.; Whitelam, S.; Zurawski, J.

    2011-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

  6. BER Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Alapaty, Kiran; Allen, Ben; Bell, Greg; Benton, David; Brettin, Tom; Canon, Shane; Dart, Eli; Cotter, Steve; Crivelli, Silvia; Carlson, Rich; Dattoria, Vince; Desai, Narayan; Egan, Richard; Tierney, Brian; Goodwin, Ken; Gregurick, Susan; Hicks, Susan; Johnston, Bill; de Jong, Bert; Kleese van Dam, Kerstin; Livny, Miron; Markowitz, Victor; McGraw, Jim; McCord, Raymond; Oehmen, Chris; Regimbal, Kevin; Shipman, Galen; Strand, Gary; Flick, Jeff; Turnbull, Susan; Williams, Dean; Zurawski, Jason

    2010-11-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2010 ESnet and the Office of Biological and Environmental Research, of the DOE Office of Science, organized a workshop to characterize the networking requirements of the science programs funded by BER. The requirements identified at the workshop are summarized and described in more detail in the case studies and the Findings section. A number of common themes emerged from the case studies and workshop discussions. One is that BER science, like many other disciplines, is becoming more and more distributed and collaborative in nature. Another common theme is that data set sizes are exploding. Climate Science in particular is on the verge of needing to manage exabytes of data, and Genomics is on the verge of a huge paradigm shift in the number of sites with sequencers and the amount of sequencer data being generated.

  7. ASCR Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Tierney, Brian

    2009-08-24

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2009 ESnet and the Office of Advanced Scientific Computing Research (ASCR), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by ASCR. The ASCR facilities anticipate significant increases in wide area bandwidth utilization, driven largely by the increased capabilities of computational resources and the wide scope of collaboration that is a hallmark of modern science. Many scientists move data sets between facilities for analysis, and in some cases (for example the Earth System Grid and the Open Science Grid), data distribution is an essential component of the use of ASCR facilities by scientists. Due to the projected growth in wide area data transfer needs, the ASCR supercomputer centers all expect to deploy and use 100 Gigabit per second networking technology for wide area connectivity as soon as that deployment is financially feasible. In addition to the network connectivity that ESnet provides, the ESnet Collaboration Services (ECS) are critical to several science communities. ESnet identity and trust services, such as the DOEGrids certificate authority, are widely used both by the supercomputer centers and by collaborations such as Open Science Grid (OSG) and the Earth System Grid (ESG). Ease of use is a key determinant of the scientific utility of network-based services. Therefore, a key enabling aspect for scientists beneficial use of high

  8. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    Science.gov (United States)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and the one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. In particular, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames and temporal information from plume motion between frames. We build large leak datasets for training and evaluation purposes by collecting about 20 videos (i.e. 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, achieves an accuracy of 58.3%. The integration of the two streams gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including the confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair
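
    The abstract does not give architecture details, so the following is only a minimal sketch of the two-stream idea in PyTorch: one branch sees a single plume frame, the other a stack of optical-flow fields, and their features are concatenated before an eight-way leak-size classifier. All layer sizes, channel counts, and input shapes are hypothetical.

    ```python
    # Minimal two-stream ConvNet sketch for leak-size classification (hypothetical shapes).
    import torch
    import torch.nn as nn

    class StreamCNN(nn.Module):
        """Small CNN shared by both streams; only the input channel count differs."""
        def __init__(self, in_channels):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),
            )
        def forward(self, x):
            return self.features(x).flatten(1)          # (batch, 32)

    class TwoStreamLeakNet(nn.Module):
        def __init__(self, n_classes=8, flow_frames=10):
            super().__init__()
            self.spatial = StreamCNN(in_channels=1)                 # one still plume frame
            self.temporal = StreamCNN(in_channels=2 * flow_frames)  # stacked optical flow (dx, dy)
            self.classifier = nn.Linear(32 + 32, n_classes)         # late fusion by concatenation
        def forward(self, frame, flow):
            return self.classifier(torch.cat([self.spatial(frame), self.temporal(flow)], dim=1))

    # Usage with dummy data: 4 clips, 64x64 crops, 10 optical-flow frames per clip.
    model = TwoStreamLeakNet()
    frame = torch.randn(4, 1, 64, 64)
    flow = torch.randn(4, 20, 64, 64)
    logits = model(frame, flow)                          # (4, 8) scores over leak-size bins
    ```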

  9. Fusion Energy Sciences Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [ESNet, Berkeley, CA (United States); Tierney, Brian [ESNet, Berkeley, CA (United States)

    2012-09-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In December 2011, ESnet and the Office of Fusion Energy Sciences (FES), of the DOE Office of Science (SC), organized a workshop to characterize the networking requirements of the programs funded by FES. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  10. Soil Pore Network Visualisation and Quantification using ImageJ

    DEFF Research Database (Denmark)

    Garbout, Amin; Pajor, Radoslaw; Otten, Wilfred

    Soil is one of the most complex materials on the earth, within which many biological, physical and chemical processes that support life and affect climate change take place. A much more detailed knowledge of the soil system is required to improve our ability to develop soil management strategies to preserve this limited resource. Many of those processes occur at micro scales. For long, our ability to study soils non-destructively at microscopic scales has been limited, but recent developments in the use of X-ray Computed Tomography have offered great opportunities to quantify the 3-D geometry of soil pores. In this study we look at how networks that summarize the geometry of pores in soil are affected by soil structure. One of the objectives is to develop a robust and reproducible image analysis technique to produce quantitative knowledge on soil architecture from high resolution 3D...

  11. Biological and Environmental Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Balaji, V. [Princeton Univ., NJ (United States). Earth Science Grid Federation (ESGF); Boden, Tom [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cowley, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dart, Eli [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Dattoria, Vince [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Desai, Narayan [Argonne National Lab. (ANL), Argonne, IL (United States); Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Foster, Ian [Argonne National Lab. (ANL), Argonne, IL (United States); Goldstone, Robin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gregurick, Susan [U.S. Dept. of Energy, Washington, DC (United States). Biological Systems Science Division; Houghton, John [U.S. Dept. of Energy, Washington, DC (United States). Biological and Environmental Research (BER) Program; Izaurralde, Cesar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnston, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Joseph, Renu [U.S. Dept. of Energy, Washington, DC (United States). Climate and Environmental Sciences Division; Kleese-van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lipton, Mary [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Monga, Inder [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Pritchard, Matt [British Atmospheric Data Centre (BADC), Oxon (United Kingdom); Rotman, Lauren [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Strand, Gary [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Stuart, Cory [Argonne National Lab. (ANL), Argonne, IL (United States); Tatusova, Tatiana [National Inst. of Health (NIH), Bethesda, MD (United States); Tierney, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Thomas, Brian [Univ. of California, Berkeley, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zurawski, Jason [Internet2, Washington, DC (United States)

    2013-09-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

  12. HEP Science Network Requirements--Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

    2010-04-27

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009, ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans

  13. HEP Science Network Requirements. Final Report

    International Nuclear Information System (INIS)

    Dart, Eli; Tierney, Brian

    2010-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009, ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The International HEP community has been a leader in data intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity

  14. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering and network bandwidth to the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  15. Network-Based Isoform Quantification with RNA-Seq Data for Cancer Transcriptome Analysis.

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2015-12-01

    High-throughput mRNA sequencing (RNA-Seq) is widely used for transcript quantification of gene isoforms. Since RNA-Seq data alone is often not sufficient to accurately identify the read origins from the isoforms for quantification, we propose to explore protein domain-domain interactions as prior knowledge for integrative analysis with RNA-Seq data. We introduce a Network-based method for RNA-Seq-based Transcript Quantification (Net-RSTQ) to integrate the protein domain-domain interaction network with short read alignments for transcript abundance estimation. Based on our observation that the abundances of the neighboring isoforms by domain-domain interactions in the network are positively correlated, Net-RSTQ models the expression of the neighboring transcripts as Dirichlet priors on the likelihood of the observed read alignments against the transcripts in one gene. The transcript abundances of all the genes are then jointly estimated with alternating optimization of multiple EM problems. In simulation, Net-RSTQ effectively improved isoform transcript quantifications when isoform co-expressions correlate with their interactions. qRT-PCR results on 25 multi-isoform genes in a stem cell line, an ovarian cancer cell line, and a breast cancer cell line also showed that Net-RSTQ estimated more consistent isoform proportions with RNA-Seq data. In the experiments on the RNA-Seq data in The Cancer Genome Atlas (TCGA), the transcript abundances estimated by Net-RSTQ are more informative for patient sample classification of ovarian cancer, breast cancer and lung cancer. All experimental results collectively support that Net-RSTQ is a promising approach for isoform quantification. Net-RSTQ toolbox is available at http://compbio.cs.umn.edu/Net-RSTQ/.
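
    As a rough illustration of the underlying idea (not the authors' Net-RSTQ code), the toy EM below estimates within-gene isoform proportions from a read-compatibility matrix, with Dirichlet pseudocounts standing in for the prior derived from domain-domain interaction neighbours. The compatibility matrix and prior values are invented.

    ```python
    # Toy EM for within-gene transcript proportions with a Dirichlet prior.
    import numpy as np

    def em_isoform_proportions(compat, alpha, n_iter=100):
        """compat[r, t] = 1 if read r aligns to transcript t; alpha = Dirichlet prior counts."""
        n_reads, n_tx = compat.shape
        theta = np.full(n_tx, 1.0 / n_tx)                 # initial proportions
        for _ in range(n_iter):
            # E-step: responsibility of each transcript for each read
            w = compat * theta                            # (n_reads, n_tx)
            w /= w.sum(axis=1, keepdims=True)
            # M-step: MAP update with Dirichlet pseudocounts (prior from network neighbours)
            counts = w.sum(axis=0) + alpha - 1.0
            theta = counts / counts.sum()
        return theta

    # Hypothetical example: 3 isoforms, prior favouring isoform 2 via its interaction neighbours.
    compat = np.array([[1, 1, 0],
                       [0, 1, 1],
                       [0, 1, 0],
                       [1, 0, 1]], dtype=float)
    alpha = np.array([1.0, 3.0, 1.0])
    print(em_isoform_proportions(compat, alpha))
    ```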

  16. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  17. Contemporary Network Proteomics and Its Requirements

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon; Sng, Judy Chia Ghee

    2013-01-01

    The integration of networks with genomics (network genomics) is a familiar field. Conventional network analysis takes advantage of the larger coverage and relative stability of gene expression measurements. Network proteomics on the other hand has to develop further on two critical factors: (1) expanded data coverage and consistency, and (2) suitable reference network libraries, and data mining from them. Concerning (1) we discuss several contemporary themes that can improve data quality, which in turn will boost the outcome of downstream network analysis. For (2), we focus on network analysis developments, specifically, the need for context-specific networks and essential considerations for localized network analysis. PMID:24833333

  18. Belle-II Experiment Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Asner, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Bell, Greg [ESnet; Carlson, Tim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Cowley, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Dart, Eli [ESnet; Erwin, Brock [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Godang, Romulus [Univ. of South Alabama, Mobile, AL (United States); Hara, Takanori [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Johnson, Jerry [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Johnson, Ron [Univ. of Washington, Seattle, WA (United States); Johnston, Bill [ESnet; Dam, Kerstin Kleese-van [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Kaneko, Toshiaki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Kubota, Yoshihiro [NII; Kuhr, Thomas [Karlsruhe Inst. of Technology (KIT) (Germany); McCoy, John [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Miyake, Hideki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Monga, Inder [ESnet; Nakamura, Motonori [NII; Piilonen, Leo [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Pordes, Ruth [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ray, Douglas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Russell, Richard [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schram, Malachi [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schroeder, Jim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Sevior, Martin [Univ. of Melbourne (Australia); Singh, Surya [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Suzuki, Soh [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Sasaki, Takashi [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Williams, Jim [Indiana Univ., Bloomington, IN (United States)

    2013-05-28

    The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab-1 of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB is expected in 2015. In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration's work.

  19. What would dense atmospheric observation networks bring to the quantification of city CO2 emissions?

    Science.gov (United States)

    Wu, Lin; Broquet, Grégoire; Ciais, Philippe; Bellassen, Valentin; Vogel, Felix; Chevallier, Frédéric; Xueref-Remy, Irène; Wang, Yilong

    2016-06-01

    Cities, while covering only a very small portion of the global land surface, directly release to the atmosphere about 44 % of global energy-related CO2, but they are associated with 71-76 % of CO2 emissions from global final energy use. Although many cities have set voluntary climate plans, their CO2 emissions are not evaluated by the monitoring, reporting, and verification (MRV) procedures that play a key role for market- or policy-based mitigation actions. Here we analyze the potential of a monitoring tool that could support the development of such procedures at the city scale. It is based on an atmospheric inversion method that exploits inventory data and continuous atmospheric CO2 concentration measurements from a network of stations within and around cities to estimate city CO2 emissions. This monitoring tool is configured for the quantification of the total and sectoral CO2 emissions in the Paris metropolitan area (~12 million inhabitants and 11.4 TgC emitted in 2010) during the month of January 2011. Its performances are evaluated in terms of uncertainty reduction based on observing system simulation experiments (OSSEs). They are analyzed as a function of the number of sampling sites (measuring at 25 m a.g.l.) and as a function of the network design. The instruments presently used to measure CO2 concentrations at research stations are expensive (typically ~EUR 50k per sensor), which has limited the few current pilot city networks to around 10 sites. Larger theoretical networks are studied here to assess the potential benefit of hypothetical operational lower-cost sensors. The setup of our inversion system is based on a number of diagnostics and assumptions from previous city-scale inversion experiences with real data. We find that, given our assumptions underlying the configuration of the OSSEs, with 10 stations only the uncertainty for the total city CO2 emission during 1 month is significantly reduced by the inversion by ~42 %. It can be further reduced by extending the
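
    The abstract's uncertainty-reduction figures come from OSSEs built on linear-Gaussian inversion; the sketch below shows the standard algebra usually behind such experiments (posterior covariance from a prior covariance, an observation-error covariance, and a Jacobian), with toy dimensions that are not the paper's actual configuration.

    ```python
    # Toy linear-Gaussian inversion: posterior covariance A = (B^-1 + H^T R^-1 H)^-1.
    import numpy as np

    rng = np.random.default_rng(0)
    n_fluxes, n_obs = 6, 40                      # hypothetical: 6 sector/period fluxes, 40 obs

    B = np.diag(np.full(n_fluxes, 0.5**2))       # prior (inventory) error covariance
    R = np.diag(np.full(n_obs, 1.0**2))          # observation + transport error covariance
    H = rng.normal(size=(n_obs, n_fluxes))       # Jacobian: sensitivity of each obs to each flux

    A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)   # posterior covariance

    # Uncertainty reduction on the total emission (sum of fluxes)
    ones = np.ones(n_fluxes)
    prior_sd = np.sqrt(ones @ B @ ones)
    post_sd = np.sqrt(ones @ A @ ones)
    print(f"uncertainty reduction on total: {1 - post_sd / prior_sd:.1%}")
    ```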

  20. Advanced communication and network requirements in Europe

    DEFF Research Database (Denmark)

    Falch, Morten; Enemark, Rasmus

    The report addresses the diffusion of new tele-applications, focusing on potential use and the potential tele-traffic generated as a consequence. The applications investigated are: teleworking, distance learning, research and university networks, applications aimed at SMEs, health networks, a trans-European public administration network, a city information highway, road-traffic management, air traffic control and electronic quotation.

  1. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R.

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general. PMID:26536227
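
    A minimal illustration of the kind of centrality-ordered recovery strategy described above, assuming networkx and a synthetic scale-free graph rather than the Indian Railways data; the resilience measure here is simply the giant-component fraction as failed nodes are restored.

    ```python
    # Compare recovery orders (degree vs. betweenness) on a disrupted synthetic network.
    import networkx as nx

    def recovery_curve(G, failed, order):
        """Fraction of nodes in the giant component as failed nodes are restored in `order`."""
        H = G.copy()
        H.remove_nodes_from(failed)
        curve = []
        for node in order:
            H.add_node(node)
            H.add_edges_from((node, nbr) for nbr in G.neighbors(node) if nbr in H)
            giant = max(nx.connected_components(H), key=len)
            curve.append(len(giant) / G.number_of_nodes())
        return curve

    G = nx.barabasi_albert_graph(200, 2, seed=1)      # stand-in for an infrastructure network
    failed = set(list(G.nodes)[:40])                  # simulated hazard removes 40 nodes

    bc = nx.betweenness_centrality(G)
    by_degree = sorted(failed, key=G.degree, reverse=True)
    by_betweenness = sorted(failed, key=bc.get, reverse=True)

    print("degree-ordered recovery (last 5 steps):     ", recovery_curve(G, failed, by_degree)[-5:])
    print("betweenness-ordered recovery (last 5 steps):", recovery_curve(G, failed, by_betweenness)[-5:])
    ```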

  2. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  3. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Directory of Open Access Journals (Sweden)

    Udit Bhatia

    Full Text Available The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  4. Pore network quantification of sandstones under experimental CO2 injection using image analysis

    Science.gov (United States)

    Berrezueta, Edgar; González-Menéndez, Luís; Ordóñez-Casado, Berta; Olaya, Peter

    2015-04-01

    Automated image identification and quantification of minerals, pores and textures, together with petrographic analysis, can be applied to improve pore system characterization in sedimentary rocks. Our case study is focused on the application of these techniques to study the evolution of the rock pore network subjected to supercritical CO2 injection. We have proposed a Digital Image Analysis (DIA) protocol that guarantees measurement reproducibility and reliability. This can be summarized in the following stages: (i) detailed description of mineralogy and texture (before and after CO2 injection) by optical and scanning electron microscopy (SEM) techniques using thin sections; (ii) adjustment and calibration of DIA tools; (iii) data acquisition protocol based on image capture with different polarization conditions (synchronized movement of polarizers); (iv) study and quantification by DIA that allow (a) identification and isolation of pixels that belong to the same category (minerals vs. pores) in each sample and (b) measurement of changes in the pore network after the samples have been exposed to new conditions (in our case, SC-CO2 injection). Finally, the petrography and the measured data were interpreted using an automated approach. In our applied study, the DIA results highlight the changes observed by SEM and microscopic techniques, which consisted of a porosity increase after CO2 treatment. Other changes were minor: variations in the roughness and roundness of pore edges and in pore aspect ratio, observed in the larger pore population. Additionally, statistical tests of the measured pore parameters were applied to verify that the differences observed between samples before and after CO2 injection were significant.
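
    A minimal sketch of the pixel-classification step described above, assuming scikit-image and synthetic grey-scale images; the paper's actual DIA protocol (polarisation-synchronised capture, calibration, SEM cross-checks) is much richer.

    ```python
    # Segment dark pores by Otsu thresholding and compare porosity before/after treatment.
    import numpy as np
    from skimage import filters, measure

    def pore_statistics(image):
        """Return porosity, pore count and mean pore aspect ratio for a grey-scale image."""
        thresh = filters.threshold_otsu(image)
        pores = image < thresh                              # pores assumed darker than minerals
        porosity = pores.mean()                             # area fraction of pore pixels
        labels = measure.label(pores)
        props = measure.regionprops(labels)
        aspect = [p.major_axis_length / max(p.minor_axis_length, 1e-9) for p in props]
        return porosity, len(props), float(np.mean(aspect)) if aspect else 0.0

    # Synthetic stand-ins for the same sample imaged before and after CO2 treatment.
    rng = np.random.default_rng(0)
    before = np.where(rng.random((256, 256)) < 0.15, 0.2, 0.8) + rng.normal(0, 0.02, (256, 256))
    after = before.copy()
    after[rng.random((256, 256)) < 0.05] = 0.2               # extra dark pixels mimic new pores

    for name, img in [("before", before), ("after", after)]:
        porosity, n_pores, mean_aspect = pore_statistics(img)
        print(f"{name}: porosity={porosity:.3f}, pores={n_pores}, mean aspect ratio={mean_aspect:.2f}")
    ```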

  5. A microsensor array for quantification of lubricant contaminants using a back propagation artificial neural network

    International Nuclear Information System (INIS)

    Zhu, Xiaoliang; Du, Li; Zhe, Jiang; Liu, Bendong

    2016-01-01

    We present a method based on an electrochemical sensor array and a back propagation artificial neural network for detection and quantification of four properties of lubrication oil, namely water (0, 500 ppm, 1000 ppm), total acid number (TAN) (13.1, 13.7, 14.4, 15.6 mg KOH g−1), soot (0, 1%, 2%, 3%) and sulfur content (1.3%, 1.37%, 1.44%, 1.51%). The sensor array, consisting of four micromachined electrochemical sensors, detects the four properties with overlapping sensitivities. A total set of 36 oil samples containing mixtures of water, soot, and sulfuric acid with different concentrations were prepared for testing. The sensor array’s responses were then divided into three sets: a training set (80% of the data), a validation set (10%) and a testing set (10%). Several back propagation artificial neural network architectures were trained with the training and validation sets; one architecture with four input neurons, 50 and 5 neurons in the first and second hidden layer, and four neurons in the output layer was selected. The selected neural network was then tested using the testing data (10%). Test results demonstrated that the developed artificial neural network is able to quantitatively determine the four lubrication properties (water, TAN, soot, and sulfur content) with a maximum prediction error of 18.8%, 6.0%, 6.7%, and 5.4%, respectively, indicating a good match between the target and predicted values. With the developed network, the sensor array could potentially be used for online lubricant oil condition monitoring. (paper)
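
    The reported topology (four inputs, hidden layers of 50 and 5 neurons, four outputs) can be sketched with scikit-learn's MLPRegressor as below. The sensor responses and targets here are synthetic placeholders, not the paper's data, and the original work used its own back-propagation training rather than scikit-learn.

    ```python
    # Sketch of a 4-50-5-4 feed-forward network trained on an 80/10/10 split (synthetic data).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((36, 4))                      # 36 oil samples x 4 sensor responses (synthetic)
    y = rng.random((36, 4))                      # targets: water, TAN, soot, sulfur (scaled 0-1)

    # 80/10/10 split into training, validation and test sets, as in the paper
    X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.2, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

    net = MLPRegressor(hidden_layer_sizes=(50, 5), activation="tanh",
                       solver="lbfgs", max_iter=5000, random_state=0)
    net.fit(X_train, y_train)                    # validation set would guide architecture choice

    rel_err = np.abs(net.predict(X_test) - y_test) / np.maximum(np.abs(y_test), 1e-9)
    print("max relative prediction error per property:", rel_err.max(axis=0))
    ```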

  6. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    Science.gov (United States)

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single-cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.

  7. Quantification of Dynamic Model Validation Metrics Using Uncertainty Propagation from Requirements

    Science.gov (United States)

    Brown, Andrew M.; Peck, Jeffrey A.; Stewart, Eric C.

    2018-01-01

    The Space Launch System, NASA's new large launch vehicle for long range space exploration, is presently in the final design and construction phases, with the first launch scheduled for 2019. A dynamic model of the system has been created and is critical for calculation of interface loads and natural frequencies and mode shapes for guidance, navigation, and control (GNC). Because of the program and schedule constraints, a single modal test of the SLS will be performed while bolted down to the Mobile Launch Pad just before the first launch. A Monte Carlo and optimization scheme will be performed to create thousands of possible models based on given dispersions in model properties and to determine which model best fits the natural frequencies and mode shapes from modal test. However, the question still remains as to whether this model is acceptable for the loads and GNC requirements. An uncertainty propagation and quantification (UP and UQ) technique to develop a quantitative set of validation metrics that is based on the flight requirements has therefore been developed and is discussed in this paper. There has been considerable research on UQ and UP and validation in the literature, but very little on propagating the uncertainties from requirements, so most validation metrics are "rules-of-thumb;" this research seeks to come up with more reason-based metrics. One of the main assumptions used to achieve this task is that the uncertainty in the modeling of the fixed boundary condition is accurate, so therefore that same uncertainty can be used in propagating the fixed-test configuration to the free-free actual configuration. The second main technique applied here is the usage of the limit-state formulation to quantify the final probabilistic parameters and to compare them with the requirements. These techniques are explored with a simple lumped spring-mass system and a simplified SLS model. When completed, it is anticipated that this requirements-based validation
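
    As a hedged illustration of the approach on the simple lumped spring-mass system mentioned above: propagate assumed dispersions in stiffness and mass to the natural frequency by Monte Carlo and evaluate a limit state against a hypothetical frequency requirement. All numbers are invented, not SLS values.

    ```python
    # Monte Carlo uncertainty propagation and limit-state evaluation for a 1-DOF system.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    k = rng.normal(1.0e6, 0.05e6, n)        # stiffness [N/m], assumed 5% dispersion
    m = rng.normal(250.0, 10.0, n)          # mass [kg], assumed 4% dispersion

    f = np.sqrt(k / m) / (2 * np.pi)        # natural frequency of the spring-mass system [Hz]

    f_req = 9.5                             # hypothetical requirement: first mode above 9.5 Hz
    g = f - f_req                           # limit-state function: failure when g < 0
    p_fail = np.mean(g < 0)

    print(f"mean f = {f.mean():.2f} Hz, std = {f.std():.2f} Hz, P(f < {f_req} Hz) = {p_fail:.4f}")
    ```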

  8. Tomorrow's energy needs require intelligent networks

    International Nuclear Information System (INIS)

    Bitsch, R.

    1998-01-01

    With the European wide move towards increased competition and greater deregulation of the energy industry, has come a thrust for greater efficiency and understanding customer needs and external constraints such as the environment. This, in turn, has led to solutions which take advantage of the tremendous developments in information technology and on-line control systems which are described in this paper. Topics include intelligent networks, decentralised energy supplies and decentralised energy management. (UK)

  9. Modeling Irrigation Networks for the Quantification of Potential Energy Recovering: A Case Study

    Directory of Open Access Journals (Sweden)

    Modesto Pérez-Sánchez

    2016-06-01

    Water irrigation systems are required to provide adequate pressure levels in any sort of network. Quite frequently, this requirement is achieved by using pressure reducing valves (PRVs). Nevertheless, the possibility of using hydraulic machines to recover energy instead of PRVs could reduce the energy footprint of the whole system. In this research, a new methodology is proposed to help water managers quantify the potential energy recovery of an irrigation water network with suitable topographic conditions. EPANET has been used to create a model based on probabilities of irrigation and flow distribution in real networks. Knowledge of the flows and pressures in the network is necessary to perform an analysis of economic viability. Using the proposed methodology, a case study has been analyzed in a typical Mediterranean region and the potentially available energy has been estimated. The study quantifies the theoretical energy recoverable if hydraulic machines were installed in the network. In particular, the maximum energy potentially recovered in the system has been estimated at up to 188.23 MWh/year, with a potential saving of non-renewable energy resources (coal and gas) equivalent to 137.4 t of CO2 per year.
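
    For orientation, the recoverable power at a candidate PRV location follows the usual hydropower relation P = ρ g Q H η; the short sketch below applies it with invented flow, excess-head, efficiency and operating-hour values, not the case-study data.

    ```python
    # Back-of-the-envelope estimate of energy recoverable by replacing a PRV with a turbine.
    RHO = 1000.0      # water density [kg/m^3]
    G = 9.81          # gravitational acceleration [m/s^2]

    def recoverable_energy_kwh(flow_m3_s, excess_head_m, efficiency, hours_per_year):
        power_w = RHO * G * flow_m3_s * excess_head_m * efficiency
        return power_w * hours_per_year / 1000.0

    # Example: 0.05 m^3/s through a point with 25 m of excess head, 60% machine efficiency,
    # operating 1500 h/year during the irrigation season (all values hypothetical).
    print(f"{recoverable_energy_kwh(0.05, 25.0, 0.6, 1500):.0f} kWh/year")
    ```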

  10. Fast quantification of proton magnetic resonance spectroscopic imaging with artificial neural networks

    Science.gov (United States)

    Bhat, Himanshu; Sajja, Balasrinivasa Rao; Narayana, Ponnada A.

    2006-11-01

    Accurate quantification of the MRSI-observed regional distribution of metabolites involves relatively long processing times. This is particularly true in dealing with large amount of data that is typically acquired in multi-center clinical studies. To significantly shorten the processing time, an artificial neural network (ANN)-based approach was explored for quantifying the phase corrected (as opposed to magnitude) spectra. Specifically, in these studies radial basis function neural network (RBFNN) was used. This method was tested on simulated and normal human brain data acquired at 3T. The N-acetyl aspartate (NAA)/creatine (Cr), choline (Cho)/Cr, glutamate + glutamine (Glx)/Cr, and myo-inositol (mI)/Cr ratios in normal subjects were compared with the line fitting (LF) technique and jMRUI-AMARES analysis, and published values. The average NAA/Cr, Cho/Cr, Glx/Cr and mI/Cr ratios in normal controls were found to be 1.58 ± 0.13, 0.9 ± 0.08, 0.7 ± 0.17 and 0.42 ± 0.07, respectively. The corresponding ratios using the LF and jMRUI-AMARES methods were 1.6 ± 0.11, 0.95 ± 0.08, 0.78 ± 0.18, 0.49 ± 0.1 and 1.61 ± 0.15, 0.78 ± 0.07, 0.61 ± 0.18, 0.42 ± 0.13, respectively. These results agree with those published in literature. Bland-Altman analysis indicated an excellent agreement and minimal bias between the results obtained with RBFNN and other methods. The computational time for the current method was 15 s compared to approximately 10 min for the LF-based analysis.
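
    A simplified stand-in for the RBFNN idea (Gaussian basis functions plus a linear output layer fitted by least squares), using synthetic spectra; the paper's network, training data and metabolite-ratio targets differ.

    ```python
    # Minimal radial basis function network fitted by linear least squares (synthetic data).
    import numpy as np

    def rbf_features(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    n_points, n_centers = 200, 20
    X = rng.random((n_points, 64))                       # 64-point phase-corrected spectra (synthetic)
    y = X[:, 10] / (X[:, 40] + 0.1)                      # stand-in "metabolite ratio" target

    centers = X[rng.choice(n_points, n_centers, replace=False)]   # basis centers picked from the data
    Phi = rbf_features(X, centers, width=2.0)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)          # output-layer weights

    pred = Phi @ w
    print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
    ```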

  11. Reducing Wind Tunnel Data Requirements Using Neural Networks

    Science.gov (United States)

    Ross, James C.; Jorgenson, Charles C.; Norgaard, Magnus

    1997-01-01

    The use of neural networks to minimize the amount of data required to completely define the aerodynamic performance of a wind tunnel model is examined. The accuracy requirements for commercial wind tunnel test data are very severe and are difficult to reproduce using neural networks. For the current work, multiple input, single output networks were trained using a Levenberg-Marquardt algorithm for each of the aerodynamic coefficients. When applied to the aerodynamics of a 55% scale model of a U.S. Air Force/NASA generic fighter configuration, this scheme provided accurate models of the lift, drag, and pitching-moment coefficients. Using only 50% of the data acquired during the wind tunnel test, the trained neural network had a predictive accuracy equal to or better than the accuracy of the experimental measurements.

  12. Markets in real electric networks require reactive prices

    International Nuclear Information System (INIS)

    Hogan, W.W.

    1996-01-01

    Extending earlier seminal work, the author finds that locational spot price differences in an electric network provide the natural measure of the appropriate internodal transport charge. However, the problem of loop flow requires different economic intuition for interpreting the implications of spot pricing. The Direct Current model, which is the usual approximation for estimating spot prices, ignores reactive power effects; this approximation is best when thermal constraints create network congestion. However, when voltage constraints are problematic, the DC Load model is insufficient; a full AC Model is required to determine both real and reactive spot prices. 16 figs., 3 tabs., 22 refs
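
    A worked example of the pricing rule stated above: the internodal transport charge is the difference between locational spot prices. The prices and quantity below are illustrative only, not from the paper.

    ```python
    # Transport charge between two nodes as the locational spot price difference.
    price = {"A": 28.00, "B": 35.50}          # locational spot prices [$/MWh], illustrative

    def transport_charge(from_node, to_node):
        return price[to_node] - price[from_node]

    # Moving 100 MWh from the cheap node A to the congested node B:
    mwh = 100
    charge = transport_charge("A", "B")
    print(f"transport charge A->B: {charge:.2f} $/MWh, total {mwh * charge:.2f} $")
    ```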

  13. Quantification of whey in fluid milk using confocal Raman microscopy and artificial neural network.

    Science.gov (United States)

    Alves da Rocha, Roney; Paiva, Igor Moura; Anjos, Virgílio; Furtado, Marco Antônio Moreira; Bell, Maria José Valenzuela

    2015-06-01

    In this work, we assessed the use of confocal Raman microscopy and artificial neural network as a practical method to assess and quantify adulteration of fluid milk by addition of whey. Milk samples with added whey (from 0 to 100%) were prepared, simulating different levels of fraudulent adulteration. All analyses were carried out by direct inspection at the light microscope after depositing drops from each sample on a microscope slide and drying them at room temperature. No pre- or posttreatment (e.g., sample preparation or spectral correction) was required in the analyses. Quantitative determination of adulteration was performed through a feed-forward artificial neural network (ANN). Different ANN configurations were evaluated based on their coefficient of determination (R2) and root mean square error values, which were criteria for selecting the best predictor model. In the selected model, we observed that data from both training and validation subsets presented R2>99.99%, indicating that the combination of confocal Raman microscopy and ANN is a rapid, simple, and efficient method to quantify milk adulteration by whey. Because sample preparation and postprocessing of spectra were not required, the method has potential applications in health surveillance and food quality monitoring. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  14. Nuclear Physics Science Network Requirements Workshop, May 2008 - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Ed., Brian L; Dart, Ed., Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-11-10

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In May 2008, ESnet and the Nuclear Physics (NP) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the NP Program Office. Most of the key DOE sites for NP related work will require significant increases in network bandwidth in the 5 year time frame. This includes roughly 40 Gbps for BNL, and 20 Gbps for NERSC. Total transatlantic requirements are on the order of 40 Gbps, and transpacific requirements are on the order of 30 Gbps. Other key sites are Vanderbilt University and MIT, which will need on the order of 20 Gbps bandwidth to support data transfers for the CMS Heavy Ion program. In addition to bandwidth requirements, the workshop emphasized several points in regard to science process and collaboration. One key point is the heavy reliance on Grid tools and infrastructure (both PKI and tools such as GridFTP) by the NP community. The reliance on Grid software is expected to increase in the future. Therefore, continued development and support of Grid software is very important to the NP science community. Another key finding is that scientific productivity is greatly enhanced by easy researcher-local access to instrument data. This is driving the creation of distributed repositories for instrument data at collaborating institutions, along with a corresponding increase in demand for network-based data transfers and the tools

  15. Network-Based Material Requirements Planning (NBMRP) in ...

    African Journals Online (AJOL)

    Network-Based Material Requirements Planning (NBMRP) in Product Development Project. ... International Journal of Development and Management Review ... To address the problems, this study evaluated the existing material planning practice, and formulated a NBMRP model out of the variables of the existing MRP and ...

  16. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Onofrejova, Lucia; Farkova, Marta [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Preisler, Jan, E-mail: preisler@chemi.muni.cz [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic)

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as {sup 18}O, {sup 15}N or {sup 13}C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymeration 2-7. A tetraoligosaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches used for data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard and some simple numerical calculations. On the other hand, ANN was found to compensate for a non-linear MALDI response and variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out. A common MALDI matrix and sample preparation were used and only the basic parameters, such as sampling and laser energy, were optimised prior to quantification.
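
    The conventional internal-standard calculation that the abstract contrasts with the ANN can be illustrated roughly as follows; the peak intensities, oligomer range and spike amount are hypothetical, and the single-point calibration is a simplification of the published procedure.

```python
# Hedged sketch of the conventional internal-standard calculation contrasted
# with the ANN in the abstract. Peak intensities and amounts are hypothetical.
import numpy as np

# MALDI peak intensities for raftilose oligomers DP2..DP7 before spiking
sample = np.array([120., 310., 240., 180., 90., 40.])
# Intensities after adding a known amount of nystose (identical to the DP4 peak)
spiked = np.array([118., 305., 410., 175., 88., 39.])
nystose_added_pg = 250.0

# Assuming the oligomer dispersion profile is unchanged, the DP4 intensity gain
# per picogram of added standard calibrates the response of every oligomer.
response_per_pg = (spiked[2] - sample[2]) / nystose_added_pg
amounts_pg = sample / response_per_pg
print("estimated oligomer amounts (pg):", amounts_pg.round(1))
print("total raftilose (pg):", amounts_pg.sum().round(1))
```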

  17. Implementing size-optimal discrete neural networks require analog circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-12-01

    This paper starts by overviewing results dealing with the approximation capabilities of neural networks, as well as bounds on the size of threshold gate circuits. Based on a constructive solution for Kolmogorov's superpositions, the authors show that implementing Boolean functions can be done using neurons having an identity transfer function. Because in this case the size of the network is minimized, it follows that size-optimal solutions for implementing Boolean functions can be obtained using analog circuitry. Conclusions and several comments on the required precision close the paper.

  18. FES Science Network Requirements - Report of the Fusion Energy Sciences Network Requirements Workshop Conducted March 13 and 14, 2008

    International Nuclear Information System (INIS)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    2008-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea. In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote

  19. FES Science Network Requirements - Report of the Fusion Energy Sciences Network Requirements Workshop Conducted March 13 and 14, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    2008-07-10

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea. In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote

  20. Real-time PCR assays for hepatitis B virus DNA quantification may require two different targets.

    Science.gov (United States)

    Liu, Chao; Chang, Le; Jia, Tingting; Guo, Fei; Zhang, Lu; Ji, Huimin; Zhao, Junpeng; Wang, Lunan

    2017-05-12

    Quantification of hepatitis B virus (HBV) DNA plays a critical role in the management of chronic HBV infections. However, HBV is a DNA virus with high levels of genetic variation, and drug-resistant mutations have emerged with the use of antiviral drugs. If a mutation causes a sequence mismatch in the primer or probe of a commercial DNA quantification kit, this will lead to an underestimation of the viral load of the sample. The aim of this study was to determine whether commercial kits, which use only one pair of primers and a single probe, accurately quantify HBV DNA levels, and to develop an improved duplex real-time PCR assay. We developed a new duplex real-time PCR assay that used two pairs of primers and two probes based on the conserved S and C regions of the HBV genome. We performed quantitative HBV DNA detection on HBV samples and compared the results of our duplex real-time PCR assay with the COBAS TaqMan HBV Test version 2 and Daan real-time PCR assays. The target region of each discordant sample was amplified, sequenced, and validated using plasmids. The results of the duplex real-time PCR were in good accordance with the commercial COBAS TaqMan HBV Test version 2 and Daan real-time PCR assays. We showed that two samples from Chinese HBV infections had underestimated viral loads when quantified by the Roche kit because of a mismatch between the viral sequence and the reverse primer of the Roche kit. The HBV DNA levels of six samples were undervalued by the duplex real-time PCR assay of the C region because of mutations in the primer for the C region. We developed a new duplex real-time PCR assay, and its results were similar to those of the commercial kits. The HBV DNA level can be undervalued when using the COBAS TaqMan HBV Test version 2 for Chinese HBV infections owing to a mismatch with the primer/probe. A duplex real-time PCR assay based on the S and C regions could solve this problem to some extent.
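
    One possible way to express the two-target idea in code is sketched below; the standard-curve constants, discordance rule and reporting choice are assumptions made for illustration, not the published assay's algorithm.

```python
# Hedged sketch (not the published assay's software): comparing viral loads
# derived from two targets (S and C regions) and flagging discordance that
# may indicate a primer/probe mismatch. Standard-curve constants are invented.

def titer_from_ct(ct, slope=-3.32, intercept=40.0):
    """Convert a Ct value to IU/mL via a hypothetical standard curve."""
    return 10 ** ((ct - intercept) / slope)

def duplex_result(ct_s, ct_c, fold_threshold=10.0):
    load_s = titer_from_ct(ct_s)
    load_c = titer_from_ct(ct_c)
    ratio = max(load_s, load_c) / min(load_s, load_c)
    flag = ratio > fold_threshold          # large gap suggests one target is undervalued
    return max(load_s, load_c), flag       # one possible rule: report the higher estimate

load, mismatch_suspected = duplex_result(ct_s=24.1, ct_c=28.0)
print(f"reported load: {load:.2e} IU/mL, mismatch suspected: {mismatch_suspected}")
```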

  1. IX : An OS for datacenter applications with aggressive networking requirements

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The conventional wisdom is that aggressive networking requirements, such as high packet rates for small messages and microsecond-scale tail latency, are best addressed outside the kernel, in a user-level networking stack. We present IX, a dataplane operating system designed to support low-latency, high-throughput and high-connection-count applications. Like classic operating systems such as Linux, IX provides strong protection guarantees to the networking stack. However, and unlike classic operating systems, IX is designed from the ground up to support applications with aggressive networking requirements on dense multi-core platforms with 10GbE and 40GbE Ethernet NICs. IX outperforms Linux by an order of magnitude on micro-benchmarks, and by up to 3.6x when running an unmodified memcached, a popular key-value store. The presentation is based on joint work with Adam Belay, George Prekas, Ana Klimovic, Sam Grossman and Christos Kozyrakis, published at OSDI 2014; Best P...

  2. High Energy Physics and Nuclear Physics Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  3. BER Science Network Requirements Workshop -- July 26-27,2007

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L.; Dart, Eli

    2008-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In July 2007, ESnet and the Biological and Environmental Research (BER) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the BER Program Office. These included several large programs and facilities, including Atmospheric Radiation Measurement (ARM) Program and the ARM Climate Research Facility (ACRF), Bioinformatics and Life Sciences Programs, Climate Sciences Programs, the Environmental Molecular Sciences Laboratory at PNNL, the Joint Genome Institute (JGI). National Center for Atmospheric Research (NCAR) also participated in the workshop and contributed a section to this report due to the fact that a large distributed data repository for climate data will be established at NERSC, ORNL and NCAR, and this will have an effect on ESnet. Workshop participants were asked to codify their requirements in a 'case study' format, which summarizes the instruments and facilities necessary for the science and the process by which the science is done, with emphasis on the network services needed and the way in which the network is used. Participants were asked to consider three time scales in their case studies--the near term (immediately and up to 12 months in the future), the medium term (3-5 years in the future), and the long term (greater than 5 years in the future). In addition to achieving its goal of collecting and

  4. Towards tributyltin quantification in natural water at the Environmental Quality Standard level required by the Water Framework Directive.

    Science.gov (United States)

    Alasonati, Enrica; Fettig, Ina; Richter, Janine; Philipp, Rosemarie; Milačič, Radmila; Sčančar, Janez; Zuliani, Tea; Tunç, Murat; Bilsel, Mine; Gören, Ahmet Ceyhan; Fisicaro, Paola

    2016-11-01

    The European Union (EU) has included tributyltin (TBT) and its compounds in the list of priority water pollutants. Quality standards demanded by the EU Water Framework Directive (WFD) require determination of TBT at such a low concentration level that chemical analysis is still difficult, and further research is needed to improve the sensitivity, accuracy and precision of existing methodologies. Within the frame of the joint research project "Traceable measurements for monitoring critical pollutants under the European Water Framework Directive" in the European Metrology Research Programme (EMRP), four metrological and designated institutes have developed a primary method to quantify TBT in natural water using liquid-liquid extraction (LLE) and species-specific isotope dilution mass spectrometry (SSIDMS). The procedure has been validated at the Environmental Quality Standard (EQS) level (0.2 ng L(-1) as cation) and at the WFD-required limit of quantification (LOQ) (0.06 ng L(-1) as cation). The LOQ of the methodology was 0.06 ng L(-1), and the average measurement uncertainty at the LOQ was 36%, which agreed with WFD requirements. The analytical difficulties of the method, namely the presence of TBT in blanks and the sources of measurement uncertainties, as well as the interlaboratory comparison results, are discussed in detail. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Quantification of smoothing requirement for 3D optic flow calculation of volumetric images

    DEFF Research Database (Denmark)

    Bab-Hadiashar, Alireza; Tennakoon, Ruwan B.; de Bruijne, Marleen

    2013-01-01

    Complexities of dynamic volumetric imaging challenge the available computer vision techniques on a number of different fronts. This paper examines the relationship between the estimation accuracy and required amount of smoothness for a general solution from a robust statistics perspective. We show that a (surprisingly) small amount of local smoothing is required to satisfy both the necessary and sufficient conditions for accurate optic flow estimation. This notion is called 'just enough' smoothing, and its proper implementation has a profound effect on the preservation of local information in processing 3D dynamic scans. To demonstrate the effect of 'just enough' smoothing, a robust 3D optic flow method with quantized local smoothing is presented, and the effect of local smoothing on the accuracy of motion estimation in dynamic lung CT images is examined using both synthetic and real image sequences.

  6. A study of the minimum number of slices required for quantification of pulmonary emphysema by computed tomography

    International Nuclear Information System (INIS)

    Hitsuda, Yutaka; Igishi, Tadashi; Kawasaki, Yuji

    2000-01-01

    We attempted to determine the minimum number of slices required for quantification of overall emphysema by computed tomography (CT). Forty-nine patients underwent CT scanning with a 15-mm slice interval, and 13 to 18 slices per patient were obtained. The percentage of low attenuation area (LAA%) per slice was measured with a method that we reported on previously, utilizing a CT program and NIH image. The average LAA% values for 1, 2, 3, and 6 slices evenly spaced through the lungs [LAA% (1), LAA% (2), LAA% (3), and LAA% (6)] were compared with those for all slices [LAA% (All)]. The correlation coefficients for LAA% (1), LAA% (2), LAA% (3), and LAA% (6) with LAA% (All) were 0.961, 0.981, 0.993, and 0.997, respectively. Mean differences ±SD were -3.20±4.21%, -2.32±3.00, -0.20±1.84, and -0.16±1.26, respectively. From these results, we concluded that overall emphysema can be quantified by using at least three slices: one each of the upper, middle, and lower lung. (author)
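
    A hedged sketch of the comparison performed in the study: the mean LAA% over k evenly spaced slices is correlated against the mean over all slices. The per-slice values below are synthetic, so the printed coefficients will not reproduce the reported ones.

```python
# Hedged sketch of the slice-subsampling comparison: correlate the mean LAA%
# from k evenly spaced slices with the mean over all slices. LAA% values here
# are synthetic stand-ins for the measured per-slice data.
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_slices = 49, 16
severity = rng.uniform(5, 60, size=n_patients)                        # patient-level emphysema
laa = severity[:, None] + rng.normal(scale=8.0, size=(n_patients, n_slices))
laa = np.clip(laa, 0, 100)                                            # per-slice LAA%

laa_all = laa.mean(axis=1)
for k in (1, 2, 3, 6):
    idx = np.linspace(0, n_slices - 1, k).round().astype(int)         # evenly spaced slices
    laa_k = laa[:, idx].mean(axis=1)
    r = np.corrcoef(laa_k, laa_all)[0, 1]
    diff = laa_k - laa_all
    print(f"{k} slice(s): r={r:.3f}, mean diff={diff.mean():+.2f}±{diff.std():.2f}")
```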

  7. ICN_Atlas: Automated description and quantification of functional MRI activation patterns in the framework of intrinsic connectivity networks.

    Science.gov (United States)

    Kozák, Lajos R; van Graan, Louis André; Chaudhary, Umair J; Szabó, Ádám György; Lemieux, Louis

    2017-12-01

    Generally, the interpretation of functional MRI (fMRI) activation maps continues to rely on assessing their relationship to anatomical structures, mostly in a qualitative and often subjective way. Recently, the existence of persistent and stable brain networks of functional nature has been revealed; in particular, these so-called intrinsic connectivity networks (ICNs) appear to link patterns of resting-state and task-related-state connectivity. These networks provide an opportunity for functionally derived description and interpretation of fMRI maps, which may be especially important in cases where the maps are predominantly task-unrelated, such as studies of spontaneous brain activity, e.g. seizure-related fMRI maps in epilepsy patients or sleep states. Here we present a new toolbox (ICN_Atlas) aimed at facilitating the interpretation of fMRI data in the context of ICNs. More specifically, the new methodology was designed to describe fMRI maps in a function-oriented, objective and quantitative way using a set of 15 metrics conceived to quantify the degree of 'engagement' of ICNs for any given fMRI-derived statistical map of interest. We demonstrate that the proposed framework provides a highly reliable quantification of fMRI activation maps using a publicly available longitudinal (test-retest) resting-state fMRI dataset. The utility of the ICN_Atlas is also illustrated on a parametric task-modulation fMRI dataset, and on a dataset from a patient who had repeated seizures during resting-state fMRI, confirmed on simultaneously recorded EEG. The proposed ICN_Atlas toolbox is freely available for download at http://icnatlas.com and at http://www.nitrc.org for researchers to use in their fMRI investigations. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Validation and quantification of uncertainty in coupled climate models using network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bracco, Annalisa [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-08-10

    We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation, and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify "areas", i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e. the ability to separate topologically distinct fields while correctly identifying similarities) has been extensively tested. It has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area-identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis data sets
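
    A highly simplified, hedged sketch of the two-layer construction described above; k-means clustering and plain correlation are stand-ins for the project's own area-identification and link-strength procedures.

```python
# Hedged, highly simplified sketch of a two-layer climate-network construction:
# grid cells are first grouped into homogeneous "areas", then areas are linked
# by the strength of the correlation between their mean time series.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_cells, n_time, n_areas = 400, 240, 8
signals = rng.normal(size=(n_areas, n_time))                 # latent regional modes
membership = rng.integers(0, n_areas, size=n_cells)
field = signals[membership] + 0.5 * rng.normal(size=(n_cells, n_time))

# Layer 1: identify "areas" of homogeneous variability (stand-in clustering)
areas = KMeans(n_clusters=n_areas, n_init=10, random_state=0).fit_predict(field)

# Layer 2: link areas by the absolute correlation of their area-mean series
area_means = np.vstack([field[areas == a].mean(axis=0) for a in range(n_areas)])
links = np.abs(np.corrcoef(area_means))
np.fill_diagonal(links, 0.0)
print("strongest inter-area link:", links.max().round(2))
```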

  9. Data-driven quantification of the robustness and sensitivity of cell signaling networks

    International Nuclear Information System (INIS)

    Mukherjee, Sayak; Seok, Sang-Cheol; Vieland, Veronica J; Das, Jayajit

    2013-01-01

    Robustness and sensitivity of responses generated by cell signaling networks have been associated with the survival and evolvability of organisms. However, existing methods for analyzing the robustness and sensitivity of signaling networks ignore the experimentally observed cell-to-cell variations of protein abundances and cell functions, or contain ad hoc assumptions. We propose and apply a data-driven maximum-entropy-based method to quantify the robustness and sensitivity of the Escherichia coli (E. coli) chemotaxis signaling network. Our analysis correctly rank-orders different models of E. coli chemotaxis based on their robustness and suggests that parameters regulating cell signaling are evolutionarily selected to vary in individual cells according to their abilities to perturb cell functions. Furthermore, predictions from our approach regarding the distribution of protein abundances and the properties of chemotactic responses in individual cells, based on cell-population-averaged data, are in excellent agreement with their experimental counterparts. Our approach is general and can be used to evaluate robustness as well as generate predictions of single-cell properties based on population-averaged experimental data in a wide range of cell signaling systems. (paper)

  10. Energy efficiency in future wireless networks: cognitive radio standardization requirements

    CSIR Research Space (South Africa)

    Masonta, M

    2012-09-01

    Full Text Available Energy consumption of mobile and wireless networks and devices is significant, indirectly increasing greenhouse gas emissions and energy costs for operators. Cognitive radio (CR) solutions can save energy for such networks and devices; moreover...

  11. Quantification of motor network dynamics in Parkinson's disease by means of landscape and flux theory.

    Directory of Open Access Journals (Sweden)

    Han Yan

    Full Text Available The basal ganglia neural circuit plays an important role in motor control. Despite significant efforts, understanding the principles and underlying mechanisms of this modulatory circuit and the emergence of abnormal synchronized oscillations in movement disorders remains challenging. Dopamine loss has been shown to be responsible for Parkinson's disease. We quantitatively described the dynamics of the basal ganglia-thalamo-cortical circuit in Parkinson's disease in terms of the emergence of both abnormal firing rates and firing patterns in the circuit. We developed a potential landscape and flux framework for exploring the modulatory circuit. The driving force of the circuit can be decomposed into a gradient of the potential, which is associated with the steady-state probability distribution, and a curl probability flux term. We uncovered the underlying potential landscape as a Mexican-hat-shaped closed ring valley where abnormal oscillations emerge due to dopamine depletion. We quantified the global stability of the network through the topography of the landscape in terms of the barrier height, which is defined as the potential difference between the maximum potential inside the ring and the minimum potential along the ring. Both a higher barrier and a larger flux originating from detailed balance breaking result in more stable oscillations. Meanwhile, more energy is consumed to support the increasing flux. Global sensitivity analysis on the landscape topography and flux indicates how changes in underlying neural network regulatory wirings and external inputs influence the dynamics of the system. We validated two of the main hypotheses (the direct inhibition hypothesis and the output activation hypothesis) on the therapeutic mechanism of deep brain stimulation (DBS). We found that the GPe appears to be another effective stimulation target for DBS besides the GPi and STN. Our approach provides a general way to quantitatively explore neural networks and may
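
    The decomposition mentioned above can be written in the generic landscape-flux form used in this literature; the notation below is ours and is meant only to make the gradient and curl-flux terms, and the barrier-height definition, explicit.

```latex
% Generic landscape-and-flux decomposition (standard in this framework, not a
% derivation from the paper): the driving force F splits into a gradient of the
% potential U = -ln P_ss and a curl (divergence-free) steady-state flux term.
\[
  \mathbf{F}(\mathbf{x}) \;=\; -\,\mathbf{D}\,\nabla U(\mathbf{x})
  \;+\; \frac{\mathbf{J}_{ss}(\mathbf{x})}{P_{ss}(\mathbf{x})},
  \qquad U(\mathbf{x}) = -\ln P_{ss}(\mathbf{x}),
\]
\[
  \text{barrier height} \;=\; U_{\max}^{\text{inside ring}} \;-\; U_{\min}^{\text{along ring}} .
\]
```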

  12. Wrist sensor-based tremor severity quantification in Parkinson's disease using convolutional neural network.

    Science.gov (United States)

    Kim, Han Byul; Lee, Woong Woo; Kim, Aryun; Lee, Hong Ji; Park, Hye Young; Jeon, Hyo Seon; Kim, Sang Kyong; Jeon, Beomseok; Park, Kwang S

    2018-04-01

    Tremor is a commonly observed symptom in patients with Parkinson's disease (PD), and accurate measurement of tremor severity is essential in prescribing appropriate treatment to relieve its symptoms. We propose a tremor assessment system based on the use of a convolutional neural network (CNN) to differentiate the severity of symptoms as measured in data collected from a wearable device. Tremor signals were recorded from 92 PD patients using a custom-developed device (SNUMAP) equipped with an accelerometer and gyroscope mounted on a wrist module. Neurologists assessed the tremor symptoms on the Unified Parkinson's Disease Rating Scale (UPDRS) from simultaneously recorded video footage. The measured data were transformed into the frequency domain and used to construct a two-dimensional image for training the network, and the CNN model was trained by convolving tremor signal images with kernels. The proposed CNN architecture was compared to previously studied machine learning algorithms and found to outperform them (accuracy = 0.85, linear weighted kappa = 0.85). More precise monitoring of PD tremor symptoms in daily life could be possible using our proposed method. Copyright © 2018 Elsevier Ltd. All rights reserved.
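
    A hedged sketch of the pipeline described above (sensor signal, time-frequency image, CNN); the window length, image size, layer sizes and five-class output are illustrative assumptions, not the published architecture.

```python
# Hedged sketch (not the published model): turning a wrist-sensor signal into a
# time-frequency image and scoring it with a small CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

fs = 100                                                 # sensor sampling rate (Hz)
accel = np.random.default_rng(4).normal(size=fs * 12)    # 12 s of toy accelerometer data
_, _, sxx = spectrogram(accel, fs=fs, nperseg=64, noverlap=32)
img = torch.tensor(np.log1p(sxx[:32, :32]), dtype=torch.float32)[None, None]  # 1x1x32x32

class TremorCNN(nn.Module):
    def __init__(self, n_classes=5):                     # e.g. UPDRS tremor scores 0-4
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TremorCNN()(img)
print("predicted severity class:", logits.argmax(dim=1).item())
```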

  13. Occipital and occipital "plus" epilepsies: A study of involved epileptogenic networks through SEEG quantification.

    Science.gov (United States)

    Marchi, Angela; Bonini, Francesca; Lagarde, Stanislas; McGonigal, Aileen; Gavaret, Martine; Scavarda, Didier; Carron, Romain; Aubert, Sandrine; Villeneuve, Nathalie; Médina Villalon, Samuel; Bénar, Christian; Trebuchon, Agnes; Bartolomei, Fabrice

    2016-09-01

    Compared with temporal or frontal lobe epilepsies, the occipital lobe epilepsies (OLE) remain poorly characterized. In this study, we aimed at classifying the ictal networks involving OLE and investigated clinical features of the OLE network subtypes. We studied 194 seizures from 29 consecutive patients presenting with OLE and investigated by stereoelectroencephalography (SEEG). Epileptogenicity of occipital and extraoccipital regions was quantified according to the 'epileptogenicity index' (EI) method. We found that 79% of patients showed widespread epileptogenic zone organization, involving parietal or temporal regions in addition to the occipital lobe. Two main groups of epileptogenic zone organization within occipital lobe seizures were identified: a pure occipital group and an occipital "plus" group, the latter including two further subgroups, occipitotemporal and occipitoparietal. In 29% of patients, the epileptogenic zone was found to have a bilateral organization. The most epileptogenic structure was the fusiform gyrus (mean EI: 0.53). Surgery was proposed in 18/29 patients, leading to seizure freedom in 55% (Engel Class I). Results suggest that, in patient candidates for surgery, the majority of cases are characterized by complex organization of the EZ, corresponding to the occipital plus group. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Verifying cell loss requirements in high-speed communication networks

    Directory of Open Access Journals (Sweden)

    Kerry W. Fendick

    1998-01-01

    Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss model with geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
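
    The clustering idea can be sketched as follows; the gap threshold, the synthetic loss times and the use of the sample-mean estimator for the geometric batch size are illustrative choices, not the paper's fitted values.

```python
# Hedged sketch of the clustering step: loss timestamps are grouped into
# clusters whenever the gap exceeds a threshold, then the batch (cluster)
# arrival rate and a geometric batch-size parameter are estimated.
import numpy as np

def batch_poisson_fit(loss_times, gap_threshold, horizon):
    """Return (cluster arrival rate, geometric batch-size parameter p, sizes)."""
    loss_times = np.sort(np.asarray(loss_times, dtype=float))
    gaps = np.diff(loss_times)
    # a new cluster starts at the first loss and after every large gap
    starts = np.concatenate(([True], gaps > gap_threshold))
    cluster_ids = np.cumsum(starts)
    sizes = np.bincount(cluster_ids)[1:]              # losses per cluster
    rate = sizes.size / horizon                       # Poisson rate of clusters
    p = 1.0 / sizes.mean()                            # MLE for geometric(p) on {1, 2, ...}
    return rate, p, sizes

rng = np.random.default_rng(5)
times = np.cumsum(rng.exponential(100.0, size=50))    # toy cluster start times
losses = np.concatenate([t + np.arange(rng.integers(1, 6)) * 1e-3 for t in times])
rate, p, sizes = batch_poisson_fit(losses, gap_threshold=1.0, horizon=times[-1])
print(f"clusters per unit time: {rate:.4f}, geometric p: {p:.2f}, mean batch size: {sizes.mean():.2f}")
```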

  15. Technical requirements of a social networking platform for senior citizens.

    Science.gov (United States)

    Demski, Hans; Hildebrand, Claudia; López Bolós, José; Tiedge, Winfried; Wengel, Stefanie; O Broin, Daire; Palmer, Ross

    2012-01-01

    Feeling like an integral part of a social community adds to the quality of life. Elderly people who find it more difficult to actively join activities are often threatened by isolation. Social networking can enable communication, and sharing activities makes it easier to set up and maintain contacts. This paper describes the development of a social networking platform and activities such as gaming and exergaming, all of which aim to facilitate social interaction. It reports on the particular challenges that need to be addressed when creating a social networking platform specially designed to meet the needs of the elderly.

  16. A survey on social networks to determine requirements for Learning Networks for professional development of university staff

    NARCIS (Netherlands)

    Brouns, Francis; Berlanga, Adriana; Fetter, Sibren; Bitter-Rijpkema, Marlies; Van Bruggen, Jan; Sloep, Peter

    2009-01-01

    Brouns, F., Berlanga, A. J., Fetter, S., Bitter-Rijpkema, M. E., Van Bruggen, J. M., & Sloep, P. B. (2011). A survey on social networks to determine requirements for Learning Networks for professional development of university staff. International Journal of Web Based Communities, 7(3), 298-311.

  17. Requirements and Algorithms for Cooperation of Heterogeneous Radio Access Networks

    DEFF Research Database (Denmark)

    Mihovska, Albena D.; Tragos, Elias; Mino, Emilio

    2009-01-01

    systems. The RRM mechanisms are evaluated for the scenario of intra-RAN and inter-RAN user mobility. The RRM framework incorporates, as novel features, improved triggering mechanisms, a network-controlled mobility management scheme with policy enforcement at different levels in the RAN architecture, and a distributed

  18. Requirements for data integration platforms in biomedical research networks: a reference model.

    Science.gov (United States)

    Ganzinger, Matthias; Knaup, Petra

    2015-01-01

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set in relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource-efficient acquisition of their project-specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.

  19. Implementing size-optimal discrete neural networks requires analog circuitry

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.

    1998-03-01

    Neural networks (NNs) have been experimentally shown to be quite effective in many applications. This success has led researchers to undertake a rigorous analysis of the mathematical properties that enable them to perform so well. It has generated two directions of research: (i) to find existence/constructive proofs for what is now known as the universal approximation problem; (ii) to find tight bounds on the size needed by the approximation problem (or some particular cases). The paper will focus on both aspects, for the particular case when the functions to be implemented are Boolean.

  20. Quantification of groundwater infiltration and surface water inflows in urban sewer networks based on a multiple model approach.

    Science.gov (United States)

    Karpf, Christian; Krebs, Peter

    2011-05-01

    The management of sewer systems requires information about the discharge and variability of typical wastewater sources in urban catchments. The infiltration of groundwater and the inflow of surface water (I/I) are especially important for making decisions about the rehabilitation and operation of sewer networks. This paper presents a methodology to identify I/I and estimate its quantity. For each flow fraction in sewer networks, an individual model approach is formulated whose parameters are optimised by the method of least squares. This method was applied to estimate the contributions to the wastewater flow in the sewer system of the City of Dresden (Germany), where data availability is good. Absolute flows of I/I and their temporal variations are estimated. Further information on the characteristics of infiltration is gained by clustering and grouping sewer pipes according to the attributes construction year and groundwater influence, and by relating the resulting classes to infiltration behaviour. Further, it is shown that condition classes based on CCTV data can be used to estimate the infiltration potential of sewer pipes. Copyright © 2011 Elsevier Ltd. All rights reserved.
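
    A minimal, hedged sketch of the multiple-model idea: each flow fraction gets its own component model and the component weights are found by ordinary least squares. The component shapes and numbers below are invented, not the Dresden case-study models.

```python
# Hedged sketch of the multiple-model approach: measured sewer flow is explained
# as a sum of component models (diurnal wastewater pattern, slowly varying
# groundwater infiltration, rain-induced inflow), whose weights are fitted by
# least squares. Component shapes here are illustrative only.
import numpy as np

hours = np.arange(24 * 7)                                   # one week, hourly
diurnal = 1.0 + 0.4 * np.sin(2 * np.pi * (hours % 24) / 24) # wastewater pattern
infiltration = np.ones_like(hours, dtype=float)             # near-constant groundwater
rain = np.zeros_like(hours, dtype=float)
rain[60:66] = [4, 8, 6, 3, 1, 0.5]                          # storm response
rain = np.convolve(rain, [0.5, 0.3, 0.2])[:hours.size]      # simple routing

rng = np.random.default_rng(6)
measured = 120 * diurnal + 45 * infiltration + 30 * rain + rng.normal(0, 5, hours.size)

A = np.column_stack([diurnal, infiltration, rain])
coef, *_ = np.linalg.lstsq(A, measured, rcond=None)
labels = ["wastewater", "groundwater infiltration", "surface-water inflow"]
for name, c, series in zip(labels, coef, A.T):
    share = c * series.sum() / measured.sum()
    print(f"{name:26s} weight={c:7.1f}  share of volume={share:5.1%}")
```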

  1. 77 FR 27381 - Financial Crimes Enforcement Network: Customer Due Diligence Requirements for Financial...

    Science.gov (United States)

    2012-05-10

    ...-AB15 Financial Crimes Enforcement Network: Customer Due Diligence Requirements for Financial... concerning customer due diligence requirements for financial institutions. DATES: Written comments on the... customer due diligence requirements for financial institutions.\\1\\ FinCEN received several comments on the...

  2. Requirements for advanced decision support tools in future distribution network planning

    NARCIS (Netherlands)

    Grond, M.O.W.; Morren, J.; Slootweg, J.G.

    2013-01-01

    This paper describes the need and requirements for advanced decision support tools in future network planning from a distribution network operator perspective. The existing tools will no longer be satisfactory for future application due to present developments in the electricity sector that increase

  3. Towards requirements elicitation in service-oriented business networks using value and goal modelling

    NARCIS (Netherlands)

    Mantovaneli Pessoa, Rodrigo; van Sinderen, Marten J.; Quartel, Dick; Shishkov, Boris; Cordeiro, J.; Ranchordas, A.

    2009-01-01

    Due to the contemporary trends towards increased focus on core competences and outsourcing of non-core activities, enterprises are forming strategic alliances and building business networks. This often requires cross enterprise interoperability and integration of their information systems, leading

  4. Optical terminal requirements for aeronautical multi-hop networks

    Science.gov (United States)

    Karras, Kimon; Marinos, Dimitris; Kouros, Pavlos

    2008-08-01

    High-speed free-space optical data links currently find limited use in military aircraft; however, the technology is slowly starting to diffuse into civilian applications, where it could be used to provide a high-bandwidth connection. However, there are several issues that have to be resolved before the technology is ready for deployment. Important among these are physical-layer issues, which deal with the ability to transmit and receive the optical signal reliably, as well as mechanical issues, which focus on the construction of high-performance, small and lightweight terminals for the optical transceiver. The latter, in conjunction with the cost of such a terminal, creates a significant limitation on the number of such units that any aircraft might carry on board. This paper attempts to evaluate how various such parameters affect the capability of an aircraft to take part in and help form a mesh network. The study was conducted by modeling the aircraft in a custom-built SystemC-based simulator tool and evaluating the connectivity achieved while varying several parameters, such as the pointing and acquisition time of the terminal and the number of terminals on board.

  5. A spatially distributed isotope sampling network in a snow-dominated catchment for the quantification of snow meltwater

    Science.gov (United States)

    Rücker, Andrea; Boss, Stefan; Von Freyberg, Jana; Zappa, Massimiliano; Kirchner, James

    2017-04-01

    In mountainous catchments with seasonal snowpacks, river discharge in downstream valleys is largely sustained by snowmelt in spring and summer. Future climate warming will likely reduce snow volumes and lead to earlier and faster snowmelt in such catchments. This, in turn, may increase the risk of summer low flows and hydrological droughts. Improved runoff predictions are thus required in order to adapt water management to future climatic conditions and to assure the availability of fresh water throughout the year. However, a detailed understanding of the hydrological processes is crucial to obtain robust predictions of river streamflow. This in turn requires fingerprinting source areas of streamflow, tracing water flow pathways, and measuring timescales of catchment storage, using tracers such as stable water isotopes (18O, 2H). For this reason, we have established an isotope sampling network in the Alptal, a snowmelt-dominated catchment (46.4 km2) in central Switzerland, as part of the SREP-Drought project (Snow Resources and the Early Prediction of hydrological DROUGHT in mountainous streams). Precipitation and snow cores are analyzed for their isotopic signature at daily or weekly intervals. Three-week bulk samples of precipitation are also collected on a transect along the Alptal valley bottom, and along an elevational transect perpendicular to the Alptal valley axis. Streamwater samples are taken at the catchment outlet as well as in two small nested sub-catchments. An automatic snow lysimeter system was developed, which also facilitates real-time monitoring of snowmelt events, system status and environmental conditions (air and soil temperature). Three lysimeter systems were installed within the catchment, at one forested site and two open-field sites at different elevations, and have been operational since November 2016. We will present the isotope time series from our regular sampling network, as well as initial results from our snowmelt lysimeter sites. Our

  6. Mean precipitation estimation, rain gauge network evaluation and quantification of the hydrologic balance in the River Quito basin in Choco, state of Colombia

    International Nuclear Information System (INIS)

    Cordoba, Samir; Zea, Jorge A; Murillo, W

    2006-01-01

    In this work, the calculation of the average precipitation in the Quito River basin, state of Choco, Colombia, is presented using diverse techniques, among which are those suggested by Thiessen and those based on isohyet analysis, in order to select the one most appropriate for quantifying the rainwater available to the basin. Also included is an estimation of the error affecting the measurement of average precipitation in the studied zone, by means of the methodology proposed by Gandin (1970) and Kagan (WMO, 1966), which at the same time allows the representativeness of each of the stations that make up the rain gauge network in the area to be evaluated. The study concludes with a calculation of the hydrologic balance for the Quito River basin based on the pilot procedure suggested in the UNESCO publication on the study of the South American hydrologic balance, from which the great contribution of rainfall to a greatly enhanced run-off may be appreciated

  7. Group Centric Networking: Addressing Information Sharing Requirements at the Tactical Edge

    Science.gov (United States)

    2016-04-10

    Group Centric Networking: Addressing Information Sharing Requirements at the Tactical Edge. Bow-Nan Cheng, Greg Kuperman, Patricia Deutsch, Logan...been a large push in the U.S. Department of Defense to move to an all Internet Protocol (IP) infrastructure, particularly on the tactical edge. IP and...lossy links, and scaling to large numbers of users. Unfortunately, these are the exact conditions military tactical edge networks must operate within.

  8. Quantification of Environmental Flow Requirements to Support Ecosystem Services of Oasis Areas: A Case Study in Tarim Basin, Northwest China

    Directory of Open Access Journals (Sweden)

    Jie Xue

    2015-10-01

    Full Text Available Recently, a wide range of quantitative research on the identification of environmental flow requirements (EFRs) has been conducted. However, little focus is given to EFRs to maintain multiple ecosystem services in oasis areas. The present study quantifies the EFRs in oasis areas of Tarim Basin, Xinjiang, Northwest China on the basis of three ecosystem services: (1) maintenance of riverine ecosystem health, (2) assurance of the stability of oasis–desert ecotone and riparian (Tugai) forests, and (3) restoration of oasis–desert ecotone groundwater. The identified consumptive and non-consumptive water requirements are used to quantify and determine the EFRs in Qira oasis by employing the summation and compatibility rules (maximum principle). Results indicate that the annual maximum, medium, and minimum EFRs are 0.752 × 10(8), 0.619 × 10(8), and 0.516 × 10(8) m(3), respectively, which account for 58.75%, 48.36%, and 40.29% of the natural river runoff. The months between April and October are identified as the most important periods to maintain the EFRs. Moreover, the water requirement for groundwater restoration of the oasis–desert ecotone accounts for a large proportion, representing 48.27%, 42.32%, and 37.03% of the total EFRs at maximum, medium, and minimum levels, respectively. Therefore, to allocate the integrated EFRs, focus should be placed on the water demand of the desert vegetation’s groundwater restoration, which is crucial for maintaining desert vegetation to prevent sandstorms and soil erosion. This work provides a reference to quantify the EFRs of oasis areas in arid regions.

  9. Key Technologies in the Context of Future Networks: Operational and Management Requirements

    Directory of Open Access Journals (Sweden)

    Lorena Isabel Barona López

    2016-12-01

    Full Text Available The concept of Future Networks is based on the premise that current infrastructures require enhanced control, service customization, self-organization and self-management capabilities to meet the new needs of a connected society, especially of mobile users. In order to provide a high-performance mobile system, three main fields must be improved: radio, network, and operation and management. In particular, operation and management capabilities are intended to enable business agility and operational sustainability, where the addition of new services does not imply an excessive increase in capital or operational expenditures. In this context, a set of key enabling technologies has emerged to aid in this field. Concepts such as Software Defined Networking (SDN), Network Function Virtualization (NFV) and Self-Organized Networks (SON) are pushing traditional systems towards the next 5G network generation. This paper presents an overview of the current status of these promising technologies and ongoing work to fulfill the operational and management requirements of mobile infrastructures. This work also details the use cases and the challenges, taking into account not only SDN, NFV, cloud computing and SON but also other paradigms.

  10. Monitoring groundwater: optimising networks to take account of cost effectiveness, legal requirements and enforcement realities

    Science.gov (United States)

    Allan, A.; Spray, C.

    2013-12-01

    The quality of monitoring networks and modeling in environmental regulation is increasingly important. This is particularly true with respect to groundwater management, where data may be limited, physical processes poorly understood and timescales very long. The powers of regulators may be fatally undermined by poor or non-existent networks, primarily through mismatches between the legal standards that networks must meet, actual capacity and the evidentiary standards of courts. For example, in the second and third implementation reports on the Water Framework Directive, the European Commission drew attention to gaps in the standards of mandatory monitoring networks, where the standard did not meet the reality. In that context, groundwater monitoring networks should provide a reliable picture of groundwater levels and a 'coherent and comprehensive' overview of chemical status so that anthropogenically influenced long-term upward trends in pollutant levels can be tracked. Confidence in this overview should be such that 'the uncertainty from the monitoring process should not add significantly to the uncertainty of controlling the risk', with densities being sufficient to allow assessment of the impact of abstractions and discharges on levels in groundwater bodies at risk. The fact that the legal requirements for the quality of monitoring networks are set out in very vague terms highlights the many variables that can influence the design of monitoring networks. However, the quality of a monitoring network as part of the armory of environmental regulators is potentially of crucial importance. If, as part of enforcement proceedings, a regulator takes an offender to court and relies on conclusions derived from monitoring networks, a defendant may be entitled to question those conclusions. If the credibility, reliability or relevance of a monitoring network can be undermined, because it is too sparse, for example, this could have dramatic consequences on the ability of a

  11. Nuclear Physics Science Network Requirements Workshop, May 6 and 7, 2008. Final Report

    International Nuclear Information System (INIS)

    Tierney, Ed. Brian L; Dart, Ed. Eli; Carlson, Rich; Dattoria, Vince; Ernest, Michael; Hitchcock, Daniel; Johnston, William; Kowalski, Andy; Lauret, Jerome; Maguire, Charles; Olson, Douglas; Purschke, Martin; Rai, Gulshan; Watson, Chip; Vale, Carla

    2008-01-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In May 2008, ESnet and the Nuclear Physics (NP) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the NP Program Office. Most of the key DOE sites for NP related work will require significant increases in network bandwidth in the 5 year time frame. This includes roughly 40 Gbps for BNL, and 20 Gbps for NERSC. Total transatlantic requirements are on the order of 40 Gbps, and transpacific requirements are on the order of 30 Gbps. Other key sites are Vanderbilt University and MIT, which will need on the order of 20 Gbps bandwidth to support data transfers for the CMS Heavy Ion program. In addition to bandwidth requirements, the workshop emphasized several points in regard to science process and collaboration. One key point is the heavy reliance on Grid tools and infrastructure (both PKI and tools such as GridFTP) by the NP community. The reliance on Grid software is expected to increase in the future. Therefore, continued development and support of Grid software is very important to the NP science community. Another key finding is that scientific productivity is greatly enhanced by easy researcher-local access to instrument data. This is driving the creation of distributed repositories for instrument data at collaborating institutions, along with a corresponding increase in demand for network-based data transfers and the tools

  12. Quantification of the impact of a confounding variable on functional connectivity confirms anti-correlated networks in the resting-state.

    Science.gov (United States)

    Carbonell, F; Bellec, P; Shmuel, A

    2014-02-01

    The effect of regressing out the global average signal (GAS) in resting state fMRI data has become a concern for interpreting functional connectivity analyses. It is not clear whether the reported anti-correlations between the Default Mode and the Dorsal Attention Networks are intrinsic to the brain, or are artificially created by regressing out the GAS. Here we introduce a concept, Impact of the Global Average on Functional Connectivity (IGAFC), for quantifying the sensitivity of seed-based correlation analyses to the regression of the GAS. This voxel-wise IGAFC index is defined as the product of two correlation coefficients: the correlation between the GAS and the fMRI time course of a voxel, times the correlation between the GAS and the seed time course. This definition enables the calculation of a threshold at which the impact of regressing-out the GAS would be large enough to introduce spurious negative correlations. It also yields a post-hoc impact correction procedure via thresholding, which eliminates spurious correlations introduced by regressing out the GAS. In addition, we introduce an Artificial Negative Correlation Index (ANCI), defined as the absolute difference between the IGAFC index and the impact threshold. The ANCI allows a graded confidence scale for ranking voxels according to their likelihood of showing artificial correlations. By applying this method, we observed regions in the Default Mode and Dorsal Attention Networks that were anti-correlated. These findings confirm that the previously reported negative correlations between the Dorsal Attention and Default Mode Networks are intrinsic to the brain and not the result of statistical manipulations. Our proposed quantification of the impact that a confound may have on functional connectivity can be generalized to global effect estimators other than the GAS. It can be readily applied to other confounds, such as systemic physiological or head movement interferences, in order to quantify their
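
    The voxel-wise IGAFC index defined above is straightforward to compute. The following is a minimal numpy sketch under invented data; the function name and array shapes are illustrative and not the authors' implementation.

```python
import numpy as np

def igafc_map(voxel_ts, seed_ts, gas_ts):
    """Voxel-wise Impact of the Global Average on Functional Connectivity (IGAFC).

    voxel_ts : array of shape (n_voxels, n_timepoints), fMRI time courses
    seed_ts  : array of shape (n_timepoints,), seed-region time course
    gas_ts   : array of shape (n_timepoints,), global average signal
    Returns an array of shape (n_voxels,) with the IGAFC index per voxel.
    """
    def corr(a, b):
        a = a - a.mean(axis=-1, keepdims=True)
        b = b - b.mean(axis=-1, keepdims=True)
        return (a * b).sum(axis=-1) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

    r_gas_voxel = corr(voxel_ts, gas_ts)   # corr(GAS, voxel time course), one value per voxel
    r_gas_seed = corr(seed_ts, gas_ts)     # corr(GAS, seed time course), scalar
    return r_gas_voxel * r_gas_seed        # IGAFC = product of the two correlations

# Example with synthetic data (shapes only, not real fMRI):
rng = np.random.default_rng(0)
voxels = rng.standard_normal((1000, 200))
seed = rng.standard_normal(200)
gas = voxels.mean(axis=0)                  # crude global average signal (mean over voxels per timepoint)
print(igafc_map(voxels, seed, gas)[:5])
```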

  13. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In the PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code, because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method for using KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs

  14. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, accident sequence quantification. In the PSA, accident sequence quantification calculates the core damage frequency and performs importance and uncertainty analyses. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code, because the computation takes a long time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures for performing accident sequence quantification, the method for using KIRAP's cut set generator, and the method for performing accident sequence quantification with KIRAP. (author). 6 refs.
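
    The core damage frequency in accident sequence quantification is conventionally approximated from the minimal cut sets produced by a cut set generator, most simply with the rare-event approximation (sum of cut-set frequencies) or the min-cut upper bound. The sketch below illustrates that arithmetic in Python; it is not KIRAP code, and the cut sets and basic-event values are invented.

```python
# Illustrative accident-sequence quantification from minimal cut sets.
# Not KIRAP code; cut sets and basic-event values are hypothetical.

basic_events = {          # basic event -> probability (or frequency for the initiator)
    "IE_LOCA": 1.0e-3,    # initiating event frequency per year
    "PUMP_A_FAILS": 2.0e-2,
    "PUMP_B_FAILS": 2.0e-2,
    "DG_FAILS": 5.0e-3,
}

minimal_cut_sets = [       # each cut set is a conjunction of basic events
    ("IE_LOCA", "PUMP_A_FAILS", "PUMP_B_FAILS"),
    ("IE_LOCA", "DG_FAILS"),
]

def cut_set_frequency(cut_set):
    f = 1.0
    for event in cut_set:
        f *= basic_events[event]
    return f

# Rare-event approximation: sum of cut-set frequencies.
cdf_rare = sum(cut_set_frequency(cs) for cs in minimal_cut_sets)

# Min-cut upper bound: 1 - prod(1 - f_i), slightly tighter when cut-set values are larger.
mcub = 1.0
for cs in minimal_cut_sets:
    mcub *= 1.0 - cut_set_frequency(cs)
mcub = 1.0 - mcub

print(f"Core damage frequency (rare-event approx.): {cdf_rare:.3e} /yr")
print(f"Core damage frequency (min-cut upper bound): {mcub:.3e} /yr")
```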

  15. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    Science.gov (United States)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.

  16. Requirements of the integration of renewable energy into network charge regulation. Proposals for the further development of the network charge system. Final report

    International Nuclear Information System (INIS)

    Friedrichsen, Nele; Klobasa, Marian; Marwitz, Simon; Hilpert, Johannes; Sailer, Frank

    2016-01-01

    In this project we analyzed options to advance the network tariff system to support the German energy transition. A power system with high shares of renewables requires more flexibility of supply and demand than the traditional system based on centralized, fossil power plants. Further, the power networks need to be adjusted and expanded. The transformation should aim at system efficiency, i.e. consider both generation and network development. Network tariffs allocate network costs to network users. They should also provide incentives, e.g. to reduce peak load in periods of network congestion. Inappropriate network tariffs can hinder the provision of flexibility and thereby become a barrier to the system integration of renewables. Against this background, this report presents a systematic review of the German network tariff system and a discussion of several options to adapt it in order to support the energy transition. The following aspects are analyzed: an adjustment of the privileges for industrial users, to increase potential network benefits and reduce barriers to more market-oriented behaviour; the payments for avoided network charges to distributed generation, which no longer reflect cost reality in distribution networks; uniform transmission network tariffs, as an option for a more appropriate allocation of the costs associated with the energy transition; increased standing fees in low-voltage networks, as an option to increase the cost contribution of users with self-generation to network financing; and generator tariffs, to allocate a share of network costs to generators and provide incentives for network-oriented location choice and/or feed-in.

  17. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research.

  18. Impact of Distributed Generation Grid Code Requirements on Islanding Detection in LV Networks

    Directory of Open Access Journals (Sweden)

    Fabio Bignucolo

    2017-01-01

    Full Text Available The recent growing diffusion of dispersed generation in low voltage (LV) distribution networks is entailing new rules to make local generators participate in network stability. Consequently, national and international grid codes, which define the connection rules for stability and safety of electrical power systems, have been updated requiring distributed generators and electrical storage systems to supply stabilizing contributions. In this scenario, specific attention to the uncontrolled islanding issue has to be addressed since currently required anti-islanding protection systems, based on relays locally measuring voltage and frequency, could no longer be suitable. In this paper, the effects on the interface protection performance of different LV generators’ stabilizing functions are analysed. The study takes into account existing requirements, such as the generators’ active power regulation (according to the measured frequency) and reactive power regulation (depending on the local measured voltage). In addition, the paper focuses on other stabilizing features under discussion, derived from the medium voltage (MV) distribution network grid codes or proposed in the literature, such as fast voltage support (FVS) and inertia emulation. Stabilizing functions have been reproduced in the DIgSILENT PowerFactory 2016 software environment, making use of its native programming language. Later, they are tested both alone and together, aiming to obtain a comprehensive analysis on their impact on the anti-islanding protection effectiveness. Through dynamic simulations in several network scenarios the paper demonstrates the detrimental impact that such stabilizing regulations may have on loss-of-main protection effectiveness, leading to an increased risk of unintentional islanding.
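
    The loss-of-mains (interface) protection questioned above trips a generator when the locally measured voltage or frequency stays outside a permitted window for longer than a set delay. The sketch below shows that logic and why a locally stabilized island can escape detection; the threshold values are illustrative and not taken from any particular grid code.

```python
# Minimal sketch of a voltage/frequency interface (anti-islanding) relay.
# Threshold values are illustrative only, not from any particular grid code.

V_MIN, V_MAX = 0.85, 1.10      # per-unit voltage window
F_MIN, F_MAX = 47.5, 51.5      # Hz frequency window
TRIP_DELAY = 0.2               # seconds the violation must persist before tripping

def relay_trips(samples, dt):
    """samples: iterable of (voltage_pu, frequency_hz) measured every dt seconds.
    Returns the time at which the relay trips, or None if it never trips."""
    violation_time = 0.0
    for i, (v, f) in enumerate(samples):
        out_of_window = not (V_MIN <= v <= V_MAX) or not (F_MIN <= f <= F_MAX)
        violation_time = violation_time + dt if out_of_window else 0.0
        if violation_time >= TRIP_DELAY:
            return i * dt
    return None

# An islanded feeder whose local generation keeps V and f inside the window
# (e.g., thanks to active/reactive power regulation) never triggers the relay:
steady_island = [(1.0, 50.0)] * 500
print(relay_trips(steady_island, dt=0.01))   # -> None: undetected islanding
```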

  19. The intermediate filament network protein, vimentin, is required for parvoviral infection

    Energy Technology Data Exchange (ETDEWEB)

    Fay, Nikta; Panté, Nelly, E-mail: pante@zoology.ubc.ca

    2013-09-15

    Intermediate filaments (IFs) have recently been shown to serve novel roles during infection by many viruses. Here we have begun to study the role of IFs during the early steps of infection by the parvovirus minute virus of mice (MVM). We found that during early infection with MVM, after endosomal escape, the vimentin IF network was considerably altered, yielding collapsed immunofluorescence staining near the nuclear periphery. Furthermore, we found that vimentin plays an important role in the life cycle of MVM. The number of cells, which successfully replicated MVM, was reduced in infected cells in which the vimentin network was genetically or pharmacologically modified; viral endocytosis, however, remained unaltered. Perinuclear accumulation of MVM-containing vesicles was reduced in cells lacking vimentin. Our data suggests that vimentin is required for the MVM life cycle, presenting possibly a dual role: (1) following MVM escape from endosomes and (2) during endosomal trafficking of MVM. - Highlights: • MVM infection changes the distribution of the vimentin network to perinuclear regions. • Disrupting the vimentin network with acrylamide decreases MVM replication. • MVM replication is significantly reduced in vimentin-null cells. • Distribution of MVM-containing vesicles is affected in MVM infected vimentin-null cells.

  20. The intermediate filament network protein, vimentin, is required for parvoviral infection

    International Nuclear Information System (INIS)

    Fay, Nikta; Panté, Nelly

    2013-01-01

    Intermediate filaments (IFs) have recently been shown to serve novel roles during infection by many viruses. Here we have begun to study the role of IFs during the early steps of infection by the parvovirus minute virus of mice (MVM). We found that during early infection with MVM, after endosomal escape, the vimentin IF network was considerably altered, yielding collapsed immunofluorescence staining near the nuclear periphery. Furthermore, we found that vimentin plays an important role in the life cycle of MVM. The number of cells, which successfully replicated MVM, was reduced in infected cells in which the vimentin network was genetically or pharmacologically modified; viral endocytosis, however, remained unaltered. Perinuclear accumulation of MVM-containing vesicles was reduced in cells lacking vimentin. Our data suggests that vimentin is required for the MVM life cycle, presenting possibly a dual role: (1) following MVM escape from endosomes and (2) during endosomal trafficking of MVM. - Highlights: • MVM infection changes the distribution of the vimentin network to perinuclear regions. • Disrupting the vimentin network with acrylamide decreases MVM replication. • MVM replication is significantly reduced in vimentin-null cells. • Distribution of MVM-containing vesicles is affected in MVM infected vimentin-null cells

  1. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    Science.gov (United States)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  2. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Science.gov (United States)

    Čeko, Marta; Gracely, John L; Fitzcharles, Mary-Ann; Seminowicz, David A; Schweinhardt, Petra; Bushnell, M Catherine

    2015-08-19

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or "negative" [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient. We studied the

  3. Network and system diagrams revisited: Satisfying CEA requirements for causality analysis

    International Nuclear Information System (INIS)

    Perdicoulis, Anastassios; Piper, Jake

    2008-01-01

    Published guidelines for Cumulative Effects Assessment (CEA) have called for the identification of cause-and-effect relationships, or causality, challenging researchers to identify methods that can possibly meet CEA's specific requirements. Together with an outline of these requirements from CEA key literature, the various definitions of cumulative effects point to the direction of a method for causality analysis that is visually-oriented and qualitative. This article consequently revisits network and system diagrams, resolves their reported shortcomings, and extends their capabilities with causal loop diagramming methodology. The application of the resulting composite causality analysis method to three Environmental Impact Assessment (EIA) case studies appears to satisfy the specific requirements of CEA regarding causality. Three 'moments' are envisaged for the use of the proposed method: during the scoping stage, during the assessment process, and during the stakeholder participation process

  4. Direct quantification of lipopeptide biosurfactants in biological samples via HPLC and UPLC-MS requires sample modification with an organic solvent.

    Science.gov (United States)

    Biniarz, Piotr; Łukaszewicz, Marcin

    2017-06-01

    The rapid and accurate quantification of biosurfactants in biological samples is challenging. In contrast to the orcinol method for rhamnolipids, no simple biochemical method is available for the rapid quantification of lipopeptides. Various liquid chromatography (LC) methods are promising tools for relatively fast and exact quantification of lipopeptides. Here, we report strategies for the quantification of the lipopeptides pseudofactin and surfactin in bacterial cultures using different high- (HPLC) and ultra-performance liquid chromatography (UPLC) systems. We tested three strategies for sample pretreatment prior to LC analysis. In direct analysis (DA), bacterial cultures were injected directly and analyzed via LC. As a modification, we diluted the samples with methanol and detected an increase in lipopeptide recovery in the presence of methanol. Therefore, we suggest this simple modification as a tool for increasing the accuracy of LC methods. We also tested freeze-drying followed by solvent extraction (FDSE) as an alternative for the analysis of "heavy" samples. In FDSE, the bacterial cultures were freeze-dried, and the resulting powder was extracted with different solvents. Then, the organic extracts were analyzed via LC. Here, we determined the influence of the extracting solvent on lipopeptide recovery. HPLC methods allowed us to quantify pseudofactin and surfactin with run times of 15 and 20 min per sample, respectively, whereas UPLC quantification was as fast as 4 and 5.5 min per sample, respectively. Our methods provide highly accurate measurements and high recovery levels for lipopeptides. At the same time, UPLC-MS provides the possibility to identify lipopeptides and their structural isoforms.

  5. Improving mine-mill water network design by reducing water and energy requirements

    Energy Technology Data Exchange (ETDEWEB)

    Gunson, A.J.; Klein, B.; Veiga, M. [British Columbia Univ., Vancouver, BC (Canada). Norman B. Keevil Inst. of Mining Engineering

    2010-07-01

    Mining is an energy-intensive industry, and most processing mills use wet processes to separate minerals from ore. This paper discussed water reduction, reuse and recycling options for a mining and mill operation network. A mine water network design was then proposed in order to identify and reduce water and system energy requirements. This included (1) a description of site water balance, (2) a description of potential water sources, (3) a description of water consumers, (4) the construction of energy requirement matrices, and (5) the use of linear programming to reduce energy requirements. The design was used to determine a site water balance as well as to specify major water consumers during mining and mill processes. Potential water supply combinations, water metering technologies, and recycling options were evaluated in order to identify the most efficient energy and water use combinations. The method was used to highlight potential energy savings from the integration of heating and cooling systems with plant water systems. 43 refs., 4 tabs., 3 figs.
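
    Step (5) above, pairing water sources with consumers so that total energy is minimized subject to supply and demand limits, is a transportation-type linear program. The following is a minimal scipy sketch under invented data; the energy matrix, supplies and demands are purely illustrative and not taken from the study.

```python
# Minimal transportation-type LP: assign water sources to consumers so that
# total energy (pumping/treatment/heating) is minimized. All data are invented.
import numpy as np
from scipy.optimize import linprog

energy = np.array([            # kWh per m3 for each (source, consumer) pair
    [0.8, 1.5, 2.0],           # recycled process water
    [1.2, 0.9, 1.7],           # pit dewatering
    [2.5, 2.2, 1.0],           # fresh water
])
supply = np.array([500.0, 300.0, 1000.0])   # m3/day available per source
demand = np.array([400.0, 350.0, 450.0])    # m3/day required per consumer

n_s, n_c = energy.shape
c = energy.ravel()                           # objective: total energy

# Each consumer's demand must be met exactly.
A_eq = np.zeros((n_c, n_s * n_c))
for j in range(n_c):
    A_eq[j, j::n_c] = 1.0

# Each source cannot deliver more than its available supply.
A_ub = np.zeros((n_s, n_s * n_c))
for i in range(n_s):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_s, n_c))               # m3/day routed from each source to each consumer
print("total energy [kWh/day]:", res.fun)
```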

  6. Is a Responsive Default Mode Network Required for Successful Working Memory Task Performance?

    Science.gov (United States)

    Čeko, Marta; Gracely, John L.; Fitzcharles, Mary-Ann; Seminowicz, David A.; Schweinhardt, Petra

    2015-01-01

    In studies of cognitive processing using tasks with externally directed attention, regions showing increased (external-task-positive) and decreased or “negative” [default-mode network (DMN)] fMRI responses during task performance are dynamically responsive to increasing task difficulty. Responsiveness (modulation of fMRI signal by increasing load) has been linked directly to successful cognitive task performance in external-task-positive regions but not in DMN regions. To investigate whether a responsive DMN is required for successful cognitive performance, we compared healthy human subjects (n = 23) with individuals shown to have decreased DMN engagement (chronic pain patients, n = 28). Subjects performed a multilevel working-memory task (N-back) during fMRI. If a responsive DMN is required for successful performance, patients having reduced DMN responsiveness should show worsened performance; if performance is not reduced, their brains should show compensatory activation in external-task-positive regions or elsewhere. All subjects showed decreased accuracy and increased reaction times with increasing task level, with no significant group differences on either measure at any level. Patients had significantly reduced negative fMRI response (deactivation) of DMN regions (posterior cingulate/precuneus, medial prefrontal cortex). Controls showed expected modulation of DMN deactivation with increasing task difficulty. Patients showed significantly reduced modulation of DMN deactivation by task difficulty, despite their successful task performance. We found no evidence of compensatory neural recruitment in external-task-positive regions or elsewhere. Individual responsiveness of the external-task-positive ventrolateral prefrontal cortex, but not of DMN regions, correlated with task accuracy. These findings suggest that a responsive DMN may not be required for successful cognitive performance; a responsive external-task-positive network may be sufficient

  7. Real-Time, Interactive Echocardiography Over High-Speed Networks: Feasibility and Functional Requirements

    Science.gov (United States)

    Bobinsky, Eric A.

    1998-01-01

    Real-time, Interactive Echocardiography Over High Speed Networks: Feasibility and Functional Requirements is an experiment in advanced telemedicine being conducted jointly by the NASA Lewis Research Center, the NASA Ames Research Center, and the Cleveland Clinic Foundation. In this project, a patient undergoes an echocardiographic examination in Cleveland while being diagnosed remotely by a cardiologist in California viewing a real-time display of echocardiographic video images transmitted over the broadband NASA Research and Education Network (NREN). The remote cardiologist interactively guides the sonographer administering the procedure through a two-way voice link between the two sites. Echocardiography is a noninvasive medical technique that applies ultrasound imaging to the heart, providing a "motion picture" of the heart in action. Normally, echocardiographic examinations are performed by a sonographer and cardiologist who are located in the same medical facility as the patient. The goal of telemedicine is to allow medical specialists to examine patients located elsewhere, typically in remote or medically underserved geographic areas. For example, a small, rural clinic might have access to an echocardiograph machine but not a cardiologist. By connecting this clinic to a major metropolitan medical facility through a communications network, a minimally trained technician would be able to carry out the procedure under the supervision and guidance of a qualified cardiologist.

  8. Building a transnational biosurveillance network using semantic web technologies: requirements, design, and preliminary evaluation.

    Science.gov (United States)

    Teodoro, Douglas; Pasche, Emilie; Gobeill, Julien; Emonet, Stéphane; Ruch, Patrick; Lovis, Christian

    2012-05-29

    Antimicrobial resistance has reached globally alarming levels and is becoming a major public health threat. Lack of efficacious antimicrobial resistance surveillance systems was identified as one of the causes of increasing resistance, due to the lag time between new resistances and alerts to care providers. Several initiatives to track drug resistance evolution have been developed. However, no effective real-time and source-independent antimicrobial resistance monitoring system is available publicly. To design and implement an architecture that can provide real-time and source-independent antimicrobial resistance monitoring to support transnational resistance surveillance. In particular, we investigated the use of a Semantic Web-based model to foster integration and interoperability of interinstitutional and cross-border microbiology laboratory databases. Following the agile software development methodology, we derived the main requirements needed for effective antimicrobial resistance monitoring, from which we proposed a decentralized monitoring architecture based on the Semantic Web stack. The architecture uses an ontology-driven approach to promote the integration of a network of sentinel hospitals or laboratories. Local databases are wrapped into semantic data repositories that automatically expose local computing-formalized laboratory information in the Web. A central source mediator, based on local reasoning, coordinates the access to the semantic end points. On the user side, a user-friendly Web interface provides access and graphical visualization to the integrated views. We designed and implemented the online Antimicrobial Resistance Trend Monitoring System (ARTEMIS) in a pilot network of seven European health care institutions sharing 70+ million triples of information about drug resistance and consumption. Evaluation of the computing performance of the mediator demonstrated that, on average, query response time was a few seconds (mean 4.3, SD 0.1 × 10

  9. NASA's Proposed Requirements for the Global Aeronautical Network and a Summary of Responses

    Science.gov (United States)

    Ivancic, William D.

    2005-01-01

    In October 2003, NASA embarked on the ACAST project (Advanced CNS Architectures and System Technologies) to perform research and development on selected communications, navigation, and surveillance (CNS) technologies to enhance the performance of the National Airspace System (NAS). The Networking Research Group of NASA's ACAST project, in order to ensure global interoperability and deployment, formulated their own salient list of requirements. Many of these are not necessarily of concern to the FAA, but are a concern to those who have to deploy, operate, and pay for these systems. These requirements were submitted to the world's industries, governments, and academic institutions for comments. The results of that request for comments are summarized in this paper.

  10. Corporate Data Network (CDN) data requirements task. Enterprise Model. Volume 1

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network

  11. Corporate Data Network (CDN). Data Requirements Task. Preliminary Strategic Data Plan. Volume 4

    International Nuclear Information System (INIS)

    1985-11-01

    The NRC has initiated a multi-year program to centralize its information processing in a Corporate Data Network (CDN). The new information processing environment will include shared databases, telecommunications, office automation tools, and state-of-the-art software. Touche Ross and Company was contracted to perform a general data requirements analysis for shared databases and to develop a preliminary plan for implementation of the CDN concept. The Enterprise Model (Vol. 1) provided the NRC with agency-wide information requirements in the form of data entities and organizational demand patterns as the basis for clustering the entities into logical groups. The Data Dictionary (Vol. 2) provided the NRC with definitions and example attributes and properties for each entity. The Data Model (Vol. 3) defined logical databases and entity relationships within and between databases. The Preliminary Strategic Data Plan (Vol. 4) prioritized the development of databases and included a workplan and approach for implementation of the shared database component of the Corporate Data Network

  12. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although ever since the inception of quantum mechanics a century ago, superposition has occupied a central and pivotal place, rigorous and systematic studies of the quantification issue have attracted significant interests only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182

  13. A survey of system architecture requirements for health care-based wireless sensor networks.

    Science.gov (United States)

    Egbogah, Emeka E; Fapojuwo, Abraham O

    2011-01-01

    Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that required further investigations are highlighted.

  14. A Survey of System Architecture Requirements for Health Care-Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Abraham O. Fapojuwo

    2011-05-01

    Full Text Available Wireless Sensor Networks (WSNs have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera. However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of a HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that required further investigations are highlighted.

  15. Effects of contact network structure on epidemic transmission trees: implications for data required to estimate network structure.

    Science.gov (United States)

    Carnegie, Nicole Bohme

    2018-01-30

    Understanding the dynamics of disease spread is key to developing effective interventions to control or prevent an epidemic. The structure of the network of contacts over which the disease spreads has been shown to have a strong influence on the outcome of the epidemic, but an open question remains as to whether it is possible to estimate contact network features from data collected in an epidemic. The approach taken in this paper is to examine the distributions of epidemic outcomes arising from epidemics on networks with particular structural features to assess whether that structure could be measured from epidemic data and what other constraints might be needed to make the problem identifiable. To this end, we vary the network size, mean degree, and transmissibility of the pathogen, as well as the network feature of interest: clustering, degree assortativity, or attribute-based preferential mixing. We record several standard measures of the size and spread of the epidemic, as well as measures that describe the shape of the transmission tree in order to ascertain whether there are detectable signals in the final data from the outbreak. The results suggest that there is potential to estimate contact network features from transmission trees or pure epidemic data, particularly for diseases with high transmissibility or for which the relevant contact network is of low mean degree.
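
    The style of experiment described, simulating epidemics on networks with controlled mean degree and transmissibility and summarizing the resulting transmission trees, can be sketched as follows, assuming the networkx library; all parameter values are illustrative rather than those used in the paper.

```python
# Discrete-time SIR epidemic on a random network; records who infected whom
# so that transmission-tree summaries (final size, tree depth) can be computed.
# Parameters are illustrative, not those used in the paper.
import random
import networkx as nx

def simulate_epidemic(n=1000, mean_degree=6, transmissibility=0.1, seed=1):
    random.seed(seed)
    g = nx.erdos_renyi_graph(n, mean_degree / (n - 1), seed=seed)
    infector = {0: None}                 # transmission tree: case -> infector
    infectious, recovered = {0}, set()
    while infectious:
        new_cases = set()
        for case in infectious:
            for neighbour in g.neighbors(case):
                if neighbour not in infector and random.random() < transmissibility:
                    infector[neighbour] = case
                    new_cases.add(neighbour)
        recovered |= infectious          # each case is infectious for one time step
        infectious = new_cases

    def depth(case):                     # length of the infector chain back to the index case
        d = 0
        while infector[case] is not None:
            case, d = infector[case], d + 1
        return d

    final_size = len(recovered)          # equals the number of recorded cases
    max_tree_depth = max(depth(c) for c in infector)
    return final_size, max_tree_depth

print(simulate_epidemic())               # e.g. (final outbreak size, transmission-tree depth)
```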

  16. What is 5G? Emerging 5G Mobile Services and Network Requirements

    Directory of Open Access Journals (Sweden)

    Heejung Yu

    2017-10-01

    Full Text Available In this paper, emerging 5G mobile services are investigated and categorized from the perspective not of service providers, but of end-users. The development of 5G mobile services is based on an intensive analysis of the global trends in mobile services. Additionally, several indispensable service requirements, essential for realizing the service scenarios presented, are described. To illustrate the changes in societies and in daily life in the 5G era, five megatrends, including the explosion of mobile data traffic, the rapid increase in connected devices, everything on the cloud, hyper-realistic media for convergence services and knowledge as a service enabled by big-data analysis, are examined. Based on such trends, we classify the new 5G services into five categories in terms of the end-users’ experience as follows: immersive 5G services, intelligent 5G services, omnipresent 5G services, autonomous 5G services and public 5G services. Moreover, several 5G service scenarios in each service category are presented, and essential technical requirements for realizing the aforementioned 5G services are suggested, along with a competitiveness analysis on 5G services/devices/network industries and the current condition of 5G technologies.

  17. A decision-making framework to model environmental flow requirements in oasis areas using Bayesian networks

    Science.gov (United States)

    Xue, Jie; Gui, Dongwei; Zhao, Ying; Lei, Jiaqiang; Zeng, Fanjiang; Feng, Xinlong; Mao, Donglei; Shareef, Muhammad

    2016-09-01

    The competition for water resources between agricultural and natural oasis ecosystems has become an increasingly serious problem in oasis areas worldwide. Recently, the intensive extension of oasis farmland has led to excessive exploitation of water discharge, and consequently has resulted in a lack of water supply in natural oasis. To coordinate the conflicts, this paper provides a decision-making framework for modeling environmental flows in oasis areas using Bayesian networks (BNs). Three components are included in the framework: (1) assessment of agricultural economic loss due to meeting environmental flow requirements; (2) decision-making analysis using BNs; and (3) environmental flow decision-making under different water management scenarios. The decision-making criterion is determined based on intersection point analysis between the probability of large-level total agro-economic loss and the ratio of total to maximum agro-economic output by satisfying environmental flows. An application in the Qira oasis area of the Tarim Basin, Northwest China indicates that BNs can model environmental flow decision-making associated with agricultural economic loss effectively, as a powerful tool to coordinate water-use conflicts. In the case study, the environmental flow requirement is determined as 50.24%, 49.71% and 48.73% of the natural river flow in wet, normal and dry years, respectively. Without further agricultural economic loss, 1.93%, 0.66% and 0.43% of more river discharge can be allocated to eco-environmental water demands under the combined strategy in wet, normal and dry years, respectively. This work provides a valuable reference for environmental flow decision-making in any oasis area worldwide.
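
    The decision criterion described, the intersection between the probability of a large total agro-economic loss and the ratio of total to maximum agro-economic output as the environmental flow allocation varies, can be located numerically once both curves have been tabulated (in the study, from the Bayesian network's outputs). The sketch below uses invented placeholder curves purely to illustrate the intersection-point step.

```python
# Find the intersection of the two decision curves described in the abstract.
# The curves here are invented placeholders; in practice they would come from
# the Bayesian network's posterior probabilities under each flow allocation.
import numpy as np

flow_fraction = np.linspace(0.0, 1.0, 101)   # share of natural river flow kept in the river

# Probability of a "large" total agro-economic loss rises with the allocated flow...
p_large_loss = 1.0 / (1.0 + np.exp(-12 * (flow_fraction - 0.55)))
# ...while the ratio of total to maximum agro-economic output falls with it.
output_ratio = 1.0 - 0.8 * flow_fraction ** 2

# Intersection point: where the difference between the curves changes sign.
diff = p_large_loss - output_ratio
idx = np.argmax(diff >= 0)                    # first index where the curves cross
print(f"environmental flow requirement ~ {flow_fraction[idx]:.2%} of natural flow")
```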

  18. Supplying the power requirements to a sensor network using radio frequency power transfer.

    Science.gov (United States)

    Percy, Steven; Knight, Chris; Cooray, Francis; Smart, Ken

    2012-01-01

    Wireless power transmission is a method of supplying power to small electronic devices when there is no wired connection. One way to increase the range of these systems is to use a directional transmitting antenna; the problem with this approach is that power can only be transmitted through a narrow beam and directly forward, requiring the transmitter to always be aligned with the sensor node position. The work outlined in this article describes the design and testing of an autonomous radio frequency power transfer system that is capable of rotating the base transmitter to track the position of sensor nodes and transferring power to that sensor node. The system's base station monitors the node's energy levels and forms a charge queue to plan charging order and maintain energy levels of the nodes. Results show a radio frequency harvesting circuit with a measured S11 value of -31.5 dB and a conversion efficiency of 39.1%. Simulation and experimentation verified the level of power transfer and efficiency. The results of this work show a small network of three nodes with different storage types powered by a central base node.

  19. Supplying the Power Requirements to a Sensor Network Using Radio Frequency Power Transfer

    Directory of Open Access Journals (Sweden)

    Steven Percy

    2012-06-01

    Full Text Available Wireless power transmission is a method of supplying power to small electronic devices when there is no wired connection. One way to increase the range of these systems is to use a directional transmitting antenna; the problem with this approach is that power can only be transmitted through a narrow beam and directly forward, requiring the transmitter to always be aligned with the sensor node position. The work outlined in this article describes the design and testing of an autonomous radio frequency power transfer system that is capable of rotating the base transmitter to track the position of sensor nodes and transferring power to that sensor node. The system’s base station monitors the node’s energy levels and forms a charge queue to plan charging order and maintain energy levels of the nodes. Results show a radio frequency harvesting circuit with a measured S11 value of −31.5 dB and a conversion efficiency of 39.1%. Simulation and experimentation verified the level of power transfer and efficiency. The results of this work show a small network of three nodes with different storage types powered by a central base node.
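
    The reported S11 of −31.5 dB and 39.1% conversion efficiency translate directly into delivered DC power: S11 gives the fraction of incident RF power reflected at the harvesting circuit, and the conversion efficiency applies to the power that is accepted. A small sketch of that arithmetic follows, with an assumed incident power level that is not from the paper.

```python
# Convert the reported matching (S11) and conversion efficiency into DC output power.
# The incident RF power level is an assumption for illustration.
s11_db = -31.5                 # reported input reflection coefficient, dB (20*log10|S11|)
efficiency = 0.391             # reported RF-to-DC conversion efficiency

p_incident_mw = 10.0           # assumed RF power reaching the node's antenna, mW
reflected_fraction = 10 ** (s11_db / 10)           # |S11|^2 as a power ratio
p_accepted_mw = p_incident_mw * (1 - reflected_fraction)
p_dc_mw = p_accepted_mw * efficiency

print(f"reflected: {reflected_fraction:.2e} of incident power")
print(f"DC output: {p_dc_mw:.3f} mW from {p_incident_mw} mW incident")
```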

  20. Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.

    2009-01-01

    This paper focuses on a generic model for service network design, which includes asset positioning and utilization through constraints on asset availability at terminals. We denote these relations as "design-balance constraints" and focus on the design-balanced capacitated multicommodity network design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation. Results on a wide range of network design problem instances from the literature indicate the proposed method behaves very well in terms of computational efficiency and solution quality.
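
    In the arc-based formulation, the design-balance constraints require that, at every terminal, the number of selected design arcs entering equals the number leaving, so that assets can keep circulating. The sketch below shows that constraint family on a toy instance, assuming the PuLP modelling library; the data and costs are invented and this is not the authors' implementation.

```python
# Design-balance constraints in an arc-based service network design model.
# Data and cost values are illustrative; this is not the paper's implementation.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

nodes = ["A", "B", "C"]
arcs = [("A", "B"), ("B", "C"), ("C", "A"), ("B", "A")]   # candidate services
fixed_cost = {("A", "B"): 10, ("B", "C"): 12, ("C", "A"): 9, ("B", "A"): 11}
demand = {("A", "C"): 5}                                   # commodity: (origin, destination) -> volume
capacity = 8

prob = LpProblem("design_balanced_SND", LpMinimize)
y = {a: LpVariable(f"open_{a[0]}{a[1]}", cat="Binary") for a in arcs}                       # design variables
x = {(k, a): LpVariable(f"flow_{k[0]}{k[1]}_{a[0]}{a[1]}", lowBound=0) for k in demand for a in arcs}

prob += lpSum(fixed_cost[a] * y[a] for a in arcs)          # minimize fixed design cost

for k, vol in demand.items():                              # flow conservation per commodity
    for n in nodes:
        rhs = vol if n == k[0] else (-vol if n == k[1] else 0)
        prob += (lpSum(x[k, a] for a in arcs if a[0] == n)
                 - lpSum(x[k, a] for a in arcs if a[1] == n)) == rhs
for a in arcs:                                             # capacity / linking constraints
    prob += lpSum(x[k, a] for k in demand) <= capacity * y[a]

# Design-balance: selected arcs in = selected arcs out at every node.
for n in nodes:
    prob += lpSum(y[a] for a in arcs if a[1] == n) == lpSum(y[a] for a in arcs if a[0] == n)

prob.solve()
print({a: int(y[a].value()) for a in arcs})                # which services are opened
```

    In this toy instance the commodity only needs A-B and B-C, but the design-balance constraints also force C-A open so that the asset cycle closes, which is exactly the positioning behaviour the constraints are meant to capture.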

  1. Quantification of the impacts of climate change and human agricultural activities on oasis water requirements in an arid region: a case study of the Heihe River basin, China

    Science.gov (United States)

    Liu, Xingran; Shen, Yanjun

    2018-03-01

    Ecological deterioration in arid regions caused by agricultural development has become a global issue. Understanding water requirements of the oasis ecosystems and the influences of human agricultural activities and climate change is important for the sustainable development of oasis ecosystems and water resource management in arid regions. In this study, water requirements of the main oasis in the Heihe River basin during 1986-2013 were analyzed and the amount showed a sharp increase from 10.8 × 10⁸ m³ in 1986 to 19.0 × 10⁸ m³ in 2013. Both human agricultural activities and climate change could lead to the increase in water requirement. To quantify the contributions of agricultural activities and climate change to the increase in water requirements, the partial derivative and slope methods were used. Results showed that climate change and human agricultural activities, such as oasis expansion and changes in land cropping structure, have contributed to the increase in water requirement at rates of 6.9, 58.1, and 25.3 %, respectively. Overall, human agricultural activities were the dominant forces driving the increase in water requirement. In addition, the contribution of oasis expansion to the increased water requirement was significantly greater than that of the other concerned variables. This reveals that controlling the oasis scale is extremely important and effective for balancing water for agriculture and ecosystems and for achieving sustainable oasis development in arid regions.

  2. Grand Challenges: Science, Engineering, and Societal Advances, Requiring Networking and Information Technology Research and Development

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — ...the U.S. Government makes critical decisions about appropriate investments in IT R and D to help society forward both socially and economically. To inform that...

  3. WIRELESS SENSOR NETWORKS – ARCHITECTURE, SECURITY REQUIREMENTS, SECURITY THREATS AND ITS COUNTERMEASURES

    OpenAIRE

    Ranjit Panigrahi; Kalpana Sharma; M.K. Ghose

    2013-01-01

    Wireless Sensor Network (WSN) has a huge range of applications such as battlefield, surveillance, emergency rescue operation and smart home technology etc. Apart from its inherent constraints such as limited memory and energy resources, when deployed in hostile environmental conditions, the sensor nodes are vulnerable to physical capture and other security constraints. These constraints put security as a major challenge for the researchers in the field of computer networking. T...

  4. Actin-myosin network is required for proper assembly of influenza virus particles

    Energy Technology Data Exchange (ETDEWEB)

    Kumakura, Michiko; Kawaguchi, Atsushi, E-mail: ats-kawaguchi@md.tsukuba.ac.jp; Nagata, Kyosuke, E-mail: knagata@md.tsukuba.ac.jp

    2015-02-15

    Actin filaments are known to play a central role in cellular dynamics. After polymerization of actin, various actin-crosslinking proteins including non-muscle myosin II facilitate the formation of spatially organized actin filament networks. The actin-myosin network is highly expanded beneath the plasma membrane. The genome of influenza virus (vRNA) replicates in the cell nucleus. Then, newly synthesized vRNAs are nuclear-exported to the cytoplasm as ribonucleoprotein complexes (vRNPs), followed by transport to beneath the plasma membrane, where virus particles assemble. Here, we found that, by inhibiting actin-myosin network formation, the virus titer tends to be reduced and HA viral spike protein is aggregated on the plasma membrane. These results indicate that the actin-myosin network plays an important role in virus formation. - Highlights: • The actin-myosin network is important for influenza virus production. • HA forms aggregations at the plasma membrane in the presence of blebbistatin. • M1 is recruited to the budding site through the actin-myosin network.

  5. Actin-myosin network is required for proper assembly of influenza virus particles

    International Nuclear Information System (INIS)

    Kumakura, Michiko; Kawaguchi, Atsushi; Nagata, Kyosuke

    2015-01-01

    Actin filaments are known to play a central role in cellular dynamics. After polymerization of actin, various actin-crosslinking proteins including non-muscle myosin II facilitate the formation of spatially organized actin filament networks. The actin-myosin network is highly expanded beneath the plasma membrane. The genome of influenza virus (vRNA) replicates in the cell nucleus. Then, newly synthesized vRNAs are nuclear-exported to the cytoplasm as ribonucleoprotein complexes (vRNPs), followed by transport to beneath the plasma membrane, where virus particles assemble. Here, we found that, by inhibiting actin-myosin network formation, the virus titer tends to be reduced and HA viral spike protein is aggregated on the plasma membrane. These results indicate that the actin-myosin network plays an important role in virus formation. - Highlights: • The actin-myosin network is important for influenza virus production. • HA forms aggregations at the plasma membrane in the presence of blebbistatin. • M1 is recruited to the budding site through the actin-myosin network

  6. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement.

    Science.gov (United States)

    Ferri, Giovane Lopes; Chaves, Gisele de Lorena Diniz; Ribeiro, Glaydston Mattos

    2015-06-01

    This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers in the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW simultaneously involving legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries that have realities similar to those in Brazil of accelerated urbanisation without adequate planning for solid waste management, coupled with the strong presence of waste pickers who, owing to their social vulnerability, must be included in the system. In addition to the theoretical contribution to the reverse logistics network problem, this study aids in decision-making for public managers who have limited technical and administrative capacities for the management of solid wastes.

  7. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    Agricultural Research Service 2011), which aim to improve consistency of field measurement and data collection for soil carbon sequestration and soil nitrous oxide fluxes. Often these national-level activity data and emissions factors are the basis for regional and smaller-scale applications. Such data are used for model-based estimates of changes in GHGs at a project or regional level (Olander et al 2011). To complement national data for regional-, landscape-, or field-level applications, new data are often collected through farmer knowledge or records and field sampling. Ideally such data could be collected in a standardized manner, perhaps through some type of crowd sourcing model to improve regional—and national—level data, as well as to improve consistency of locally collected data. Data can also be collected by companies working with agricultural suppliers and in country networks, within efforts aimed at understanding firm and product (supply-chain) sustainability and risks (FAO 2009). Such data may feed into various certification processes or reporting requirements from buyers. Unfortunately, this data is likely proprietary. A new process is needed to aggregate and share private data in a way that would not be a competitive concern so such data could complement or supplement national data and add value. A number of papers in this focus issue discuss issues surrounding quantification methods and systems at large scales, global and national levels, while others explore landscape- and field-scale approaches. A few explore the intersection of top-down and bottom-up data measurement and modeling approaches. 5. The agricultural greenhouse gas quantification project and ERL focus issue Important land management decisions are often made with poor or few data, especially in developing countries. Current systems for quantifying GHG emissions are inadequate in most low-income countries, due to a lack of funding, human resources, and infrastructure. Most non-Annex 1 countries

  8. Increased signaling entropy in cancer requires the scale-free property of protein interaction networks

    Science.gov (United States)

    Teschendorff, Andrew E.; Banerji, Christopher R. S.; Severini, Simone; Kuehn, Reimer; Sollich, Peter

    2015-01-01

    One of the key characteristics of cancer cells is an increased phenotypic plasticity, driven by underlying genetic and epigenetic perturbations. However, at the systems level it is unclear how these perturbations give rise to the observed increased plasticity. Elucidating such systems-level principles is key for an improved understanding of cancer. Recently, it has been shown that signaling entropy, an overall measure of signaling pathway promiscuity computable by integrating a sample's gene expression profile with a protein interaction network, correlates with phenotypic plasticity and is increased in cancer compared to normal tissue. Here we develop a computational framework for studying the effects of network perturbations on signaling entropy. We demonstrate that the increased signaling entropy of cancer is driven by two factors: (i) the scale-free (or near scale-free) topology of the interaction network, and (ii) a subtle positive correlation between differential gene expression and node connectivity. Indeed, we show that if protein interaction networks were random graphs, described by Poisson degree distributions, cancer would generally not exhibit an increased signaling entropy. In summary, this work exposes a deep connection between cancer, signaling entropy and interaction network topology. PMID:25919796
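
    At its core, the signaling entropy described above is an entropy rate of a random walk over a protein interaction network whose edge weights are modulated by gene expression. The following Python sketch is only an illustration of that idea on a random toy network with made-up expression values; it is not the published algorithm or data.

        # Entropy rate of an expression-weighted random walk on a toy interaction network.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        adj = np.triu((rng.random((n, n)) < 0.1).astype(float), 1)
        adj = adj + adj.T                                   # undirected toy interaction network
        expr = rng.lognormal(size=n)                        # toy gene expression levels

        w = adj * np.outer(expr, expr)                      # expression-weighted edges
        deg = w.sum(axis=1)
        keep = deg > 0                                      # drop isolated nodes
        p = w[np.ix_(keep, keep)] / deg[keep, None]         # random-walk transition matrix
        pi = deg[keep] / deg[keep].sum()                    # stationary distribution (undirected walk)

        logp = np.zeros_like(p)
        np.log(p, out=logp, where=p > 0)
        local_entropy = -(p * logp).sum(axis=1)             # per-node signaling promiscuity
        signaling_entropy = float(pi @ local_entropy)       # network-level entropy rate
        print(signaling_entropy)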

  9. Wireless Sensor Network for Helicopter Rotor Blade Vibration Monitoring: Requirements Definition and Technological Aspects

    NARCIS (Netherlands)

    Sanchez Ramirez, Andrea; Das, Kallol; Loendersloot, Richard; Tinga, Tiedo; Havinga, Paul J.M.; Basu, Biswajit

    The main rotor accounts for the largest vibration source for a helicopter fuselage and its components. However, accurate blade monitoring has been limited due to the practical restrictions on instrumenting rotating blades. The use of Wireless Sensor Networks (WSNs) for real time vibration monitoring

  10. 47 CFR 64.2007 - Approval required for use of customer proprietary network information.

    Science.gov (United States)

    2010-10-01

    ... of marketing communications-related services to that customer. A telecommunications carrier may..., for the purpose of marketing communications-related services to that customer, to its agents and its... proprietary network information. 64.2007 Section 64.2007 Telecommunication FEDERAL COMMUNICATIONS COMMISSION...

  11. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  12. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement

    Energy Technology Data Exchange (ETDEWEB)

    Ferri, Giovane Lopes, E-mail: giovane.ferri@aluno.ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Diniz Chaves, Gisele de Lorena, E-mail: gisele.chaves@ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Ribeiro, Glaydston Mattos, E-mail: glaydston@pet.coppe.ufrj.br [Transportation Engineering Programme, Federal University of Rio de Janeiro – UFRJ, Centro de Tecnologia, Bloco H, Sala 106, Cidade Universitária, Rio de Janeiro, 21949-900 (Brazil)

    2015-06-15

    Highlights: • We propose a reverse logistics network for MSW involving waste pickers. • A generic facility location mathematical model was validated in a Brazilian city. • The results enable to predict the capacity for screening and storage centres (SSC). • We minimise the costs for transporting MSW with screening and storage centres. • The use of SSC can be a potential source of revenue and a better use of MSW. - Abstract: This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers to the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW simultaneously involving legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries that have realities similar to those in Brazil of accelerated urbanisation without adequate planning for solid waste management, added to the strong presence of waste pickers that, through the

  13. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement

    International Nuclear Information System (INIS)

    Ferri, Giovane Lopes; Diniz Chaves, Gisele de Lorena; Ribeiro, Glaydston Mattos

    2015-01-01

    Highlights: • We propose a reverse logistics network for MSW involving waste pickers. • A generic facility location mathematical model was validated in a Brazilian city. • The results enable to predict the capacity for screening and storage centres (SSC). • We minimise the costs for transporting MSW with screening and storage centres. • The use of SSC can be a potential source of revenue and a better use of MSW. - Abstract: This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers to the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW simultaneously involving legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries that have realities similar to those in Brazil of accelerated urbanisation without adequate planning for solid waste management, added to the strong presence of waste pickers that, through the

  14. Experimental Study on OSNR Requirements for Spectrum-Flexible Optical Networks

    DEFF Research Database (Denmark)

    Borkowski, Robert; Karinou, Fotini; Angelou, Marianna

    2012-01-01

    The flexibility and elasticity of the spectrum is an important topic today. As the capacity of deployed fiber-optic systems is becoming scarce, it is vital to shift towards solutions ensuring higher spectral efficiency. Working in this direction, we report an extensive experimental study on adaptive allocation of superchannels in a spectrum-flexible heterogeneous optical network. In total, three superchannels were transmitted. Two 5-subcarrier 14-GHz-spaced, 14 Gbaud, polarization-division-multiplexed (PDM) quadrature-phase-shift-keyed (QPSK) superchannels were separated by a spectral gap ... to maintain a 1×10−3 bit error rate of the central BOI subcarrier. The results provide a rule of thumb that can be exploited in resource allocation mechanisms of future spectrum-flexible optical networks.

  15. The nutritional requirements of infants. Towards EU alignment of reference values: the EURRECA network

    NARCIS (Netherlands)

    Hermoso, M.; Tabacchi, G.; Iglesia-Altaba, I.; Bel-Serrat, S.; Moreno-Aznar, L.A.; Garcia-Santos, Y.; Rosario Garcia-Luzardo, Del M.; Santana-Salguero, B.; Pena-Quintana, L.; Serra-Majem, L.; Hall Moran, V.; Dykes, F.; Decsi, T.; Benetou, V.; Plada, M.; Trichopoulou, A.; Raats, M.M.; Doets, E.L.; Berti, C.; Cetin, I.; Koletzko, B.

    2010-01-01

    This paper presents a review of the current knowledge regarding the macro- and micronutrient requirements of infants and discusses issues related to these requirements during the first year of life. The paper also reviews the current reference values used in European countries and the methodological

  16. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
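
    As a minimal, generic illustration of forward uncertainty propagation (one of the topics listed above; the example below is ours, not the book's), uncertain inputs can simply be sampled and pushed through a model, after which the response distribution is summarized:

        # Minimal Monte Carlo forward propagation of input uncertainty (illustrative only).
        import numpy as np

        def model(k, E, T):
            """Toy response: an Arrhenius-type rate standing in for any simulation output."""
            return k * np.exp(-E / (8.314 * T))

        rng = np.random.default_rng(1)
        n = 100_000
        k = rng.normal(1.0e3, 5.0e1, n)     # uncertain pre-exponential factor
        E = rng.normal(2.0e4, 1.0e3, n)     # uncertain activation energy [J/mol]
        y = model(k, E, T=300.0)

        print("mean response:", y.mean())
        print("95% interval :", np.percentile(y, [2.5, 97.5]))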

  17. User Requirements for Technology to Assist Aging in Place: Qualitative Study of Older People and Their Informal Support Networks.

    Science.gov (United States)

    Elers, Phoebe; Hunter, Inga; Whiddett, Dick; Lockhart, Caroline; Guesgen, Hans; Singh, Amardeep

    2018-06-06

    Informal support is essential for enabling many older people to age in place. However, there is limited research examining the information needs of older adults' informal support networks and how these could be met through home monitoring and information and communication technologies. The purpose of this study was to investigate how technologies that connect older adults to their informal and formal support networks could assist aging in place and enhance older adults' health and well-being. Semistructured interviews were conducted with 10 older adults and a total of 31 members of their self-identified informal support networks. They were asked questions about their information needs and how technology could support the older adults to age in place. The interviews were transcribed and thematically analyzed. The analysis identified three overarching themes: (1) the social enablers theme, which outlined how timing, informal support networks, and safety concerns assist the older adults' uptake of technology, (2) the technology concerns theme, which outlined concerns about cost, usability, information security and privacy, and technology superseding face-to-face contact, and (3) the information desired theme, which outlined what information should be collected and transferred and who should make decisions about this. Older adults and their informal support networks may be receptive to technology that monitors older adults within the home if it enables aging in place for longer. However, cost, privacy, security, and usability barriers would need to be considered and the system should be individualizable to older adults' changing needs. The user requirements identified from this study and described in this paper have informed the development of a technology that is currently being prototyped. ©Phoebe Elers, Inga Hunter, Dick Whiddett, Caroline Lockhart, Hans Guesgen, Amardeep Singh. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 06.06.2018.

  18. Requiring collaboration: Hippocampal-prefrontal networks needed in spatial working memory and ageing. A multivariate analysis approach.

    Science.gov (United States)

    Zancada-Menendez, C; Alvarez-Suarez, P; Sampedro-Piquero, P; Cuesta, M; Begega, A

    2017-04-01

    Ageing is characterized by a decline in the processes of retention and storage of spatial information. We examined the behavioural performance of adult rats (3 months old) and aged rats (18 months old) in a complex spatial task (delayed match-to-sample). The spatial task was performed in the Morris water maze and consisted of three sessions per day over a period of three consecutive days. Each session consisted of two trials (one sample and one retention) with inter-session intervals of 5 min. Behavioural results showed that the spatial task was difficult for the middle-aged group. This poorer performance could be associated with impairments in processing speed and in the retention of spatial information. We examined the changes in neuronal metabolic activity of different brain regions through cytochrome c oxidase histochemistry. We then performed MANOVA and discriminant function analyses to determine the functional profile of the brain networks involved in the spatial learning of the adult and middle-aged groups. This multivariate analysis showed two principal functional networks that necessarily participate in this spatial learning. The first network was composed of the supramammillary nucleus, medial mammillary nucleus, CA3, and CA1. The second one included the anterior cingulate, prelimbic, and infralimbic areas of the prefrontal cortex, the dentate gyrus, and the amygdala complex (basolateral and central subregions). There was a reduction in the hippocampal-supramammillary network in both learning groups, whilst there was an overactivation in the executive network, especially in the aged group. This response could be due to a higher requirement for executive control in a complex spatial memory task in older animals. Copyright © 2017 Elsevier Inc. All rights reserved.
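
    A discriminant function analysis of regional metabolic activity, as used above, can be sketched in a few lines. The data below are synthetic and the region list is only indicative of the networks named in the abstract; the sketch shows the mechanics of the analysis, not the study's actual results.

        # Discriminant-function-style analysis on synthetic regional activity data.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(2)
        regions = ["SuM", "MM", "CA3", "CA1", "ACC", "PL", "IL", "DG", "BLA", "CeA"]
        n_per_group = 12

        adult = rng.normal(1.0, 0.15, size=(n_per_group, len(regions)))
        aged = rng.normal(1.0, 0.15, size=(n_per_group, len(regions)))
        aged[:, 4:7] += 0.3                       # toy "overactivation" of prefrontal regions

        X = np.vstack([adult, aged])
        y = np.array([0] * n_per_group + [1] * n_per_group)

        lda = LinearDiscriminantAnalysis()
        scores = lda.fit(X, y).transform(X)       # canonical discriminant scores
        for name, coef in zip(regions, lda.coef_[0]):
            print(f"{name:>4}: {coef:+.2f}")      # which regions drive the group separation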

  19. Analysis of internal network requirements for the distributed Nordic Tier-1

    DEFF Research Database (Denmark)

    Behrmann, G.; Fischer, L.; Gamst, Mette

    2010-01-01

    The Tier-1 facility operated by the Nordic DataGrid Facility (NDGF) differs significantly from other Tier-1s in several aspects: it is not located at one or a few locations but is instead distributed throughout the Nordic countries, and it is not under the governance of a single organisation but is instead built from resources under the control of a number of different national organisations. Being physically distributed makes the design and implementation of the networking infrastructure a challenge. NDGF has its own internal OPN connecting the sites participating in the distributed Tier-1. To assess...

  20. Requirements of the integration of renewable energy into network charge regulation. Proposals for the further development of the network charge system. Final report; Anforderungen der Integration der erneuerbaren Energien an die Netzentgeltregulierung. Vorschlaege zur Weiterentwicklung des Netzentgeltsystems. Endbericht

    Energy Technology Data Exchange (ETDEWEB)

    Friedrichsen, Nele; Klobasa, Marian; Marwitz, Simon [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Hilpert, Johannes; Sailer, Frank [Stiftung Umweltenergierecht, Wuerzburg (Germany)

    2016-11-15

    In this project we analyzed options to advance the network tariff system to support the German energy transition. A power system with high shares of renewables requires more flexibility of supply and demand than the traditional system based on centralized, fossil power plants. Further, the power networks need to be adjusted and expanded. The transformation should aim at system efficiency, i.e. look at both generation and network development. Network tariffs allocate network costs to network users. They should also provide incentives, e.g. to reduce peak load in periods of network congestion. Inappropriate network tariffs can hinder the provision of flexibility and thereby become a barrier to the system integration of renewables. Against this background, this report presents a systematic review of the German network tariff system and a discussion of several options to adapt it in order to support the energy transition. The following aspects are analyzed: an adjustment of the privileges for industrial users, to increase potential network benefits and reduce barriers to more market-oriented behaviour; the payments for avoided network charges to distributed generation, which no longer reflect cost reality in distribution networks; uniform transmission network tariffs as an option for a more appropriate allocation of the costs associated with the energy transition; increased standing fees in low-voltage networks as an option to increase the contribution of users with self-generation to network financing; and generator tariffs, to allocate a share of network costs to generators and provide incentives for network-oriented siting and/or feed-in.

  1. A Framework for the Management of Evolving Requirements in Software Systems Supporting Network-Centric Warfare

    National Research Council Canada - National Science Library

    Reynolds, Linda K

    2006-01-01

    .... There are many sources of requirements for these software systems supporting NCO, which may increase in number as the Services continue to develop the capabilities necessary for the transformation...

  2. Game Theory Meets Wireless Sensor Networks Security Requirements and Threats Mitigation: A Survey.

    Science.gov (United States)

    Abdalzaher, Mohamed S; Seddik, Karim; Elsabrouty, Maha; Muta, Osamu; Furukawa, Hiroshi; Abdel-Rahman, Adel

    2016-06-29

    We present a study of using game theory for protecting wireless sensor networks (WSNs) from selfish behavior or malicious nodes. Due to scalability, low complexity and disseminated nature of WSNs, malicious attacks can be modeled effectively using game theory. In this study, we survey the different game-theoretic defense strategies for WSNs. We present a taxonomy of the game theory approaches based on the nature of the attack, whether it is caused by an external attacker or it is the result of an internal node acting selfishly or maliciously. We also present a general trust model using game theory for decision making. We, finally, identify the significant role of evolutionary games for WSNs security against intelligent attacks; then, we list several prospect applications of game theory to enhance the data trustworthiness and node cooperation in different WSNs.
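
    A toy example of the kind of defender-versus-attacker formulation surveyed here: the payoff matrices below are invented numbers, and the code simply enumerates pure-strategy Nash equilibria by checking best responses; it stands in for no specific model from the survey.

        # Toy sensor-network defender vs. attacker game with hypothetical payoffs.
        import numpy as np

        # Rows: defender (0 = do not monitor, 1 = monitor); columns: attacker (0 = idle, 1 = attack).
        defender_payoff = np.array([[ 0.0, -5.0],
                                    [-1.0,  2.0]])
        attacker_payoff = np.array([[ 0.0,  4.0],
                                    [ 0.0, -3.0]])

        equilibria = []
        for d in range(2):
            for a in range(2):
                d_best = defender_payoff[d, a] >= defender_payoff[1 - d, a]
                a_best = attacker_payoff[d, a] >= attacker_payoff[d, 1 - a]
                if d_best and a_best:
                    equilibria.append((d, a))

        # An empty list means only mixed-strategy equilibria exist, as is typical of
        # inspection-style games; the mixed equilibrium is found by equalizing payoffs.
        print("pure-strategy Nash equilibria (defender, attacker):", equilibria)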

  3. Game Theory Meets Wireless Sensor Networks Security Requirements and Threats Mitigation: A Survey

    Directory of Open Access Journals (Sweden)

    Mohamed S. Abdalzaher

    2016-06-01

    Full Text Available We present a study of using game theory for protecting wireless sensor networks (WSNs) from selfish behavior or malicious nodes. Due to scalability, low complexity and disseminated nature of WSNs, malicious attacks can be modeled effectively using game theory. In this study, we survey the different game-theoretic defense strategies for WSNs. We present a taxonomy of the game theory approaches based on the nature of the attack, whether it is caused by an external attacker or it is the result of an internal node acting selfishly or maliciously. We also present a general trust model using game theory for decision making. We, finally, identify the significant role of evolutionary games for WSNs security against intelligent attacks; then, we list several prospect applications of game theory to enhance the data trustworthiness and node cooperation in different WSNs.

  4. Using a Bayesian network to clarify areas requiring research in a host-pathogen system.

    Science.gov (United States)

    Bower, D S; Mengersen, K; Alford, R A; Schwarzkopf, L

    2017-12-01

    Bayesian network analyses can be used to interactively change the strength of effect of variables in a model to explore complex relationships in new ways. In doing so, they allow one to identify influential nodes that are not well studied empirically so that future research can be prioritized. We identified relationships in host and pathogen biology to examine disease-driven declines of amphibians associated with amphibian chytrid fungus (Batrachochytrium dendrobatidis). We constructed a Bayesian network consisting of behavioral, genetic, physiological, and environmental variables that influence disease and used them to predict host population trends. We varied the impacts of specific variables in the model to reveal factors with the most influence on host population trend. The behavior of the nodes (the way in which the variables probabilistically responded to changes in states of the parents, which are the nodes or variables that directly influenced them in the graphical model) was consistent with published results. The frog population had a 49% probability of decline when all states were set at their original values, and this probability increased when body temperatures were cold, the immune system was not suppressing infection, and the ambient environment was conducive to growth of B. dendrobatidis. These findings suggest the construction of our model reflected the complex relationships characteristic of host-pathogen interactions. Changes to climatic variables alone did not strongly influence the probability of population decline, which suggests that climate interacts with other factors such as the capacity of the frog immune system to suppress disease. Changes to the adaptive immune system and disease reservoirs had a large effect on the population trend, but there was little empirical information available for model construction. Our model inputs can be used as a base to examine other systems, and our results show that such analyses are useful tools for
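
    The mechanics of querying such a network can be shown with a deliberately tiny, hand-specified example. The three nodes and their conditional probability tables below are hypothetical and far simpler than the published model; the point is only how changing the state of one variable shifts the probability of decline.

        # Tiny hand-coded Bayesian network with hypothetical conditional probability tables.
        # Nodes: T = body temperature (0 warm, 1 cold), I = immune suppression of infection
        # (0 suppressing, 1 not suppressing), D = population decline (0 no, 1 yes).
        import itertools

        p_T = {0: 0.5, 1: 0.5}
        p_I_given_T = {0: {0: 0.7, 1: 0.3},      # warm hosts suppress infection more often
                       1: {0: 0.3, 1: 0.7}}
        p_D_given_I = {0: {0: 0.8, 1: 0.2},      # suppression makes decline unlikely
                       1: {0: 0.3, 1: 0.7}}

        def prob_decline(evidence=None):
            """P(D = 1 | evidence) by brute-force enumeration of the joint distribution."""
            evidence = evidence or {}
            num = den = 0.0
            for t, i, d in itertools.product([0, 1], repeat=3):
                assignment = {"T": t, "I": i, "D": d}
                if any(evidence.get(k, v) != v for k, v in assignment.items()):
                    continue
                joint = p_T[t] * p_I_given_T[t][i] * p_D_given_I[i][d]
                den += joint
                num += joint if d == 1 else 0.0
            return num / den

        print("baseline P(decline):    ", round(prob_decline(), 3))          # ~0.45
        print("P(decline | cold hosts):", round(prob_decline({"T": 1}), 3))  # ~0.55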

  5. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; phenomena that bias quantification; 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods, results (attenuation, scattering, partial volume effect, movement, non-stationary spatial resolution in SPECT, random coincidences in PET, standardisation in PET); 3 - Synthesis: accessible efficiency, know-how, precautions, beyond the activity measurement

  6. 47 CFR 64.2009 - Safeguards required for use of customer proprietary network information.

    Science.gov (United States)

    2010-10-01

    ... other manner, of their own and their affiliates' sales and marketing campaigns that use their customers... outbound marketing request for customer approval. (e) A telecommunications carrier must have an officer, as... 47 Telecommunication 3 2010-10-01 2010-10-01 false Safeguards required for use of customer...

  7. Quantification In Neurology

    Directory of Open Access Journals (Sweden)

    Netravati M

    2005-01-01

    Full Text Available There has been a distinct shift of emphasis in clinical neurology in the last few decades. A few years ago, it was sufficient for a clinician to precisely record the history, document signs, establish a diagnosis and write a prescription. In the present context, there has been a significant intrusion of scientific culture into clinical practice. Several criteria have been proposed, refined and redefined to ascertain accurate diagnosis for many neurological disorders. The introduction of the concepts of impairment, disability, handicap and quality of life has added a new dimension to the measurement of health and disease, and neurological disorders are no exception. "Best guess" treatment modalities are no longer accepted, and evidence-based medicine has become an integral component of medical care. Traditional treatments need validation and new therapies require rigorous trials. Thus, proper quantification in neurology has become essential, both in practice and in research methodology. While this aspect is widely acknowledged, there is limited access to a comprehensive document pertaining to measurements in neurology. The following description is a critical appraisal of various measurements and also provides certain commonly used rating scales/scores in neurological practice.

  8. 47 CFR 27.16 - Network access requirements for Block C in the 746-757 and 776-787 MHz bands.

    Science.gov (United States)

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Network access requirements for Block C in the 746-757 and 776-787 MHz bands. 27.16 Section 27.16 Telecommunication FEDERAL COMMUNICATIONS COMMISSION... network restrictions on industry-wide consensus standards, such restrictions would be presumed reasonable...

  9. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for the quantification of chest abnormalities are usually based on computed tomography (CT) scans. This quantification is important for assessing the evolution and treatment of TB and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans that would be required. The purpose of this work is to develop a methodology for the quantification of lung damage caused by TB through chest radiographs. An algorithm for the computational processing of the examinations was developed in Matlab; it creates a 3D representation of the lungs, with the compromised dilated regions inside. The quantification of lung lesions was also performed for the same patients through CT scans. The measurements from the two methods were compared, resulting in a strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)
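
    The agreement check reported above follows the standard Bland-Altman recipe; a generic version (with simulated paired measurements, not the study's data) looks like this:

        # Bland-Altman agreement between two quantification methods (simulated data).
        import numpy as np

        rng = np.random.default_rng(3)
        ct_measure = rng.uniform(50, 400, size=20)                    # reference method (e.g. CT)
        cxr_measure = ct_measure * rng.normal(1.0, 0.13, size=20)     # second method, ~13% variation

        diff_pair = cxr_measure - ct_measure
        bias = diff_pair.mean()
        loa = 1.96 * diff_pair.std(ddof=1)                            # 95% limits of agreement
        outside = int(np.sum(np.abs(diff_pair - bias) > loa))

        print(f"bias = {bias:.1f}, limits of agreement = [{bias - loa:.1f}, {bias + loa:.1f}]")
        print(f"{outside} of {diff_pair.size} samples fall outside the limits")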

  10. Dialing long distance : communications to northern operations like the MGP require sophisticated satellite networks for voice, data

    Energy Technology Data Exchange (ETDEWEB)

    Cook, D.

    2006-04-15

    Telecommunications will play a major role in the construction of the Mackenzie Gas Project due to the remoteness of its location and the volume of communication data required to support the number of people involved and the amount of construction activity. While suppliers for communications tools have not yet been identified, initial telecommunications plans call for the installation of communication equipment at all camps, major facility sites and construction locations. Equipment will be housed in self-contained, climate-controlled buildings called telecommunication service modules (TSMs), which will be connected to each other as well as to existing public communications networks. The infrastructure will support telephone and fax systems; Internet and electronic mail services; multiple channel very high frequency radios; air-to-ground communication at airstrips and helipads; ship-to-shore at barge landings; closed circuit television; satellite community antenna television; CBC radio broadcast; public address systems; security systems; and supervisory control and data acquisition (SCADA) systems. An Internet Protocol (IP) network with a voice telephone system will be implemented along with a geostationary orbit satellite network. Satellite servers and real-time data services will be used. Car kits that allow calling, and battery-operated self-contained telemetry devices designed to communicate via a satellite system, have been commissioned for the project; these are capable of providing cost-efficient and reliable asset tracking and fleet management in remote regions and of assisting with deployment requirements. It was concluded that many of today's mega-projects are the driving factors behind new telecommunications solutions in remote areas. 1 fig.

  11. What is 5G? Emerging 5G Mobile Services and Network Requirements

    OpenAIRE

    Heejung Yu; Howon Lee; Hongbeom Jeon

    2017-01-01

    In this paper, emerging 5G mobile services are investigated and categorized not from the perspective of service providers, but from that of end-users. The development of 5G mobile services is based on an intensive analysis of the global trends in mobile services. Additionally, several indispensable service requirements, essential for realizing the service scenarios presented, are described. To illustrate the changes in societies and in daily life in the 5G era, five megatrends, including the explosion of mobi...

  12. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium...... is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding...

  13. In vivo MRS metabolite quantification using genetic optimization

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    The in vivo quantification of metabolite concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but they show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that casts the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with the overlapping of metabolite peaks, a considerably difficult situation occurring under real conditions. Appropriate experiments have proved the efficiency of the introduced methodology on artificial MRS data, establishing it as a generic metabolite quantification procedure.
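
    To make the optimization view concrete, the sketch below fits a single synthetic Lorentzian line with a bare-bones genetic algorithm. It is a drastically simplified stand-in: the paper's lineshape models, GA configurations and overlapping-peak handling are not reproduced here.

        # Minimal genetic-algorithm fit of one Lorentzian line to a synthetic spectrum.
        import numpy as np

        rng = np.random.default_rng(4)
        freq = np.linspace(-50, 50, 512)

        def lorentzian(params, f):
            amp, width, centre = params
            return amp * width**2 / ((f - centre)**2 + width**2)

        true_params = np.array([10.0, 3.0, 5.0])
        spectrum = lorentzian(true_params, freq) + rng.normal(0, 0.2, freq.size)

        def fitness(params):
            return -np.sum((lorentzian(params, freq) - spectrum)**2)   # higher is better

        lower = np.array([0.0, 0.1, -50.0])
        upper = np.array([50.0, 20.0, 50.0])
        pop = rng.uniform(lower, upper, size=(60, 3))

        for _ in range(200):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-20:]]                    # keep the fittest 20
            idx_a = rng.integers(0, 20, 60)
            idx_b = rng.integers(0, 20, 60)
            mask = rng.random((60, 3)) < 0.5                           # uniform crossover
            children = np.where(mask, parents[idx_a], parents[idx_b])
            children += rng.normal(0, 0.3, children.shape)             # Gaussian mutation
            pop = np.clip(children, lower, upper)

        best = pop[np.argmax([fitness(p) for p in pop])]
        print("estimated (amplitude, width, centre):", np.round(best, 2))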

  14. In vivo MRS metabolite quantification using genetic optimization

    International Nuclear Information System (INIS)

    Papakostas, G A; Mertzios, B G; Karras, D A; Van Ormondt, D; Graveron-Demilly, D

    2011-01-01

    The in vivo quantification of metabolite concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), have been presented lately with good results, but they show several drawbacks regarding their quantification accuracy under difficult conditions. A general framework that casts the quantification procedure as an optimization problem, solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with the overlapping of metabolite peaks, a considerably difficult situation occurring under real conditions. Appropriate experiments have proved the efficiency of the introduced methodology on artificial MRS data, establishing it as a generic metabolite quantification procedure.

  15. Quantification of growth-defense trade-offs in a common currency: nitrogen required for phenolamide biosynthesis is not derived from ribulose-1,5-bisphosphate carboxylase/oxygenase turnover.

    Science.gov (United States)

    Ullmann-Zeunert, Lynn; Stanton, Mariana A; Wielsch, Nathalie; Bartram, Stefan; Hummert, Christian; Svatoš, Aleš; Baldwin, Ian T; Groten, Karin

    2013-08-01

    Induced defenses are thought to be economical: growth- and fitness-limiting resources are only invested into defenses when needed. To date, this putative growth-defense trade-off has not been quantified in a common currency at the level of individual compounds. Here, a quantification method for ¹⁵N-labeled proteins enabled a direct comparison of nitrogen (N) allocation to proteins, specifically ribulose-1,5-bisphosphate carboxylase/oxygenase (RuBisCO), as a proxy for growth, with that to small N-containing defense metabolites (nicotine and phenolamides), as proxies for defense after herbivory. After repeated simulated herbivory, total N decreased in the shoots of wild-type (WT) Nicotiana attenuata plants, but not in two transgenic lines impaired in jasmonate defense signaling (irLOX3) and phenolamide biosynthesis (irMYB8). N was reallocated among different compounds within elicited rosette leaves: in the WT, a strong decrease in total soluble protein (TSP) and RuBisCO was accompanied by an increase in defense metabolites; irLOX3 showed a similar, albeit attenuated, pattern, whereas irMYB8 rosette leaves were the least responsive to elicitation, with overall higher levels of RuBisCO. Induced defenses were higher in the older compared with the younger rosette leaves, supporting the hypothesis that tissue developmental stage influences defense investments. We propose that MYB8, probably by regulating the production of phenolamides, indirectly mediates protein pool sizes after herbivory. Although the decrease in absolute N invested in TSP and RuBisCO elicited by simulated herbivory was much larger than the N requirements of nicotine and phenolamide biosynthesis, ¹⁵N flux studies revealed that N for phenolamide synthesis originates from recently assimilated N, rather than from RuBisCO turnover. © 2013 The Authors The Plant Journal © 2013 John Wiley & Sons Ltd.

  16. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    Directory of Open Access Journals (Sweden)

    Kevin McCluskey

    2017-08-01

    Full Text Available The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD. The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol.

  17. Networking

    OpenAIRE

    Rauno Lindholm, Daniel; Boisen Devantier, Lykke; Nyborg, Karoline Lykke; Høgsbro, Andreas; Fries, de; Skovlund, Louise

    2016-01-01

    The purpose of this project was to examine which factors have influenced the presumed increase in the use of networking among academics on the labour market, and how this is expressed. On the basis of the influence of globalization on the labour market, it can be concluded that globalization has transformed the labour market into a market based on the organization of networks. In this new organization there is a greater emphasis on employees having social qualificati...

  18. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    Science.gov (United States)

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes to adapt to the wire rope. To prepare the damaged wire-rope specimens, several different amounts of artificial damages were inflicted on wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including the enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing the unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold that was established based on the generalized extreme value (GEV) distribution. The detected MFL signals that exceed the threshold were analyzed quantitatively by extracting the magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified by using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. To analyze the reliability of the MFL-based automated wire rope NDE method, the accuracy and reliability were evaluated by comparing the repeatedly estimated damage size and the actual damage size.
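
    The envelope-and-threshold stage of such a pipeline can be sketched generically: a Hilbert-transform envelope is computed, a generalized extreme value (GEV) distribution is fitted to block maxima of a damage-free reference stretch, and a high quantile of that fit becomes the detection threshold. The signal below is synthetic and the ANN classification stage is omitted, so this is an illustration of the detection step only.

        # Hilbert envelope plus GEV-based threshold on a synthetic MFL-like signal.
        import numpy as np
        from scipy.signal import hilbert
        from scipy.stats import genextreme

        rng = np.random.default_rng(5)
        x = np.linspace(0, 10, 5000)                       # scan position along the rope [m]
        signal = rng.normal(0, 0.05, x.size)               # background magnetic noise
        signal += 1.0 * np.exp(-((x - 6.0) / 0.02)**2)     # a local leakage peak (damage)

        envelope = np.abs(hilbert(signal))                 # Hilbert-transform envelope

        # Fit a GEV distribution to block maxima of a damage-free reference stretch.
        reference = envelope[x < 4.0]
        block_max = reference[: reference.size // 50 * 50].reshape(-1, 50).max(axis=1)
        c, loc, scale = genextreme.fit(block_max)
        threshold = genextreme.ppf(0.999, c, loc=loc, scale=scale)

        detections = x[envelope > threshold]
        if detections.size:
            print(f"threshold = {threshold:.3f}; flagged region ~ {detections.min():.2f}-{detections.max():.2f} m")
        else:
            print("no damage detected")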

  19. Application of Fuzzy Comprehensive Evaluation Method in Trust Quantification

    Directory of Open Access Journals (Sweden)

    Shunan Ma

    2011-10-01

    Full Text Available Trust can play an important role for the sharing of resources and information in open network environments. Trust quantification is thus an important issue in dynamic trust management. By considering the fuzziness and uncertainty of trust, in this paper, we propose a fuzzy comprehensive evaluation method to quantify trust along with a trust quantification algorithm. Simulation results show that the trust quantification algorithm that we propose can effectively quantify trust and the quantified value of an entity's trust is consistent with the behavior of the entity.
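
    The generic fuzzy comprehensive evaluation step (a weight vector composed with a factor-to-grade membership matrix) is easy to show in code. The trust factors, grades, weights and memberships below are invented for illustration and are not taken from the article.

        # Generic fuzzy comprehensive evaluation of trust (hypothetical numbers).
        import numpy as np

        factors = ["packet delivery", "response delay", "recommendation history"]
        grades = ["untrusted", "neutral", "trusted"]

        # Membership of each factor in each trust grade (rows sum to 1).
        R = np.array([[0.1, 0.3, 0.6],
                      [0.2, 0.5, 0.3],
                      [0.1, 0.2, 0.7]])
        w = np.array([0.5, 0.2, 0.3])            # factor weights, sum to 1

        B = w @ R                                # weighted-average composition operator
        B = B / B.sum()                          # normalized evaluation vector
        print(dict(zip(grades, np.round(B, 3))))
        print("overall trust grade:", grades[int(np.argmax(B))])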

  20. Charging and billing in modern communications networks : A comprehensive survey of the state of art and future requirements

    NARCIS (Netherlands)

    Kuehne, Ralph; Huitema, George; Carle, George

    2012-01-01

    In mobile telecommunication networks the trend for an increasing heterogeneity of access networks, the convergence with fixed networks as well as with the Internet are apparent. The resulting future converged network with an expected wide variety of services and a possibly stiff competition between

  1. Charging and billing in modern communications networks : A comprehensive survey of the state of the art and future requirements

    NARCIS (Netherlands)

    Kühne, R.; Huitema, G.B.; Carle, G.

    2012-01-01

    In mobile telecommunication networks the trend for an increasing heterogeneity of access networks, the convergence with fixed networks as well as with the Internet are apparent. The resulting future converged network with an expected wide variety of services and a possibly stiff competition between

  2. ASIC-dependent LTP at multiple glutamatergic synapses in amygdala network is required for fear memory.

    Science.gov (United States)

    Chiang, Po-Han; Chien, Ta-Chun; Chen, Chih-Cheng; Yanagawa, Yuchio; Lien, Cheng-Chang

    2015-05-19

    Genetic variants in the human ortholog of acid-sensing ion channel-1a subunit (ASIC1a) gene are associated with panic disorder and amygdala dysfunction. Both fear learning and activity-induced long-term potentiation (LTP) of cortico-basolateral amygdala (BLA) synapses are impaired in ASIC1a-null mice, suggesting a critical role of ASICs in fear memory formation. In this study, we found that ASICs were differentially expressed within the amygdala neuronal population, and the extent of LTP at various glutamatergic synapses correlated with the level of ASIC expression in postsynaptic neurons. Importantly, selective deletion of ASIC1a in GABAergic cells, including amygdala output neurons, eliminated LTP in these cells and reduced fear learning to the same extent as that found when ASIC1a was selectively abolished in BLA glutamatergic neurons. Thus, fear learning requires ASIC-dependent LTP at multiple amygdala synapses, including both cortico-BLA input synapses and intra-amygdala synapses on output neurons.

  3. Integration of hormonal signaling networks and mobile microRNAs is required for vascular patterning in Arabidopsis roots

    KAUST Repository

    Muraro, D.

    2013-12-31

    As multicellular organisms grow, positional information is continually needed to regulate the pattern in which cells are arranged. In the Arabidopsis root, most cell types are organized in a radially symmetric pattern; however, a symmetry-breaking event generates bisymmetric auxin and cytokinin signaling domains in the stele. Bidirectional cross-talk between the stele and the surrounding tissues involving a mobile transcription factor, SHORT ROOT (SHR), and mobile microRNA species also determines vascular pattern, but it is currently unclear how these signals integrate. We use a multicellular model to determine a minimal set of components necessary for maintaining a stable vascular pattern. Simulations perturbing the signaling network show that, in addition to the mutually inhibitory interaction between auxin and cytokinin, signaling through SHR, microRNA165/6, and PHABULOSA is required to maintain a stable bisymmetric pattern. We have verified this prediction by observing loss of bisymmetry in shr mutants. The model reveals the importance of several features of the network, namely the mutual degradation of microRNA165/6 and PHABULOSA and the existence of an additional negative regulator of cytokinin signaling. These components form a plausible mechanism capable of patterning vascular tissues in the absence of positional inputs provided by the transport of hormones from the shoot.

  4. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.
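
    The basic computation behind such a reconstruction, flux balance analysis, reduces to a linear program: steady-state mass balances S v = 0, flux bounds, and an objective flux to maximize. The three-reaction toy network below is ours, not part of the S. pyogenes model, and serves only to show the mechanics.

        # Toy flux-balance analysis on a three-reaction network (illustrative only).
        import numpy as np
        from scipy.optimize import linprog

        # Reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass (exported).
        # Rows are the internal metabolites A and B; steady state requires S @ v = 0.
        S = np.array([[1.0, -1.0,  0.0],
                      [0.0,  1.0, -1.0]])
        bounds = [(0, 10.0), (0, 1000.0), (0, 1000.0)]   # uptake limited to 10 units

        c = np.array([0.0, 0.0, -1.0])                   # maximize the biomass flux v3
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal fluxes (R1, R2, R3):", res.x)     # expected: [10, 10, 10]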

  5. Integration of hormonal signaling networks and mobile microRNAs is required for vascular patterning in Arabidopsis roots

    KAUST Repository

    Muraro, D.; Mellor, N.; Pound, M. P.; Help, H.; Lucas, M.; Chopard, J.; Byrne, H. M.; Godin, C.; Hodgman, T. C.; King, J. R.; Pridmore, T. P.; Helariutta, Y.; Bennett, M. J.; Bishopp, A.

    2013-01-01

    As multicellular organisms grow, positional information is continually needed to regulate the pattern in which cells are arranged. In the Arabidopsis root, most cell types are organized in a radially symmetric pattern; however, a symmetry-breaking event generates bisymmetric auxin and cytokinin signaling domains in the stele. Bidirectional cross-talk between the stele and the surrounding tissues involving a mobile transcription factor, SHORT ROOT (SHR), and mobile microRNA species also determines vascular pattern, but it is currently unclear how these signals integrate. We use a multicellular model to determine a minimal set of components necessary for maintaining a stable vascular pattern. Simulations perturbing the signaling network show that, in addition to the mutually inhibitory interaction between auxin and cytokinin, signaling through SHR, microRNA165/6, and PHABULOSA is required to maintain a stable bisymmetric pattern. We have verified this prediction by observing loss of bisymmetry in shr mutants. The model reveals the importance of several features of the network, namely the mutual degradation of microRNA165/6 and PHABULOSA and the existence of an additional negative regulator of cytokinin signaling. These components form a plausible mechanism capable of patterning vascular tissues in the absence of positional inputs provided by the transport of hormones from the shoot.

  6. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...
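
    The underlying deconvolution problem can be written as a linear system: the tissue concentration curve is the arterial input function (AIF) convolved with the IRF. The sketch below recovers a toy IRF with plain Tikhonov-regularized least squares; it is not the Gaussian process method of this record, and all curves are synthetic.

        # DSC-MRI style deconvolution in miniature: recover the IRF from C_tissue = AIF * IRF.
        import numpy as np
        from scipy.linalg import toeplitz

        dt = 1.0                                              # sampling interval [s]
        t = np.arange(0, 60, dt)
        aif = (t / 5.0)**2 * np.exp(-t / 5.0)                 # toy arterial input function
        irf_true = np.exp(-t / 8.0)                           # toy residue function

        A = dt * toeplitz(aif, np.zeros_like(aif))            # lower-triangular convolution matrix
        rng = np.random.default_rng(6)
        c_tissue = A @ irf_true + rng.normal(0, 0.01, t.size) # noisy tissue concentration curve

        lam = 0.1                                             # Tikhonov regularization strength
        irf_est = np.linalg.solve(A.T @ A + lam * np.eye(t.size), A.T @ c_tissue)
        print("peak of the estimated IRF (proportional to flow):", irf_est.max())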

  7. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Thermosetting polyimide resin matrix composites with interpenetrating polymer networks for precision foil resistor chips based on special mechanical performance requirements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y., E-mail: wxy@tju.edu.cn [School of Electronic Information Engineering, Tianjin University, Tianjin 300072 (China); Ma, J.X.; Li, C.G. [School of Electronic Information Engineering, Tianjin University, Tianjin 300072 (China); Wang, H.X. [ZHENGHE electronics Co., Ltd, Jining 272023 (China)

    2014-04-01

    Highlights: • Macromolecular materials were chosen to modify thermosetting polyimide (TSPI). • The formation of IPN structure in TSPI composite polymers was discussed. • The special mechanical properties required were the main study object. • The desired candidate materials should have proper hardness and toughness. • The specific mechanical data are quantitatively determined by experiments. - Abstract: Based on interpenetrating networks (IPNs) different macromolecular materials such as epoxy, phenolic, and silicone resin were chosen to modify thermosetting polyimide (TSPI) resin to solve the lack of performance when used for protecting precision foil resistor chips. Copolymerization modification, controlled at curing stage, was used to prepare TSPI composites considering both performance and process requirements. The mechanical properties related to trimming process were mainly studied due to the special requirements of the regularity of scratch edges caused by a tungsten needle. The analysis on scratch edges reveals that the generation and propagation of microcracks caused by scratching together with crack closure effect may lead to regular scratch traces. Experiments show that the elongation at break of TSPI composites is the main reason that determines the special mechanical properties. The desired candidate materials should have proper hardness and toughness, and the specific mechanical data are that the mean elongation at break and tensile strength of polymer materials are in the range of 9.2–10.4% and 100–107 MPa, respectively. Possible reasons for the effect of the modifiers chosen on TSPI polymers, the reaction mechanisms on modified TSPI resin and the IPN structure in TSPI composite polymers were discussed based on IR and TG analysis.

  9. Thermosetting polyimide resin matrix composites with interpenetrating polymer networks for precision foil resistor chips based on special mechanical performance requirements

    International Nuclear Information System (INIS)

    Wang, X.Y.; Ma, J.X.; Li, C.G.; Wang, H.X.

    2014-01-01

    Highlights: • Macromolecular materials were chosen to modify thermosetting polyimide (TSPI). • The formation of IPN structure in TSPI composite polymers was discussed. • The special mechanical properties required were the main study object. • The desired candidate materials should have proper hardness and toughness. • The specific mechanical data are quantitatively determined by experiments. - Abstract: Based on interpenetrating networks (IPNs) different macromolecular materials such as epoxy, phenolic, and silicone resin were chosen to modify thermosetting polyimide (TSPI) resin to solve the lack of performance when used for protecting precision foil resistor chips. Copolymerization modification, controlled at curing stage, was used to prepare TSPI composites considering both performance and process requirements. The mechanical properties related to trimming process were mainly studied due to the special requirements of the regularity of scratch edges caused by a tungsten needle. The analysis on scratch edges reveals that the generation and propagation of microcracks caused by scratching together with crack closure effect may lead to regular scratch traces. Experiments show that the elongation at break of TSPI composites is the main reason that determines the special mechanical properties. The desired candidate materials should have proper hardness and toughness, and the specific mechanical data are that the mean elongation at break and tensile strength of polymer materials are in the range of 9.2–10.4% and 100–107 MPa, respectively. Possible reasons for the effect of the modifiers chosen on TSPI polymers, the reaction mechanisms on modified TSPI resin and the IPN structure in TSPI composite polymers were discussed based on IR and TG analysis

  10. On the Development of Methodology for Planning and Cost-Modeling of a Wide Area Network

    OpenAIRE

    Ahmedi, Basri; Mitrevski, Pece

    2014-01-01

    The most important stages in designing a computer network in a wider geographical area include: definition of requirements, topological description, identification and calculation of relevant parameters (i.e. traffic matrix), determining the shortest path between nodes, quantification of the effect of various levels of technical and technological development of urban areas involved, the cost of technology, and the cost of services. These parameters differ for WAN networks in different regions...
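
    Two of the stages listed above, shortest paths between nodes and a simple link-cost model, can be illustrated with networkx. The node names, distances and cost coefficients below are hypothetical and are not taken from the paper.

        # Shortest route and a toy link-cost model for a small WAN (hypothetical data).
        import networkx as nx

        G = nx.Graph()
        links = [("Skopje", "Tetovo", 42), ("Tetovo", "Gostivar", 25),
                 ("Skopje", "Veles", 54), ("Veles", "Bitola", 96), ("Gostivar", "Bitola", 120)]
        for u, v, km in links:
            G.add_edge(u, v, weight=km, cost=1500 + 80 * km)   # fixed termination + per-km cost

        path = nx.shortest_path(G, "Skopje", "Bitola", weight="weight")
        length = nx.shortest_path_length(G, "Skopje", "Bitola", weight="weight")
        route_cost = sum(G[u][v]["cost"] for u, v in zip(path, path[1:]))

        print("shortest route:", " -> ".join(path), f"({length} km)")
        print("link cost along the route:", route_cost)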

  11. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  12. Meeting the future metro network challenges and requirements by adopting programmable S-BVT with direct-detection and PDM functionality

    Science.gov (United States)

    Nadal, Laia; Svaluto Moreolo, Michela; Fàbrega, Josep M.; Vílchez, F. Javier

    2017-07-01

    In this paper, we propose an advanced programmable sliceable-bandwidth variable transceiver (S-BVT) with polarization division multiplexing (PDM) capability as a key enabler to fulfill the requirements for future 5G networks. Thanks to its cost-effective optoelectronic front-end based on orthogonal frequency division multiplexing (OFDM) technology and direct-detection (DD), the proposed S-BVT becomes suitable for next generation highly flexible and scalable metro networks. Polarization beam splitters (PBSs) and controllers (PCs), available on-demand, are included at the transceivers and at the network nodes, further enhancing the system flexibility and promoting an efficient use of the spectrum. 40G-100G PDM transmission has been experimentally demonstrated, within a 4-node photonic mesh network (ADRENALINE testbed), implementing a simplified equalization process.

  13. Optimal Information Processing in Biochemical Networks

    Science.gov (United States)

    Wiggins, Chris

    2012-02-01

    A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the `intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating calculation of such joint distributions and the associated information, which in turn makes possible numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
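
    A minimal worked sketch of the kind of calculation the abstract refers to: once a joint distribution over two nodes is fixed, the mutual information between them follows directly. The 3x3 joint table below is invented purely for illustration and is not taken from the talk.

```python
# Mutual information between two network nodes from an assumed joint
# distribution (hypothetical numbers, for illustration only).
import numpy as np

joint = np.array([[0.20, 0.05, 0.00],
                  [0.05, 0.30, 0.05],
                  [0.00, 0.05, 0.30]])   # P(x, y); entries sum to 1

px = joint.sum(axis=1, keepdims=True)    # marginal P(x)
py = joint.sum(axis=0, keepdims=True)    # marginal P(y)
nz = joint > 0                           # skip zero-probability terms
mi_bits = np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz]))
print(f"I(X;Y) = {mi_bits:.3f} bits")
```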

  14. Eliciting end users requirements of a supportive system for tacit knowledge management processes in value networks : A Delphi study

    NARCIS (Netherlands)

    Bagheri, S.; Kusters, R.J.; Trienekens, J.J.M.

    2018-01-01

    Co-creation value with the aim of enhancing customer experience - through providing integrated solutions - relies on networked collaborations of multiple service providers and customers within value network (VN) settings. The customer-centric view of such collaborations highlights the importance of

  15. Eliciting end users requirements of a supportive system for tacit knowledge management processes in value networks : a Delphi study

    NARCIS (Netherlands)

    Bagheri, S.; Kusters, R.J.; Trienekens, J.J.M.

    2017-01-01

    —Co-creation value with the aim of enhancing customer experience—through providing integrated solutions— relies on networked collaborations of multiple service providers and customers within value network (VN) settings. The customer-centric view of such collaborations highlights the importance of

  16. A Study on the Quantitative Assessment Method of Software Requirement Documents Using Software Engineering Measures and Bayesian Belief Networks

    International Nuclear Information System (INIS)

    Eom, Heung Seop; Kang, Hyun Gook; Park, Ki Hong; Kwon, Kee Choon; Chang, Seung Cheol

    2005-01-01

    One of the major challenges in using the digital systems in a NPP is the reliability estimation of safety critical software embedded in the digital safety systems. Precise quantitative assessment of the reliability of safety critical software is nearly impossible, since many of the aspects to be considered are of qualitative nature and not directly measurable, but they have to be estimated for a practical use. Therefore an expert's judgment plays an important role in estimating the reliability of the software embedded in safety-critical systems in practice, because they can deal with all the diverse evidence relevant to the reliability and can perform an inference based on the evidence. But, in general, the experts' way of combining the diverse evidence and performing an inference is usually informal and qualitative, which is hard to discuss and will eventually lead to a debate about the conclusion. We have been carrying out research on a quantitative assessment of the reliability of safety critical software using Bayesian Belief Networks (BBN). BBN has been proven to be a useful modeling formalism because a user can represent a complex set of events and relationships in a fashion that can easily be interpreted by others. In the previous works we have assessed a software requirement specification of a reactor protection system by using our BBN-based assessment model. The BBN model mainly employed an expert's subjective probabilities as inputs. In the process of assessing the software requirement documents we found out that the BBN model was excessively dependent on experts' subjective judgments in a large part. Therefore, to overcome the weakness of our methodology we employed conventional software engineering measures into the BBN model as shown in this paper. The quantitative relationship between the conventional software measures and the reliability of software were not identified well in the past. Then recently there appeared a few researches on a ranking of

  17. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes

  18. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....

  19. Simulation Network for Test and Evaluation of Defense Systems. Phase I. Survey of DoD Testbed Requirements,

    Science.gov (United States)

    1983-05-15

    Interconnection (ISO OSI) is the model used as a guide for this introduction to network protocols [30] T. Utsumi, "GLOSAS Project (GLObal Systems...Analysis and Simulation)," Proceedings of the 1980 Winter Simulation Conference, Orlando, FL, December, 1980, pp. 165-217. GLOSAS Project proposes the

  20. Telecommunication networks

    CERN Document Server

    Iannone, Eugenio

    2011-01-01

    Many argue that telecommunications network infrastructure is the most impressive and important technology ever developed. Analyzing the telecom market's constantly evolving trends, research directions, infrastructure, and vital needs, Telecommunication Networks responds with revolutionized engineering strategies to optimize network construction. Omnipresent in society, telecom networks integrate a wide range of technologies. These include quantum field theory for the study of optical amplifiers, software architectures for network control, abstract algebra required to design error correction co

  1. Late Noachian fluvial erosion on Mars: Cumulative water volumes required to carve the valley networks and grain size of bed-sediment

    Science.gov (United States)

    Rosenberg, Eliott N.; Head, James W., III

    2015-11-01

    Our goal is to quantify the cumulative water volume that was required to carve the Late Noachian valley networks on Mars. We employ an improved methodology in which fluid/sediment flux ratios are based on empirical data, not assumed. We use a large quantity of data from terrestrial rivers to assess the variability of actual fluid/sediment flux ratios. We find the flow depth by using an empirical relationship to estimate the fluid flux from the estimated channel width, and then using estimated grain sizes (theoretical sediment grain size predictions and comparison with observations by the Curiosity rover) to find the flow depth to which the resulting fluid flux corresponds. Assuming that the valley networks contained alluvial bed rivers, we find, from their current slopes and widths, that the onset of suspended transport occurs near the sand-gravel boundary. Thus, any bed sediment must have been fine gravel or coarser, whereas fine sediment would be carried downstream. Subsequent to the cessation of fluvial activity, aeolian processes have partially redistributed fine-grain particles in the valleys, often forming dunes. It seems likely that the dominant bed sediment size was near the threshold for suspension, and assuming that this was the case could make our final results underestimates, which is the same tendency that our other assumptions have. Making this assumption, we find a global equivalent layer (GEL) of 3-100 m of water to be the most probable cumulative volume that passed through the valley networks. This value is similar to the ∼34 m water GEL currently on the surface and in the near-surface in the form of ice. Note that the amount of water required to carve the valley networks could represent the same water recycled through a surface valley network hydrological system many times in separate or continuous precipitation/runoff/collection/evaporation/precipitation cycles.
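
    To make the final conversion concrete, the sketch below turns a cumulative water volume into a global equivalent layer (GEL) by dividing by the surface area of Mars. The eroded-sediment volume and the fluid/sediment flux ratio are hypothetical placeholders, not values from the study.

```python
# Hedged sketch: cumulative water volume expressed as a global equivalent
# layer (GEL). Input volume and flux ratio are invented placeholders.
import math

MARS_RADIUS_M = 3.3895e6                              # mean radius of Mars [m]
MARS_SURFACE_AREA_M2 = 4.0 * math.pi * MARS_RADIUS_M ** 2

def global_equivalent_layer(sediment_volume_m3: float,
                            fluid_to_sediment_ratio: float) -> float:
    """Water volume spread evenly over the planet, in metres."""
    water_volume_m3 = sediment_volume_m3 * fluid_to_sediment_ratio
    return water_volume_m3 / MARS_SURFACE_AREA_M2

# Hypothetical inputs: 1e13 m^3 of excavated valley sediment and a
# volumetric fluid/sediment flux ratio of 1000.
print(f"GEL = {global_equivalent_layer(1e13, 1000.0):.0f} m")
```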

  2. Making big communities small: using network science to understand the ecological and behavioral requirements for community social capital.

    Science.gov (United States)

    Neal, Zachary

    2015-06-01

    The concept of social capital is becoming increasingly common in community psychology and elsewhere. However, the multiple conceptual and operational definitions of social capital challenge its utility as a theoretical tool. The goals of this paper are to clarify two forms of social capital (bridging and bonding), explicitly link them to the structural characteristics of small world networks, and explore the behavioral and ecological prerequisites of its formation. First, I use the tools of network science and specifically the concept of small-world networks to clarify what patterns of social relationships are likely to facilitate social capital formation. Second, I use an agent-based model to explore how different ecological characteristics (diversity and segregation) and behavioral tendencies (homophily and proximity) impact communities' potential for developing social capital. The results suggest diverse communities have the greatest potential to develop community social capital, and that segregation moderates the effects that the behavioral tendencies of homophily and proximity have on community social capital. The discussion highlights how these findings provide community-based researchers with both a deeper understanding of the contextual constraints with which they must contend, and a useful tool for targeting their efforts in communities with the greatest need or greatest potential.
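
    As a structural illustration of the small-world framing used above, the sketch below computes the two standard small-world metrics, average clustering (often read as a bonding proxy) and average shortest path length (a bridging proxy), on a Watts-Strogatz graph. It is not the author's agent-based model; the graph parameters are arbitrary.

```python
# Small-world metrics on a synthetic network (illustrative only).
import networkx as nx

def small_world_summary(n: int = 200, k: int = 8, p: float = 0.1):
    """Build a connected Watts-Strogatz graph and report its metrics."""
    g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    clustering = nx.average_clustering(g)              # "bonding" proxy
    path_length = nx.average_shortest_path_length(g)   # "bridging" proxy
    return clustering, path_length

c, l = small_world_summary()
print(f"average clustering: {c:.3f}, average path length: {l:.2f}")
```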

  3. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
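
    For readers unfamiliar with forward uncertainty propagation, the toy sketch below pushes random samples of the angle of attack and Mach number through a made-up surrogate for the lift coefficient and reports output statistics. The surrogate is a placeholder, not the TAU code nor any of the five methods compared in the paper.

```python
# Plain Monte Carlo propagation of two uncertain inputs through a toy
# surrogate model (placeholder; illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def surrogate_lift(alpha_deg, mach):
    """Invented response surface for the lift coefficient."""
    return 0.11 * alpha_deg * (1.0 + 0.3 * (mach - 0.7))

alpha = rng.normal(2.0, 0.1, 10_000)     # uncertain angle of attack [deg]
mach = rng.normal(0.73, 0.005, 10_000)   # uncertain Mach number

cl = surrogate_lift(alpha, mach)
print(f"mean C_L = {cl.mean():.4f}, std C_L = {cl.std():.4f}")
```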

  4. Intelligent Traffic Quantification System

    Science.gov (United States)

    Mohanty, Anita; Bhanja, Urmila; Mahapatra, Sudipta

    2017-08-01

    Currently, traffic monitoring and control is a major issue in almost all cities worldwide. The vehicular ad hoc network (VANET) technique is an efficient tool for mitigating this problem. Usually, different types of on-board sensors are installed in vehicles to generate messages characterized by different vehicle parameters. In this work, an intelligent system based on a fuzzy clustering technique is developed to reduce the number of individual messages by extracting important features from the messages of a vehicle. The proposed fuzzy clustering technique thereby reduces the traffic load on the network, and it also reduces and quantifies congestion.
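
    The sketch below shows, in very reduced form, how a fuzzy c-means step can compress many per-vehicle messages into a few cluster prototypes. It is a generic illustration on invented data (speed and headway pairs), not the system described in the paper.

```python
# Minimal fuzzy c-means sketch (not the authors' implementation): toy
# vehicle messages (speed km/h, headway m) are reduced to a few cluster
# prototypes, illustrating how clustering can compress message traffic.
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
messages = np.vstack([
    rng.normal([20, 10], [3, 2], (50, 2)),   # congested traffic
    rng.normal([60, 40], [5, 5], (50, 2)),   # free flow
])
centers, memberships = fuzzy_c_means(messages, n_clusters=2)
print("cluster prototypes (speed, headway):\n", centers.round(1))
```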

  5. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that verb aspect pairs are different lexical units with different (although related) meaning, different argument structure (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes resulting in the derivation of perfective verbs can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of the lexical quantification by means of verbal prefixes is the quantified verb phrase, and that the scope remains constant in all derived alternations. The paper addresses the basic issues of these complex problems, while a detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  6. Data leakage quantification

    NARCIS (Netherlands)

    Vavilis, S.; Petkovic, M.; Zannone, N.; Atluri, V.; Pernul, G.

    2014-01-01

    The detection and handling of data leakages is becoming a critical issue for organizations. To this end, data leakage solutions are usually employed by organizations to monitor network traffic and the use of portable storage devices. These solutions often produce a large number of alerts, whose

  7. Quantification of informed opinion

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1985-01-01

    The objective of this session, Quantification of Informed Opinion, is to provide the statistician with a better understanding of this important area. The NRC uses informed opinion, sometimes called engineering judgment or subjective judgment, in many areas. Sometimes informed opinion is the only source of information that exists, especially in phenomenological areas, such as steam explosions, where experiments are costly and phenomena are very difficult to measure. There are many degrees of informed opinion. These vary from the weatherman who makes predictions concerning relatively high probability events with a large data base to the phenomenological expert who must use his intuition tempered with basic knowledge and little or no measured data to predict the behavior of events with a low probability of occurrence. The first paper in this session provides the reader with an overview of the subject area. The second paper provides some aspects that must be considered in the collection of informed opinion to improve the quality of the information. The final paper contains an example of the use of informed opinion in the area of seismic hazard characterization. These papers should be useful to researchers and statisticians who need to collect and use informed opinion in their work

  8. Quantification of competitive value of documents

    Directory of Open Access Journals (Sweden)

    Pavel Šimek

    2009-01-01

    Full Text Available The majority of Internet users use the global network to search for information with full-text search engines such as Google, Yahoo!, or Seznam. Website operators try, with the help of different optimization techniques, to reach the top places in the results of these engines. This is where Search Engine Optimization and Search Engine Marketing matter most, because typical users only follow links on the first few result pages for a given keyword, and in catalogs they primarily use the links placed higher in each category. The key to success is the application of optimization methods that address keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backward links. The process is demanding, long-lasting and without a guaranteed outcome. Without advanced analytical tools, a website operator cannot identify the contribution of the individual documents of which the entire web site consists. If operators want an overview of their documents and of the web site as a whole, it is appropriate to quantify these positions in a specific way for specific keywords. This is the purpose of the quantification of the competitive value of documents, which in turn yields the global competitive value of a web site. Quantification of competitive values is performed against a specific full-text search engine; each engine can, and often does, return different results. According to published reports by the ClickZ agency and Market Share, Google is, by number of searches from English-speaking users, the most widely used search engine, with a market share of more than 80%. The overall procedure for quantifying competitive values is generic; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.
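
    A toy illustration of the idea is given below: each document earns a position-dependent weight for every keyword it ranks for, and the site-level competitive value is the sum over documents. The weighting function, URLs and ranks are invented for the example and are not the article's actual formula.

```python
# Toy "competitive value" scoring (invented weighting, illustration only).
from collections import defaultdict

def position_weight(rank: int) -> float:
    """Hypothetical weight: rank 1 -> 1.0, decaying with position."""
    return 1.0 / rank if rank >= 1 else 0.0

# keyword -> {document URL: rank in the full-text search engine results}
rankings = {
    "organic honey": {"/products/honey": 3, "/blog/bee-keeping": 12},
    "buy beeswax":   {"/products/beeswax": 1},
}

doc_scores = defaultdict(float)
for keyword, docs in rankings.items():
    for url, rank in docs.items():
        doc_scores[url] += position_weight(rank)

site_score = sum(doc_scores.values())
print(dict(doc_scores), f"site competitive value = {site_score:.2f}")
```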

  9. Automated Quantification of Pneumothorax in CT

    Science.gov (United States)

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091
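
    The volumetric step itself is straightforward once a segmentation is available; the sketch below derives absolute and relative pneumothorax size from binary masks and voxel spacing. The masks are synthetic placeholders, not output of the CAD algorithm.

```python
# Volumetry from binary CT masks (synthetic data; illustrative only).
import numpy as np

rng = np.random.default_rng(0)
voxel_spacing_mm = (0.7, 0.7, 1.25)                  # (x, y, z) spacing
voxel_volume_ml = float(np.prod(voxel_spacing_mm)) / 1000.0

lung_mask = rng.random((64, 64, 64)) < 0.30          # placeholder lung mask
pneumo_mask = ~lung_mask & (rng.random((64, 64, 64)) < 0.05)

pneumo_ml = pneumo_mask.sum() * voxel_volume_ml
lung_ml = lung_mask.sum() * voxel_volume_ml
relative = 100.0 * pneumo_ml / (pneumo_ml + lung_ml)
print(f"pneumothorax: {pneumo_ml:.1f} mL ({relative:.1f} % of hemithorax)")
```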

  10. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10

  11. GMO quantification: valuable experience and insights for the future.

    Science.gov (United States)

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.
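
    As a numerical illustration of the qPCR quantification step, the sketch below applies the familiar delta-delta-Cq rule, assuming roughly 100% amplification efficiency and a certified calibrator of known GMO content; all Cq values are invented.

```python
# Delta-delta-Cq relative GMO quantification (hypothetical Cq values,
# ~100% efficiency assumed; illustration only, not from the review).

def gmo_percent(cq_event_sample: float, cq_taxon_sample: float,
                cq_event_cal: float, cq_taxon_cal: float,
                calibrator_gmo_percent: float) -> float:
    delta_sample = cq_event_sample - cq_taxon_sample   # event vs. taxon gene
    delta_cal = cq_event_cal - cq_taxon_cal
    return calibrator_gmo_percent * 2.0 ** (-(delta_sample - delta_cal))

print(f"{gmo_percent(30.1, 24.0, 28.0, 24.1, calibrator_gmo_percent=1.0):.2f} % GMO")
```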

  12. Estimation of the Required Modeling Depth for the Simulation of Cable Switching in a Cable-based Network

    DEFF Research Database (Denmark)

    Silva, Filipe Faria Da; Bak, Claus Leth; Balle Holst, Per

    2012-01-01

    . If the area is too large, the simulation requires a long period of time and numerical problems are more likely to exist. This paper proposes a method that can be used to estimate the depth of the modeling area using the grid layout, which can be obtained directly from a PSS/E file, or equivalent...

  13. Artificial Neural Network Analysis of Xinhui Pericarpium Citri ...

    African Journals Online (AJOL)

    Methods: Artificial neural network (ANN) models, including general regression neural network (GRNN) and multi-layer ... N-hexane (HPLC grade) was purchased from Fisher Scientific. ..... Simultaneous Quantification of Seven Flavonoids in.

  14. Network Simulation

    CERN Document Server

    Fujimoto, Richard

    2006-01-01

    "Network Simulation" presents a detailed introduction to the design, implementation, and use of network simulation tools. Discussion topics include the requirements and issues faced for simulator design and use in wired networks, wireless networks, distributed simulation environments, and fluid model abstractions. Several existing simulations are given as examples, with details regarding design decisions and why those decisions were made. Issues regarding performance and scalability are discussed in detail, describing how one can utilize distributed simulation methods to increase the

  15. The 5’cap of Tobacco Mosaic Virus (TMV) is required for virion attachment to the actin/ER network during early infection

    DEFF Research Database (Denmark)

    Christensen, Nynne Meyn; Tilsner, Jens; Bell, Karen

    to the motile cortical actin/ER network within minutes of injection. Granule movement on actin/ER was arrested by actin inhibitors indicating actindependent RNA movement. The 5’ methylguanosine TMV cap was shown to be required for vRNA anchoring to the ER. TMV vRNA lacking the 5’cap failed to form granules...... the fluorescent vRNA pool nor co-injected GFP left the injected trichome, indicating that the synthesis of unlabelled progeny viral (v)RNA is required to initiate cell-cell movement, and that virus movement is not accompanied by passive plasmodesmatal gating. Cy3-vRNA formed granules that became anchored...... on the same ER-bound granules, indicating that TMV virions may become attached to the ER prior to uncoating of the viral genome....

  16. Is Your Biobank Up to Standards? A Review of the National Canadian Tissue Repository Network Required Operational Practice Standards and the Controlled Documents of a Certified Biobank.

    Science.gov (United States)

    Hartman, Victoria; Castillo-Pelayo, Tania; Babinszky, Sindy; Dee, Simon; Leblanc, Jodi; Matzke, Lise; O'Donoghue, Sheila; Carpenter, Jane; Carter, Candace; Rush, Amanda; Byrne, Jennifer; Barnes, Rebecca; Mes-Messons, Anne-Marie; Watson, Peter

    2018-02-01

    Ongoing quality management is an essential part of biobank operations and the creation of high quality biospecimen resources. Adhering to the standards of a national biobanking network is a way to reduce variability between individual biobank processes, resulting in cross biobank compatibility and more consistent support for health researchers. The Canadian Tissue Repository Network (CTRNet) implemented a set of required operational practices (ROPs) in 2011 and these serve as the standards and basis for the CTRNet biobank certification program. A review of these 13 ROPs covering 314 directives was conducted after 5 years to identify areas for revision and update, leading to changes to 7/314 directives (2.3%). A review of all internal controlled documents (including policies, standard operating procedures and guides, and forms for actions and processes) used by the BC Cancer Agency's Tumor Tissue Repository (BCCA-TTR) to conform to these ROPs was then conducted. Changes were made to 20/106 (19%) of BCCA-TTR documents. We conclude that a substantial fraction of internal controlled documents require updates at regular intervals to accommodate changes in best practices. Reviewing documentation is an essential aspect of keeping up to date with best practices and ensuring the quality of biospecimens and data managed by biobanks.

  17. Damage Localization and Quantification of Earthquake Excited RC-Frames

    DEFF Research Database (Denmark)

    Skjærbæk, P.S.; Nielsen, Søren R.K.; Kirkegaard, Poul Henning

    In the paper a recently proposed method for damage localization and quantification of RC-structures from response measurements is tested on experimental data. The method investigated requires at least one response measurement along the structure and the ground surface acceleration. Further, the t...

  18. Double-layer Tablets of Lornoxicam: Validation of Quantification ...

    African Journals Online (AJOL)

    Double-layer Tablets of Lornoxicam: Validation of Quantification Method, In vitro Dissolution and Kinetic Modelling. ... Satisfactory results were obtained and all the tablet formulations met compendial requirements. The slowest drug release rate was obtained with tablet cores based on PVP K90 (1.21 mg%.h-1).

  19. Protocol for Quantification of Defects in Natural Fibres for Composites

    DEFF Research Database (Denmark)

    Mortensen, Ulrich Andreas; Madsen, Bo

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for being used for structural composite applications where high quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based...

  20. SPDEF is required for mouse pulmonary goblet cell differentiation and regulates a network of genes associated with mucus production.

    Science.gov (United States)

    Chen, Gang; Korfhagen, Thomas R; Xu, Yan; Kitzmiller, Joseph; Wert, Susan E; Maeda, Yutaka; Gregorieff, Alexander; Clevers, Hans; Whitsett, Jeffrey A

    2009-10-01

    Various acute and chronic inflammatory stimuli increase the number and activity of pulmonary mucus-producing goblet cells, and goblet cell hyperplasia and excess mucus production are central to the pathogenesis of chronic pulmonary diseases. However, little is known about the transcriptional programs that regulate goblet cell differentiation. Here, we show that SAM-pointed domain-containing Ets-like factor (SPDEF) controls a transcriptional program critical for pulmonary goblet cell differentiation in mice. Initial cell-lineage-tracing analysis identified nonciliated secretory epithelial cells, known as Clara cells, as the progenitors of goblet cells induced by pulmonary allergen exposure in vivo. Furthermore, in vivo expression of SPDEF in Clara cells caused rapid and reversible goblet cell differentiation in the absence of cell proliferation. This was associated with enhanced expression of genes regulating goblet cell differentiation and protein glycosylation, including forkhead box A3 (Foxa3), anterior gradient 2 (Agr2), and glucosaminyl (N-acetyl) transferase 3, mucin type (Gcnt3). Consistent with these findings, levels of SPDEF and FOXA3 were increased in mouse goblet cells after sensitization with pulmonary allergen, and the proteins were colocalized in goblet cells lining the airways of patients with chronic lung diseases. Deletion of the mouse Spdef gene resulted in the absence of goblet cells in tracheal/laryngeal submucosal glands and in the conducting airway epithelium after pulmonary allergen exposure in vivo. These data show that SPDEF plays a critical role in regulating a transcriptional network mediating the goblet cell differentiation and mucus hyperproduction associated with chronic pulmonary disorders.

  1. Arrays of microLEDs and astrocytes: biological amplifiers to optogenetically modulate neuronal networks reducing light requirement.

    Directory of Open Access Journals (Sweden)

    Rolando Berlinguer-Palmini

    Full Text Available In the modern view of synaptic transmission, astrocytes are no longer confined to the role of merely supportive cells. Although they do not generate action potentials, they nonetheless exhibit electrical activity and can influence surrounding neurons through gliotransmitter release. In this work, we explored whether optogenetic activation of glial cells could act as an amplification mechanism for optical neural stimulation via gliotransmission to the neural network. We studied the modulation of gliotransmission by selective photo-activation of channelrhodopsin-2 (ChR2) and by means of a matrix of individually addressable super-bright microLEDs (μLEDs) with an excitation peak at 470 nm. We combined Ca2+ imaging techniques and concurrent patch-clamp electrophysiology to obtain subsequent glial/neural activity. First, we tested the μLEDs' efficacy in stimulating ChR2-transfected astrocytes. The ChR2-induced astrocytic current did not desensitize over time, and was linearly increased and prolonged by increasing μLED irradiance in terms of intensity and surface illumination. Subsequently, ChR2 astrocytic stimulation by broad-field LED illumination with the same spectral profile increased both glial and neuronal calcium transient frequency and sEPSCs, suggesting that a few ChR2-transfected astrocytes were able to excite surrounding non-ChR2-transfected astrocytes and neurons. Finally, by using the μLED array to selectively light-stimulate ChR2-positive astrocytes, we were able to increase the synaptic activity of single neurons surrounding them. In conclusion, ChR2-transfected astrocytes and the μLED system were shown to act as an amplifier of synaptic activity in mixed cortical neuronal and glial cell cultures.

  2. Arrays of microLEDs and astrocytes: biological amplifiers to optogenetically modulate neuronal networks reducing light requirement.

    Science.gov (United States)

    Berlinguer-Palmini, Rolando; Narducci, Roberto; Merhan, Kamyar; Dilaghi, Arianna; Moroni, Flavio; Masi, Alessio; Scartabelli, Tania; Landucci, Elisa; Sili, Maria; Schettini, Antonio; McGovern, Brian; Maskaant, Pleun; Degenaar, Patrick; Mannaioni, Guido

    2014-01-01

    In the modern view of synaptic transmission, astrocytes are no longer confined to the role of merely supportive cells. Although they do not generate action potentials, they nonetheless exhibit electrical activity and can influence surrounding neurons through gliotransmitter release. In this work, we explored whether optogenetic activation of glial cells could act as an amplification mechanism for optical neural stimulation via gliotransmission to the neural network. We studied the modulation of gliotransmission by selective photo-activation of channelrhodopsin-2 (ChR2) and by means of a matrix of individually addressable super-bright microLEDs (μLEDs) with an excitation peak at 470 nm. We combined Ca2+ imaging techniques and concurrent patch-clamp electrophysiology to obtain subsequent glial/neural activity. First, we tested the μLEDs' efficacy in stimulating ChR2-transfected astrocytes. The ChR2-induced astrocytic current did not desensitize over time, and was linearly increased and prolonged by increasing μLED irradiance in terms of intensity and surface illumination. Subsequently, ChR2 astrocytic stimulation by broad-field LED illumination with the same spectral profile increased both glial and neuronal calcium transient frequency and sEPSCs, suggesting that a few ChR2-transfected astrocytes were able to excite surrounding non-ChR2-transfected astrocytes and neurons. Finally, by using the μLED array to selectively light-stimulate ChR2-positive astrocytes, we were able to increase the synaptic activity of single neurons surrounding them. In conclusion, ChR2-transfected astrocytes and the μLED system were shown to act as an amplifier of synaptic activity in mixed cortical neuronal and glial cell cultures.

  3. THESEUS: A wavelength division multiplexed/microwave subcarrier multiplexed optical network, its ATM switch applications and device requirements

    Science.gov (United States)

    Xin, Wei

    1997-10-01

    A Terabit Hybrid Electro-optical Self-routing Ultrafast Switch (THESEUS) has been proposed. It is a self-routing wavelength division multiplexed (WDM) / microwave subcarrier multiplexed (SCM) asynchronous transfer mode (ATM) switch for multirate ATM networks. It has the potential to be extended to a large ATM switch, such as 1000 x 1000, without internal blocking. Among the advantages of the hybrid implementation are flexibility in service upgrades, relaxed tolerances on optical filtering, protocol simplification and lower processing overhead. For a small ATM switch, the subcarriers can be used as output buffers to resolve output contention. A mathematical analysis was conducted to evaluate different buffer configurations. A testbed has been successfully constructed. Multirate binary data streams have been switched through the testbed and error-free reception (<10^-9 bit error rate) has been achieved. A simple, intuitive theoretical model has been developed to describe heterodyne optical beat interference, and a new concept of interference time and interference length has been introduced. An experimental confirmation has been conducted, and the experimental results match the model very well. The model shows that a large portion of the optical bandwidth is wasted due to beat interference; based on the model, several improvement approaches have been proposed. The photo-generated carrier lifetime of silicon germanium has been measured using time-resolved reflectivity measurements. Via oxygen ion implantation, the carrier lifetime has been reduced to as short as 1 ps, corresponding to 1 THz of photodetector bandwidth. It has also been shown that copper dopants act as recombination centers in silicon germanium.

  4. Network Security Guideline

    Science.gov (United States)

    1993-06-01

    3.2.15.3 ISDN Services over the Telephone Network 3 Integrated Services Digital Network (ISDN) services are subject to the same restrictions as router...to be audited: [SYS$SYSTEM]SYS.EXE, LOGINOUT.EXE, STARTUP.COM, RIGHTSLIST.DAT [SYS$LIBRARY] SECURESHR.EXE [SYS$ROOT] SYSEXE.DIR, SYSLIB.DIR...quantification) of the encoded value; ASCII is normally used for asynchronous transmission. compare with digital. ASYNCHRONOUS-Data transmission that is

  5. Iron overload in the liver diagnostic and quantification

    International Nuclear Information System (INIS)

    Alustiza, Jose M.; Castiella, Agustin; Juan, Maria D. de; Emparanza, Jose I.; Artetxe, Jose; Uranga, Maite

    2007-01-01

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification

  6. Iron overload in the liver diagnostic and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)

    2007-03-15

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  7. Embedded XML DOM Parser: An Approach for XML Data Processing on Networked Embedded Systems with Real-Time Requirements

    Directory of Open Access Journals (Sweden)

    Cavia Soto MAngeles

    2008-01-01

    Full Text Available Abstract Trends in control and automation show an increase in data processing and communication in embedded automation controllers. The eXtensible Markup Language (XML is emerging as a dominant data syntax, fostering interoperability, yet little is still known about how to provide predictable real-time performance in XML processing, as required in the domain of industrial automation. This paper presents an XML processor that is designed with such real-time performance in mind. The publication attempts to disclose insight gained in applying techniques such as object pooling and reuse, and other methods targeted at avoiding dynamic memory allocation and its consequent memory fragmentation. Benchmarking tests are reported in order to illustrate the benefits of the approach.
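
    The object pooling and reuse technique mentioned above can be summarised in a few lines; the sketch below is a generic pool of element objects (not the paper's parser) that bounds allocation during steady-state parsing by recycling nodes instead of creating new ones.

```python
# Minimal object-pool sketch for the reuse technique (illustrative only).
class ElementNode:
    __slots__ = ("tag", "attributes", "children")

    def reset(self, tag: str = "") -> "ElementNode":
        self.tag = tag
        self.attributes = {}
        self.children = []
        return self

class ElementPool:
    def __init__(self, capacity: int):
        # Pre-allocate nodes up front so steady-state parsing reuses them.
        self._free = [ElementNode().reset() for _ in range(capacity)]

    def acquire(self, tag: str) -> ElementNode:
        node = self._free.pop() if self._free else ElementNode()
        return node.reset(tag)

    def release(self, node: ElementNode) -> None:
        self._free.append(node.reset())

pool = ElementPool(capacity=128)
root = pool.acquire("message")
root.children.append(pool.acquire("payload"))
for child in root.children:
    pool.release(child)
pool.release(root)
```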

  8. CCN2 is required for the TGF-β induced activation of Smad1-Erk1/2 signaling network.

    Directory of Open Access Journals (Sweden)

    Sashidhar S Nakerakanti

    Full Text Available Connective tissue growth factor (CCN2) is a multifunctional matricellular protein, which is frequently overexpressed during organ fibrosis. CCN2 is a mediator of the pro-fibrotic effects of TGF-β in cultured cells, but the specific function of CCN2 in the fibrotic process has not been elucidated. In this study we characterized the CCN2-dependent signaling pathways that are required for the TGF-β induced fibrogenic response. By depleting endogenous CCN2 we show that CCN2 is indispensable for the TGF-β-induced phosphorylation of Smad1 and Erk1/2, but it is unnecessary for the activation of Smad3. TGF-β stimulation triggered formation of the CCN2/β3 integrin protein complexes and activation of Src signaling. Furthermore, we demonstrated that signaling through the αvβ3 integrin receptor and Src was required for the TGF-β induced Smad1 phosphorylation. Recombinant CCN2 activated Src and Erk1/2 signaling, and induced phosphorylation of Fli1, but was unable to stimulate Smad1 or Smad3 phosphorylation. Additional experiments were performed to investigate the role of CCN2 in collagen production. Consistent with the previous studies, blockade of CCN2 abrogated TGF-β-induced collagen mRNA and protein levels. Recombinant CCN2 potently stimulated collagen mRNA levels and upregulated activity of the COL1A2 promoter, however CCN2 was a weak inducer of collagen protein levels. CCN2 stimulation of collagen was dose-dependent with the lower doses (<50 ng/ml) having a stimulatory effect and higher doses having an inhibitory effect on collagen gene expression. In conclusion, our study defines a novel CCN2/αvβ3 integrin/Src/Smad1 axis that contributes to the pro-fibrotic TGF-β signaling and suggests that blockade of this pathway may be beneficial for the treatment of fibrosis.

  9. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2014-01-01

    Full Text Available A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  10. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    Science.gov (United States)

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
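
    The abstract does not spell out the scoring formula, so the sketch below only illustrates the general shape of such a scheme: per-node vulnerability scores along an attack route are aggregated and scaled by an end-to-end security factor. All numbers and the combination rule are invented for the example and are not the paper's AVQS formula.

```python
# Toy attack-route score (invented combination rule; illustration only).
from statistics import mean

# Hypothetical per-node vulnerability scores (0 = secure, 10 = critical).
node_vulnerability = {"smart_meter": 6.5, "data_concentrator": 4.0,
                      "head_end_system": 3.0, "mdms": 2.5}

def route_score(route, end_to_end_security: float) -> float:
    """end_to_end_security in [0, 1]; higher means better-protected links."""
    return mean(node_vulnerability[n] for n in route) * (1.0 - end_to_end_security)

attack_route = ["smart_meter", "data_concentrator", "head_end_system"]
print(f"route vulnerability score: {route_score(attack_route, 0.3):.2f}")
```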

  11. Quantification practices in the nuclear industry

    International Nuclear Information System (INIS)

    1986-01-01

    In this chapter the quantification of risk practices adopted by the nuclear industries in Germany, Britain and France are examined as representative of the practices adopted throughout Europe. From this examination a number of conclusions are drawn about the common features of the practices adopted. In making this survey, the views expressed in the report of the Task Force on Safety Goals/Objectives appointed by the Commission of the European Communities, are taken into account. For each country considered, the legal requirements for presentation of quantified risk assessment as part of the licensing procedure are examined, and the way in which the requirements have been developed for practical application are then examined. (author)

  12. Reconfigurable network processing platforms

    NARCIS (Netherlands)

    Kachris, C.

    2007-01-01

    This dissertation presents our investigation on how to efficiently exploit reconfigurable hardware to design flexible, high performance, and power efficient network devices capable to adapt to varying processing requirements of network applications and traffic. The proposed reconfigurable network

  13. Quantification of renal function

    International Nuclear Information System (INIS)

    Mubarak, Amani Hayder

    1999-06-01

    The evaluation of the glomerular filtration rate (GFR) with Tc99m-DTPA using the single-injection, multiple-blood-sample method (plasma clearance) is a standard and reliable method, but the procedure is complicated and may not be suitable for routine clinical use. Alternatively, estimation of GFR using Tc99m-DTPA and a gamma camera computer system is very simple, does not require sampling of blood or urine, and provides individual kidney GFR values (integral and uptake index methods)

  14. Quantification of abdominal aortic deformation after EVAR

    Science.gov (United States)

    Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir

    2009-02-01

    Quantification of abdominal aortic deformation is an important requirement for the evaluation of endovascular stenting procedures and the further refinement of stent graft design. During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation that is imposed by medical instruments such as guide wires, catheters, and, the stent graft. This deformation can affect the flow characteristics and morphology of the aorta which have been shown to be elicitors for stent graft failures and be reason for reappearance of aneurysms. We present a method for quantifying the deformation of an aneurysmatic aorta imposed by an inserted stent graft device. The outline of the procedure includes initial rigid alignment of the two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. This is accomplished by preprocessing and remodeling of the pre- and postoperative aortic shapes before performing a non-rigid registration. We further narrow the resulting displacement fields to only include local non-rigid deformation and therefore, eliminate all remaining global rigid transformations. Finally, deformations for specified locations can be calculated from the resulting displacement fields. In order to evaluate our method, experiments for the extraction of aortic deformation fields are conducted on 15 patient datasets from endovascular aortic repair (EVAR) treatment. A visual assessment of the registration results and evaluation of the usage of deformation quantification were performed by two vascular surgeons and one interventional radiologist who are all experts in EVAR procedures.
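
    One common way to isolate the local, non-rigid part of such a deformation is to remove the best global rigid transform first and then measure the residual displacements; the sketch below does this with the Kabsch algorithm on toy centerline points. It is not the registration pipeline used in the paper.

```python
# Rigid alignment (Kabsch) followed by residual displacement magnitudes,
# on invented centerline points (illustrative only).
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Return source rigidly aligned (rotation + translation) onto target."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))            # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return (source - src_c) @ r.T + tgt_c

# Toy centerlines: the "postoperative" points are a rotated/translated copy
# of the "preoperative" ones plus a local bulge standing in for the
# stent-graft-induced deformation.
pre = np.stack([np.linspace(0, 100, 50), np.zeros(50), np.zeros(50)], axis=1)
theta = np.deg2rad(5.0)
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
post = pre @ rot.T + np.array([2.0, -1.0, 0.5])
post[20:30, 1] += 3.0                                 # local non-rigid part

residual = np.linalg.norm(post - rigid_align(pre, post), axis=1)
print(f"max residual deformation: {residual.max():.2f} (toy units)")
```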

  15. PCR amplification of repetitive sequences as a possible approach in relative species quantification

    DEFF Research Database (Denmark)

    Ballin, Nicolai Zederkopff; Vogensen, Finn Kvist; Karlsson, Anders H

    2012-01-01

    Abstract Both relative and absolute quantifications are possible in species quantification when single copy genomic DNA is used. However, amplification of single copy genomic DNA does not allow a limit of detection as low as one obtained from amplification of repetitive sequences. Amplification...... of repetitive sequences is therefore frequently used in absolute quantification but problems occur in relative quantification as the number of repetitive sequences is unknown. A promising approach was developed where data from amplification of repetitive sequences were used in relative quantification of species...... to relatively quantify the amount of chicken DNA in a binary mixture of chicken DNA and pig DNA. However, the designed PCR primers lack the specificity required for regulatory species control....

  16. Reliability of lifeline networks under seismic hazard

    International Nuclear Information System (INIS)

    Selcuk, A. Sevtap; Yuecemen, M. Semih

    1999-01-01

    Lifelines, such as pipelines, transportation, communication and power transmission systems, are networks which extend spatially over large geographical regions. The quantification of the reliability (survival probability) of a lifeline under seismic threat requires attention, as the proper functioning of these systems during or after a destructive earthquake is vital. In this study, a lifeline is idealized as an equivalent network with the capacity of its elements being random and spatially correlated and a comprehensive probabilistic model for the assessment of the reliability of lifelines under earthquake loads is developed. The seismic hazard that the network is exposed to is described by a probability distribution derived by using the past earthquake occurrence data. The seismic hazard analysis is based on the 'classical' seismic hazard analysis model with some modifications. An efficient algorithm developed by Yoo and Deo (Yoo YB, Deo N. A comparison of algorithms for terminal pair reliability. IEEE Transactions on Reliability 1988; 37: 210-215) is utilized for the evaluation of the network reliability. This algorithm eliminates the CPU time and memory capacity problems for large networks. A comprehensive computer program, called LIFEPACK is coded in Fortran language in order to carry out the numerical computations. Two detailed case studies are presented to show the implementation of the proposed model
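
    For readers who want a feel for terminal-pair reliability, the sketch below estimates it by Monte Carlo on a tiny example network in which each element survives independently with a given probability. The paper itself uses the exact Yoo-Deo algorithm and accounts for spatially correlated capacities, which this toy ignores.

```python
# Monte Carlo estimate of terminal-pair (source-sink) reliability for a
# small lifeline network (invented topology and probabilities).
import random
import networkx as nx

edges = [("source", "a", 0.9), ("source", "b", 0.85),
         ("a", "sink", 0.9), ("b", "sink", 0.8), ("a", "b", 0.95)]

def terminal_pair_reliability(n_trials: int = 20_000, seed: int = 0) -> float:
    rng = random.Random(seed)
    connected = 0
    for _ in range(n_trials):
        g = nx.Graph()
        g.add_nodes_from(["source", "a", "b", "sink"])
        # Each element survives the earthquake with its own probability.
        g.add_edges_from((u, v) for u, v, p in edges if rng.random() < p)
        connected += nx.has_path(g, "source", "sink")
    return connected / n_trials

print(f"estimated reliability: {terminal_pair_reliability():.3f}")
```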

  17. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN

    Directory of Open Access Journals (Sweden)

    Sanz Nuria

    2009-10-01

    Full Text Available Abstract Background Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. Methods In order to summarise the current situation in relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN developed a multinational survey in ten European countries. However a lack of common classification framework for major categories of clinical research was identified, and therefore reaching an agreement on a common classification was the initial step in the development of the survey. Results The ECRIN transnational working group on regulation, composed of experts in the field of clinical research from ten European countries, defined seven major categories of clinical research that seem relevant from both the regulatory and the scientific points of view, and correspond to congruent definitions in all countries: clinical trials on medicinal products; clinical trials on medical devices; other therapeutic trials (including surgery trials, transplantation trials, transfusion trials, trials with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification was essential to develop a survey focused on protocol submission to ethics committees and competent authorities, procedures for amendments, requirements for sponsor and insurance, and adverse event reporting following five main phases: drafting, consensus, data collection, validation, and finalising. Conclusion The list of clinical research categories as used for the survey could serve as a contribution to the, much needed, task of harmonisation and

  18. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN)

    LENUS (Irish Health Repository)

    Kubiak, Christine

    2009-10-16

    Abstract Background Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. Methods In order to summarise the current situation in relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN) developed a multinational survey in ten European countries. However a lack of common classification framework for major categories of clinical research was identified, and therefore reaching an agreement on a common classification was the initial step in the development of the survey. Results The ECRIN transnational working group on regulation, composed of experts in the field of clinical research from ten European countries, defined seven major categories of clinical research that seem relevant from both the regulatory and the scientific points of view, and correspond to congruent definitions in all countries: clinical trials on medicinal products; clinical trials on medical devices; other therapeutic trials (including surgery trials, transplantation trials, transfusion trials, trials with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification was essential to develop a survey focused on protocol submission to ethics committees and competent authorities, procedures for amendments, requirements for sponsor and insurance, and adverse event reporting following five main phases: drafting, consensus, data collection, validation, and finalising. Conclusion The list of clinical research categories as used for the survey could serve as a contribution to the, much needed, task of harmonisation and simplification of the

  19. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  20. Modeling In-Network Aggregation in VANETs

    NARCIS (Netherlands)

    Dietzel, Stefan; Kargl, Frank; Heijenk, Geert; Schaub, Florian

    2011-01-01

    The multitude of applications envisioned for vehicular ad hoc networks requires efficient communication and dissemination mechanisms to prevent network congestion. In-network data aggregation promises to reduce bandwidth requirements and enable scalability in large vehicular networks. However, most

  1. The H3K27 Demethylase JMJD3 Is Required for Maintenance of the Embryonic Respiratory Neuronal Network, Neonatal Breathing, and Survival

    Directory of Open Access Journals (Sweden)

    Thomas Burgold

    2012-11-01

    Full Text Available JMJD3 (KDM6B antagonizes Polycomb silencing by demethylating lysine 27 on histone H3. The interplay of methyltransferases and demethylases at this residue is thought to underlie critical cell fate transitions, and the dynamics of H3K27me3 during neurogenesis posited for JMJD3 a critical role in the acquisition of neural fate. Despite evidence of its involvement in early neural commitment, however, its role in the emergence and maturation of the mammalian CNS remains unknown. Here, we inactivated Jmjd3 in the mouse and found that its loss causes perinatal lethality with the complete and selective disruption of the pre-Bötzinger complex (PBC, the pacemaker of the respiratory rhythm generator. Through genetic and electrophysiological approaches, we show that the enzymatic activity of JMJD3 is selectively required for the maintenance of the PBC and controls critical regulators of PBC activity, uncovering an unanticipated role of this enzyme in the late structuring and function of neuronal networks.

  2. Communications infrastructure requirements for telemedicine/telehealth in the context of planning for and responding to natural disasters: Considering the need for shared regional networks

    Science.gov (United States)

    Scott, John Carver

    1991-01-01

    During the course of recent years the frequency and magnitude of major disasters - of natural, technological, or ecological origin - have made the world community dramatically aware of the immense losses of human life and economic resources that are caused regularly by such calamities. Particularly hard hit are developing countries, for whom the magnitude of disasters frequently outstrips the ability of the society to cope with them. In many cases this situation can be prevented, and the recent trend in disaster management has been to emphasize the importance of preparedness and mitigation as a means of prevention. In cases of disaster, a system is needed to respond to relief requirements, particularly the delivery of medical care. There is no generic telecommunications infrastructure appropriate for the variety of applications in medical care and disaster management. The need to integrate telemedicine/telehealth into shared regional disaster management telecommunications networks is discussed. Focus is on the development of infrastructure designed to serve the needs of disaster prone regions of the developing world.

  3. Classification and quantification of reserve requirements for balancing

    NARCIS (Netherlands)

    Frunt, J.; Kling, W.L.; Bosch, van den P.P.J.

    2010-01-01

    In electrical power systems there must always be a balance between supply and demand of power. Any imbalance will result in a frequency deviation. To reduce the imbalance to zero, ancillary services for balance management are in use. Ancillary services for balance management are characterized by

  4. Mapping and Quantification of Vascular Branching in Plants, Animals and Humans by VESGEN Software

    Science.gov (United States)

    Parsons-Wingerter, P. A.; Vickerman, M. B.; Keith, P. A.

    2010-01-01

    Humans face daunting challenges in the successful exploration and colonization of space, including adverse alterations in gravity and radiation. The Earth-determined biology of plants, animals and humans is significantly modified in such extraterrestrial environments. One physiological requirement shared by larger plants and animals with humans is a complex, highly branching vascular system that is dynamically responsive to cellular metabolism, immunological protection and specialized cellular/tissue function. VESsel GENeration (VESGEN) Analysis has been developed as a mature beta version, pre-release research software for mapping and quantification of the fractal-based complexity of vascular branching. Alterations in vascular branching pattern can provide informative read-outs of altered vascular regulation. Originally developed for biomedical applications in angiogenesis, VESGEN 2D has provided novel insights into the cytokine, transgenic and therapeutic regulation of angiogenesis, lymphangiogenesis and other microvascular remodeling phenomena. Vascular trees, networks and tree-network composites are mapped and quantified. Applications include disease progression from clinical ophthalmic images of the human retina; experimental regulation of vascular remodeling in the mouse retina; avian and mouse coronary vasculature, and other experimental models in vivo. We envision that altered branching in the leaves of plants studied on the ISS, such as Arabidopsis thaliana, can also be analyzed.

  5. Direct Quantification of Cd2+ in the Presence of Cu2+ by a Combination of Anodic Stripping Voltammetry Using a Bi-Film-Modified Glassy Carbon Electrode and an Artificial Neural Network.

    Science.gov (United States)

    Zhao, Guo; Wang, Hui; Liu, Gang

    2017-07-03

    Abstract: In this study, a novel method based on a Bi/glassy carbon electrode (Bi/GCE) for quantitatively and directly detecting Cd2+ in the presence of Cu2+ without further electrode modifications by combining square-wave anodic stripping voltammetry (SWASV) and a back-propagation artificial neural network (BP-ANN) has been proposed. The influence of the Cu2+ concentration on the stripping response to Cd2+ was studied. In addition, the effect of the ferrocyanide concentration on the SWASV detection of Cd2+ in the presence of Cu2+ was investigated. A BP-ANN with two inputs and one output was used to establish the nonlinear relationship between the concentration of Cd2+ and the stripping peak currents of Cu2+ and Cd2+. The factors affecting the SWASV detection of Cd2+ and the key parameters of the BP-ANN were optimized. Moreover, the direct calibration model (i.e., adding 0.1 mM ferrocyanide before detection), the BP-ANN model and other prediction models were compared to verify the prediction performance of these models in terms of their mean absolute errors (MAEs), root mean square errors (RMSEs) and correlation coefficients. The BP-ANN model exhibited higher prediction accuracy than the direct calibration model and the other prediction models. Finally, the proposed method was used to detect Cd2+ in soil samples with satisfactory results.
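The record above describes a two-input, one-output back-propagation network that maps SWASV stripping peak currents to Cd2+ concentration. A minimal sketch of that kind of regression, assuming scikit-learn and made-up training values rather than the study's data, could look like this:

```python
# Hedged sketch: a two-input, one-output back-propagation ANN mapping
# SWASV stripping peak currents to Cd2+ concentration.
# All training values below are placeholders, not data from the study.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# columns: [Cd2+ peak current (uA), Cu2+ peak current (uA)] -- illustrative only
X = np.array([[1.2, 0.5], [2.3, 0.6], [3.1, 1.1], [4.0, 1.5], [5.2, 2.0]])
y = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # Cd2+ concentration (placeholder units)

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

# predict Cd2+ for a new pair of peak currents
print(model.predict(scaler.transform([[3.5, 1.2]])))
```

In practice such a model would be trained on many calibration measurements and validated against the direct calibration approach, as the record describes.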

  6. Overlay networks toward information networking

    CERN Document Server

    Tarkoma, Sasu

    2010-01-01

    With their ability to solve problems in massive information distribution and processing, while keeping scaling costs low, overlay systems represent a rapidly growing area of R&D with important implications for the evolution of Internet architecture. Inspired by the author's articles on content based routing, Overlay Networks: Toward Information Networking provides a complete introduction to overlay networks. Examining what they are and what kind of structures they require, the text covers the key structures, protocols, and algorithms used in overlay networks. It reviews the current state of th

  7. Progress of the COST Action TU1402 on the Quantification of the Value of Structural Health Monitoring

    DEFF Research Database (Denmark)

    Thöns, Sebastian; Limongelli, Maria Pina; Ivankovic, Ana Mandic

    2017-01-01

    This paper summarizes the development of Value of Structural Health Monitoring (SHM) Information analyses and introduces the development, objectives and approaches of the COST Action TU1402 on this topic. SHM research and engineering has been focused on the extraction of loading, degradation...... for its quantification. This challenge can be met with Value of SHM Information analyses facilitating that the SHM contribution to substantial benefits for life safety, economy and beyond may be quantified, demonstrated and utilized. However, Value of SHM Information analyses involve complex models...... encompassing the infrastructure and the SHM systems, their functionality and thus require the interaction of several research disciplines. For progressing on these points, a scientific networking and dissemination project, namely the COST Action TU1402, has been initiated....

  8. Quantification, challenges and outlook of PV integration in the power system: a review by the European PV Technology Platform

    DEFF Research Database (Denmark)

    Alet, Pierre-Jean; Baccaro, Federica; De Felice, Matteo

    2015-01-01

    Integration in the power system has become a limiting factor to the further development of photovoltaics. Proper quantification is needed to evaluate both issues and solutions; the share of annual electricity demand is widely used but we found that some of the metrics which are related to power...... rather than energy better reflect the impact on networks. Barriers to wider deployment of PV into power grids can be split between local technical issues (voltage levels, harmonics distortion, reverse power flows and transformer loading) and system-wide issues (intermittency, reduction of system...... resilience). Many of the technical solutions to these issues rely on the inverters as actuators (e.g., for control of active and reactive power) or as interfaces (e.g., for local storage). This role requires further technical standardisation and needs to be taken into account in the planning of power...

  9. A "Toy" Model for Operational Risk Quantification using Credibility Theory

    OpenAIRE

    Hans Bühlmann; Pavel V. Shevchenko; Mario V. Wüthrich

    2009-01-01

    To meet the Basel II regulatory requirements for the Advanced Measurement Approaches in operational risk, the bank's internal model should make use of the internal data, relevant external data, scenario analysis and factors reflecting the business environment and internal control systems. One of the unresolved challenges in operational risk is combining these data sources appropriately. In this paper we focus on quantification of the low-frequency, high-impact losses exceeding some high thr...

  10. Detection and quantification of beef and pork materials in meat products by duplex droplet digital PCR

    OpenAIRE

    Cai, Yicun; He, Yuping; Lv, Rong; Chen, Hongchao; Wang, Qiang; Pan, Liangwen

    2017-01-01

    Meat products often consist of meat from multiple animal species, and inaccurate food product adulteration and mislabeling can negatively affect consumers. Therefore, a cost-effective and reliable method for identification and quantification of animal species in meat products is required. In this study, we developed a duplex droplet digital PCR (dddPCR) detection and quantification system to simultaneously identify and quantify the source of meat in samples containing a mixture of beef (Bos t...

  11. The genetic interaction network of CCW12, a Saccharomyces cerevisiae gene required for cell wall integrity during budding and formation of mating projections

    Science.gov (United States)

    2011-01-01

    Background Mannoproteins construct the outer cover of the fungal cell wall. The covalently linked cell wall protein Ccw12p is an abundant mannoprotein. It is considered as crucial structural cell wall component since in baker's yeast the lack of CCW12 results in severe cell wall damage and reduced mating efficiency. Results In order to explore the function of CCW12, we performed a Synthetic Genetic Analysis (SGA) and identified genes that are essential in the absence of CCW12. The resulting interaction network identified 21 genes involved in cell wall integrity, chitin synthesis, cell polarity, vesicular transport and endocytosis. Among those are PFD1, WHI3, SRN2, PAC10, FEN1 and YDR417C, which have not been related to cell wall integrity before. We correlated our results with genetic interaction networks of genes involved in glucan and chitin synthesis. A core of genes essential to maintain cell integrity in response to cell wall stress was identified. In addition, we performed a large-scale transcriptional analysis and compared the transcriptional changes observed in mutant ccw12Δ with transcriptomes from studies investigating responses to constitutive or acute cell wall damage. We identified a set of genes that are highly induced in the majority of the mutants/conditions and are directly related to the cell wall integrity pathway and cell wall compensatory responses. Among those are BCK1, CHS3, EDE1, PFD1, SLT2 and SLA1 that were also identified in the SGA. In contrast, a specific feature of mutant ccw12Δ is the transcriptional repression of genes involved in mating. Physiological experiments substantiate this finding. Further, we demonstrate that Ccw12p is present at the cell periphery and highly concentrated at the presumptive budding site, around the bud, at the septum and at the tip of the mating projection. Conclusions The combination of high throughput screenings, phenotypic analyses and localization studies provides new insight into the function of Ccw

  12. Tentacle: distributed quantification of genes in metagenomes.

    Science.gov (United States)

    Boulund, Fredrik; Sjögren, Anders; Kristiansson, Erik

    2015-01-01

    In metagenomics, microbial communities are sequenced at increasingly high resolution, generating datasets with billions of DNA fragments. Novel methods that can efficiently process the growing volumes of sequence data are necessary for the accurate analysis and interpretation of existing and upcoming metagenomes. Here we present Tentacle, which is a novel framework that uses distributed computational resources for gene quantification in metagenomes. Tentacle is implemented using a dynamic master-worker approach in which DNA fragments are streamed via a network and processed in parallel on worker nodes. Tentacle is modular, extensible, and comes with support for six commonly used sequence aligners. It is easy to adapt Tentacle to different applications in metagenomics and easy to integrate into existing workflows. Evaluations show that Tentacle scales very well with increasing computing resources. We illustrate the versatility of Tentacle on three different use cases. Tentacle is written for Linux in Python 2.7 and is published as open source under the GNU General Public License (v3). Documentation, tutorials, installation instructions, and the source code are freely available online at: http://bioinformatics.math.chalmers.se/tentacle.
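The master-worker pattern described above can be illustrated with a toy sketch; the reads, reference "genes", and naive substring matching below are placeholders standing in for the networked streaming and external aligners that Tentacle actually uses:

```python
# Hedged sketch of a master-worker gene-quantification pattern in the spirit
# of Tentacle: the master streams read chunks to workers that tally hits
# against reference genes. Everything here is a toy stand-in.
from multiprocessing import Pool

GENES = {"geneA": "ACGT", "geneB": "TTGCA"}          # toy reference "genes"

def count_hits(read_chunk):
    counts = {g: 0 for g in GENES}
    for read in read_chunk:
        for gene, seq in GENES.items():
            if seq in read:                           # naive substring "alignment"
                counts[gene] += 1
    return counts

def chunks(reads, size=2):
    for i in range(0, len(reads), size):
        yield reads[i:i + size]

if __name__ == "__main__":
    reads = ["AAACGTTT", "GGTTGCAA", "CCCCACGT", "TTTTTTTT"]
    totals = {g: 0 for g in GENES}
    with Pool(2) as pool:                             # two local "worker nodes"
        for partial in pool.imap_unordered(count_hits, chunks(reads)):
            for gene, n in partial.items():
                totals[gene] += n
    print(totals)
```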

  13. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  14. Measurements of 4 Atmospheric Trace Gases Outside Homes Adjacent to a Multiwell Pad During Drilling, Hydraulic Fracturing, and Production Phases, Using Low-Cost Sensors and Artificial Neural Network Quantification Techniques

    Science.gov (United States)

    Casey, J. G.; Ilie, A. M. C.; Coffey, E.; Collier-Oxandale, A. M.; Hannigan, M.; Vaccaro, C.

    2017-12-01

    In Colorado and elsewhere in North America, the oil and gas production industry has been growing alongside and in the midst of increasing urban and rural populations. These coinciding trends have resulted in a growing number of people living in close proximity to petroleum production and processing activities, leading to potential public health impacts. Combustion-related emissions from heavy-duty diesel vehicle traffic, generators, compressors, and production stream flaring can potentially lead to locally enhanced levels of nitrogen oxides (NOx), carbon monoxide (CO), and carbon dioxide (CO2). Venting and fugitive emissions of production stream constituents can potentially lead to locally enhanced levels of methane (CH4) and volatile organic compounds (VOCs), some of which (like benzene) are known carcinogens. NOx and VOC emissions can also potentially increase local ozone (O3) production. After learning of a large new multiwell pad on the outskirts of Greeley, Colorado, we were able to quickly mobilize portable air quality monitors outfitted with low-cost gas sensors that respond to CH4, CO2, CO, and O3. The air quality monitors were installed outside homes adjacent to the new multiwell pad several weeks prior to the first spud date. An anemometer was also installed outside one of the homes in order to monitor wind speed and direction. Measurements continued during drilling, hydraulic fracturing, and production phases. The sensors were periodically collocated with reference instruments at a nearby regulatory air quality monitoring site towards calibration via field normalization and validation. Artificial Neural Networks were employed to map sensor signals to trace gas mole fractions during collocation periods. We present measurements of CH4, CO2, CO, and O3 in context with wellpad activities and local meteorology. CO and O3 observations are presented in context with regional measurements and National Ambient Air Quality Standards for each. Wind speed and

  15. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 2. Requirements Compilation and Analysis. Part 3. Characteristics Summaries and Network Analysis

    Science.gov (United States)

    1976-03-01

    DB DC DCT DDB DET DF DFS DML DMS DMSP DOD DS DSARC DT EDB EDS EG ESSA ETAC EWO Control and Reporting Post Cathode Ray Tube...National and Aviation Meteorological Facsimile Network NC - Network Control NCA - National Command Authority NCAR - National Center for Atmospheric

  16. Evidence for a Proton Transfer Network and a Required Persulfide-Bond-Forming Cysteine Residue in Ni-Containing Carbon Monoxide Dehydrogenases

    International Nuclear Information System (INIS)

    Eun Jin Kim; Jian Feng; Bramlett, Matthew R.; Lindahl, Paul A.

    2004-01-01

    OAK-B135 Carbon monoxide dehydrogenase from Moorella thermoacetica catalyzes the reversible oxidation of CO to CO2 at a nickel-iron-sulfur active-site called the C-cluster. Mutants of a proposed proton transfer pathway and of a cysteine residue recently found to form a persulfide bond with the C-cluster were characterized. Four semi-conserved histidine residues were individually mutated to alanine. His116 and His122 were essential to catalysis, while His113 and His119 attenuated catalysis but were not essential. Significant activity was ''rescued'' by a double mutant where His116 was replaced by Ala and His was also introduced at position 115. Activity was also rescued in double mutants where His122 was replaced by Ala and His was simultaneously introduced at either position 121 or 123. Activity was also ''rescued'' by replacing His with Cys at position 116. Mutation of conserved Lys587 near the C-cluster attenuated activity but did not eliminate it. Activity was virtually abolished in a double mutant where Lys587 and His113 were both changed to Ala. Mutations of conserved Asn284 also attenuated activity. These effects suggest the presence of a network of amino acid residues responsible for proton transfer rather than a single linear pathway. The Ser mutant of the persulfide-forming Cys316 was essentially inactive and displayed no EPR signals originating from the C-cluster. Electronic absorption and metal analysis suggests that the C-cluster is absent in this mutant. The persulfide bond appears to be essential for either the assembly or stability of the C-cluster, and/or for eliciting the redox chemistry of the C-cluster required for catalytic activity

  17. Quantification of rain induced artifacts on digital

    African Journals Online (AJOL)

    eobe

    DSTV) ... satellite television, rain attenuation, digital artifacts, pixelation, rainfall rate. 1. ... screen and blocking are commonly observed in .... The precipitation data was collected using a self- ..... Networks: Comparison at Equatorial and Subtropical.

  18. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  19. Techniques of biomolecular quantification through AMS detection of radiocarbon

    International Nuclear Information System (INIS)

    Vogel, S.J.; Turteltaub, K.W.; Frantz, C.; Felton, J.S.; Gledhill, B.L.

    1992-01-01

    Accelerator mass spectrometry offers a large gain over scintillation counting in sensitivity for detecting radiocarbon in biomolecular tracing. Application of this sensitivity requires new considerations of procedures to extract or isolate the carbon fraction to be quantified, to inventory all carbon in the sample, to prepare graphite from the sample for use in the spectrometer, and to derive a meaningful quantification from the measured isotope ratio. These procedures need to be accomplished without contaminating the sample with radiocarbon, which may be ubiquitous in laboratories and on equipment previously used for higher-dose scintillation experiments. Disposable equipment, materials and surfaces are used to control these contaminations. Quantification of attomole amounts of labeled substances is possible through these techniques.

  20. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Full Text Available Abstract Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was

  1. Critical points of DNA quantification by real-time PCR--effects of DNA extraction method and sample matrix on quantification of genetically modified organisms.

    Science.gov (United States)

    Cankar, Katarina; Stebih, Dejan; Dreo, Tanja; Zel, Jana; Gruden, Kristina

    2006-08-14

    Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary criterion by which to

  2. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Science.gov (United States)

    Cankar, Katarina; Štebih, Dejan; Dreo, Tanja; Žel, Jana; Gruden, Kristina

    2006-01-01

    Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs) quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was chosen as the primary
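As the records above note, real-time PCR quantification rests on a standard curve and on matching PCR efficiencies between sample and reference material. A minimal sketch of that calculation, using illustrative Cq values rather than data from the study, is:

```python
# Hedged sketch of standard-curve real-time PCR quantification: amplification
# efficiency is derived from the slope of a Cq-vs-log10(amount) standard curve,
# and an unknown is read off that curve. Cq values are placeholders.
import numpy as np

log10_amount = np.log10([100000, 10000, 1000, 100, 10])   # standard DNA amounts
cq_standard = np.array([17.1, 20.5, 23.9, 27.3, 30.7])     # measured Cq (placeholder)

slope, intercept = np.polyfit(log10_amount, cq_standard, 1)
efficiency = 10 ** (-1.0 / slope) - 1                       # 1.0 corresponds to 100%

cq_unknown = 25.6
estimated_copies = 10 ** ((cq_unknown - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.2%}, copies~{estimated_copies:.0f}")
```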

  3. Automatic Segmentation and Quantification of Filamentous Structures in Electron Tomography.

    Science.gov (United States)

    Loss, Leandro A; Bebis, George; Chang, Hang; Auer, Manfred; Sarkar, Purbasha; Parvin, Bahram

    2012-10-01

    Electron tomography is a promising technology for imaging ultrastructures at nanoscale resolutions. However, image and quantitative analyses are often hindered by high levels of noise, staining heterogeneity, and material damage either as a result of the electron beam or sample preparation. We have developed and built a framework that allows for automatic segmentation and quantification of filamentous objects in 3D electron tomography. Our approach consists of three steps: (i) local enhancement of filaments by Hessian filtering; (ii) detection and completion (e.g., gap filling) of filamentous structures through tensor voting; and (iii) delineation of the filamentous networks. Our approach allows for quantification of filamentous networks in terms of their compositional and morphological features. We first validate our approach using a set of specifically designed synthetic data. We then apply our segmentation framework to tomograms of plant cell walls that have undergone different chemical treatments for polysaccharide extraction. The subsequent compositional and morphological analyses of the plant cell walls reveal their organizational characteristics and the effects of the different chemical protocols on specific polysaccharides.
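Step (i) of the pipeline above, Hessian-based filament enhancement, is available off the shelf; the sketch below applies scikit-image's frangi filter to a synthetic 2D image and substitutes a simple threshold-and-skeleton pass for the tensor-voting and delineation steps, which are not reproduced here. The original work operates on 3D tomograms.

```python
# Hedged sketch: Hessian-based filament enhancement on a synthetic 2D image,
# with a crude threshold/skeleton stand-in for the later pipeline stages.
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

img = np.zeros((64, 64))
img[32, 8:56] = 1.0                                        # synthetic "filament"
img += 0.1 * np.random.default_rng(0).random(img.shape)    # add noise

enhanced = frangi(img, black_ridges=False)   # Hessian-based enhancement of bright ridges
mask = enhanced > threshold_otsu(enhanced)   # segment the enhanced filament
skeleton = skeletonize(mask)                 # crude delineation of the network

print("filament pixels:", int(skeleton.sum()))
```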

  4. Concept development and needs identification for intelligent network flow optimization (INFLO) : functional and performance requirements, and high-level data and communication needs.

    Science.gov (United States)

    2012-11-01

    The purpose of this project is to develop for the Intelligent Network Flow Optimization (INFLO), which is one collection (or bundle) of high-priority transformative applications identified by the United States Department of Transportation (USDOT) Mob...

  5. Estimation of parameter sensitivities for stochastic reaction networks

    KAUST Repository

    Gupta, Ankit

    2016-01-01

    Quantification of the effects of parameter uncertainty is an important and challenging problem in Systems Biology. We consider this problem in the context of stochastic models of biochemical reaction networks where the dynamics is described as a

  6. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community, regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated a good performance but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology in the case of noisy signals. The applicability of this new algorithm is scrutinized through experimentation on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown herein illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
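To make the peak-fitting idea concrete, here is a deliberately simplified genetic algorithm evolving the amplitude and width of a single Lorentzian peak against a noisy synthetic signal; the constraint handling, adaptive fitness, and Voigt line shape of the actual method are not reproduced, and all values are placeholders:

```python
# Hedged sketch: a toy GA fitting one Lorentzian peak by maximizing a
# negative-sum-of-squares fitness. Illustrative only.
import random

random.seed(0)

def lorentzian(x, amp, width, center=0.0):
    return amp * width**2 / ((x - center)**2 + width**2)

xs = [i * 0.1 - 5 for i in range(101)]
target = [lorentzian(x, 2.0, 0.8) + random.gauss(0, 0.05) for x in xs]

def fitness(ind):
    amp, width = ind
    return -sum((lorentzian(x, amp, width) - y)**2 for x, y in zip(xs, target))

pop = [[random.uniform(0.1, 5), random.uniform(0.1, 3)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                           # truncation selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]      # uniform crossover
        child = [max(0.05, g + random.gauss(0, 0.05)) for g in child]  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"estimated amplitude={best[0]:.2f}, width={best[1]:.2f}")
```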

  7. Cytochrome c oxidase subunit 1-based human RNA quantification to enhance mRNA profiling in forensic biology

    Directory of Open Access Journals (Sweden)

    Dong Zhao

    2017-01-01

    Full Text Available RNA analysis offers many potential applications in forensic science, and molecular identification of body fluids by analysis of cell-specific RNA markers represents a new technique for use in forensic cases. However, due to the nature of forensic materials, which are often admixed with nonhuman cellular components, human-specific RNA quantification is required for the forensic RNA assays. A quantification assay for human RNA has been developed in the present study with respect to body fluid samples in forensic biology. The quantitative assay is based on real-time reverse transcription-polymerase chain reaction of mitochondrial RNA cytochrome c oxidase subunit I and is capable of RNA quantification with high reproducibility and a wide dynamic range. The human RNA quantification improves the quality of mRNA profiling in the identification of body fluids of saliva and semen because the quantification assay can exclude the influence of nonhuman components and reduce the adverse effect of degraded RNA fragments.

  8. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for detection and imaging of bio-photons emitted by luminescent luciferase activation. The measured photons in this method provide the degree of molecular alteration or cell numbers with the advantage of high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method presenting a linear response of the measured light signal to measurement time. We detected the luminescence signal by using a lab-made animal light imaging system (ALIS) and two different kinds of light sources. One is three bacterial light-emitting sources containing different numbers of bacteria. The other is three different non-bacterial light sources emitting very weak light. By using the concept of the candela and the flux, we could derive a simplified linear quantification formula. After experimentally measuring light intensity, the data were processed with the proposed quantification function. We could obtain a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the re-calculated photon counts to measurement time presents a constant value although different light sources were applied. The quantification function for linear response could be applicable to the standard quantification process. The proposed method could be used for exact quantitative analysis in various light imaging equipment presenting linear response behavior of constant light-emitting sources to measurement time.
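The linear relation described above reduces to photon counts growing proportionally with measurement time, so the counts-per-time ratio stays constant for a steady source. A tiny illustrative sketch with placeholder numbers:

```python
# Hedged sketch of the linear quantification idea: for a constant emitter,
# counts scale with measurement time and counts/time (a flux-like quantity)
# is approximately constant. Values below are illustrative placeholders.
measurement_time_s = [10, 20, 40, 80]
photon_counts = [1020, 2050, 4080, 8150]     # placeholder measurements

for t, n in zip(measurement_time_s, photon_counts):
    print(f"t={t:3d} s  counts={n:5d}  flux={n / t:.1f} counts/s")
```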

  9. Generation of structural MR images from amyloid PET: Application to MR-less quantification.

    Science.gov (United States)

    Choi, Hongyoon; Lee, Dong Soo

    2017-12-07

    Structural magnetic resonance (MR) images concomitantly acquired with PET images can provide crucial anatomical information for precise quantitative analysis. However, in the clinical setting, not all the subjects have corresponding MR. Here, we developed a model to generate structural MR images from amyloid PET using deep generative networks. We applied our model to quantification of cortical amyloid load without structural MR. Methods: We used florbetapir PET and structural MR data of Alzheimer's Disease Neuroimaging Initiative database. The generative network was trained to generate realistic structural MR images from florbetapir PET images. After the training, the model was applied to the quantification of cortical amyloid load. PET images were spatially normalized to the template space using the generated MR and then standardized uptake value ratio (SUVR) of the target regions was measured by predefined regions-of-interests. A real MR-based quantification was used as the gold standard to measure the accuracy of our approach. Other MR-less methods, a normal PET template-based, multi-atlas PET template-based and PET segmentation-based normalization/quantification methods, were also tested. We compared performance of quantification methods using generated MR with that of MR-based and MR-less quantification methods. Results: Generated MR images from florbetapir PET showed visually similar signal patterns to the real MR. The structural similarity index between real and generated MR was 0.91 ± 0.04. Mean absolute error of SUVR of cortical composite regions estimated by the generated MR-based method was 0.04±0.03, which was significantly smaller than other MR-less methods (0.29±0.12 for the normal PET-template, 0.12±0.07 for multiatlas PET-template and 0.08±0.06 for PET segmentation-based methods). Bland-Altman plots revealed that the generated MR-based SUVR quantification was the closest to the SUVR values estimated by the real MR-based method. Conclusion
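The quantification step in the record above boils down to a standardized uptake value ratio: mean uptake in a cortical target region divided by mean uptake in a reference region, both taken from predefined ROI masks after spatial normalization. A synthetic sketch, with stand-in arrays rather than real PET data or the authors' generative network:

```python
# Hedged sketch of SUVR computation on a synthetic "PET volume" with toy
# region-of-interest masks. Placement and scaling of the ROIs are illustrative.
import numpy as np

rng = np.random.default_rng(1)
pet = rng.normal(1.0, 0.1, size=(32, 32, 32))         # synthetic PET volume
target_mask = np.zeros_like(pet, dtype=bool)
reference_mask = np.zeros_like(pet, dtype=bool)
target_mask[8:16, 8:16, 8:16] = True                   # "cortical composite" ROI
reference_mask[20:28, 20:28, 20:28] = True             # "reference" ROI (e.g. cerebellum)
pet[target_mask] *= 1.4                                # simulate elevated amyloid uptake

suvr = pet[target_mask].mean() / pet[reference_mask].mean()
print(f"SUVR = {suvr:.2f}")
```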

  10. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
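The bootstrap idea described above can be sketched for a single concentration-response curve: resample the measured points with replacement, refit a simple Hill model each time, and read percentile confidence intervals off the refit potency values. The model form and all data below are illustrative, not ToxCast's actual fitting pipeline:

```python
# Hedged sketch of bootstrap uncertainty quantification for a concentration-
# response fit: resample, refit a Hill model, take percentile CIs on AC50.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50):
    return top * conc / (ac50 + conc)

rng = np.random.default_rng(0)
conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])
resp = hill(conc, top=100, ac50=5) + rng.normal(0, 5, conc.size)   # synthetic data

ac50_samples = []
for _ in range(500):
    idx = rng.integers(0, conc.size, conc.size)         # resample with replacement
    try:
        popt, _ = curve_fit(hill, conc[idx], resp[idx], p0=[100, 5], maxfev=2000)
        ac50_samples.append(popt[1])
    except RuntimeError:
        continue                                         # skip bootstrap fits that fail

lo, hi = np.percentile(ac50_samples, [2.5, 97.5])
print(f"AC50 95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")
```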

  11. Quantification of the vocal folds’ dynamic displacements

    International Nuclear Information System (INIS)

    Hernández-Montes, María del Socorro; Muñoz, Silvino; De La Torre, Manuel; Flores, Mauricio; Pérez, Carlos; Mendoza-Santoyo, Fernando

    2016-01-01

    Fast dynamic data acquisition techniques are required to investigate the motional behavior of the vocal folds (VFs) when they are subjected to a steady air-flow through the trachea. High-speed digital holographic interferometry (DHI) is a non-invasive full-field-of-view technique that has proved its usefulness to study rapid and non-repetitive object movements. Hence it is an ideal technique used here to measure VF displacements and vibration patterns at 2000 fps. Analyses from a set of 200 displacement images showed that VFs’ vibration cycles are established along their width (y) and length (x). Furthermore, the maximum deformation for the right and left VFs’ area may be quantified from these images, which in itself represents an important result in the characterization of this structure. At a controlled air pressure, VF displacements fall within the range ∼100–1740 nm, with a calculated precision and accuracy that yields a variation coefficient of 1.91%. High-speed acquisition of full-field images of VFs and their displacement quantification are on their own significant data in the study of their functional and physiological behavior since voice quality and production depend on how they vibrate, i.e. their displacement amplitude and frequency. Additionally, the use of high speed DHI avoids prolonged examinations and represents a significant scientific and technological alternative contribution in advancing the knowledge and working mechanisms of these tissues. (paper)

  12. Quantification of the vocal folds’ dynamic displacements

    Science.gov (United States)

    del Socorro Hernández-Montes, María; Muñoz, Silvino; De La Torre, Manuel; Flores, Mauricio; Pérez, Carlos; Mendoza-Santoyo, Fernando

    2016-05-01

    Fast dynamic data acquisition techniques are required to investigate the motional behavior of the vocal folds (VFs) when they are subjected to a steady air-flow through the trachea. High-speed digital holographic interferometry (DHI) is a non-invasive full-field-of-view technique that has proved its usefulness to study rapid and non-repetitive object movements. Hence it is an ideal technique used here to measure VF displacements and vibration patterns at 2000 fps. Analyses from a set of 200 displacement images showed that VFs’ vibration cycles are established along their width (y) and length (x). Furthermore, the maximum deformation for the right and left VFs’ area may be quantified from these images, which in itself represents an important result in the characterization of this structure. At a controlled air pressure, VF displacements fall within the range ~100-1740 nm, with a calculated precision and accuracy that yields a variation coefficient of 1.91%. High-speed acquisition of full-field images of VFs and their displacement quantification are on their own significant data in the study of their functional and physiological behavior since voice quality and production depend on how they vibrate, i.e. their displacement amplitude and frequency. Additionally, the use of high speed DHI avoids prolonged examinations and represents a significant scientific and technological alternative contribution in advancing the knowledge and working mechanisms of these tissues.

  13. Standardless quantification methods in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)

    2014-11-01

    The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. Standardless algorithms are considerably faster than the methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to solve the problem of current variation, for example, in equipment with a cold field emission gun. Due to significant advances in the accuracy achieved during the last years, a product of the successive efforts made to improve the description of generation, absorption and detection of X-rays, the standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy they achieve, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.

  14. Lung involvement quantification in chest radiographs; Quantificacao de comprometimento pulmonar em radiografias de torax

    Energy Technology Data Exchange (ETDEWEB)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M., E-mail: giacomini@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2014-12-15

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab, which creates a 3D representation of the lungs, with the compromised dilated regions inside. The quantification of lung lesions was also made for the same patients through CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the developed method, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)
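The Bland-Altman agreement check mentioned above amounts to computing the bias between the paired measurements and the 95% limits of agreement (bias plus or minus 1.96 standard deviations of the differences). A sketch with made-up paired values standing in for the radiograph-based and CT-based quantities:

```python
# Hedged sketch of a Bland-Altman agreement analysis between two
# quantification methods. The paired values are synthetic placeholders.
import numpy as np

radiograph = np.array([120.0, 210.0, 330.0, 95.0, 410.0])   # method A (placeholder)
ct = np.array([135.0, 190.0, 355.0, 110.0, 380.0])          # method B (placeholder)

diff = radiograph - ct
bias = diff.mean()                         # mean difference between methods
sd = diff.std(ddof=1)                      # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias={bias:.1f}, 95% limits of agreement=({loa[0]:.1f}, {loa[1]:.1f})")
```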

  15. The effect of fault ride-through requirements on voltage dips and post-fault voltage recovery in a Dutch distribution network

    NARCIS (Netherlands)

    Karaliolios, P.; Coster, E.J.; Slootweg, J.G.; Kling, W.L.

    2010-01-01

    In this paper the possibility to use Decentralized Generation (DG) units for voltage support in Distribution Networks during and after a Short Circuit (S/C) event is discussed. Two types of DG units will be examined, Combined Heat-Power (CHP) plants and Doubly-Fed Induction Generators (DFIG).

  16. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  17. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  18. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    Jun 15, 2011 ... alternative for the quantification of the disease' syndromes in regards to this crop. The result of these ..... parison of treatments such as cultivars or control measures and ..... Vascular discoloration and stem necrosis. 2.

  19. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques that employ matrix factorizations incur a cubic cost

  20. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  1. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard in order to quantify the activity of nuclear materials by determining the calibration coefficient are useless on non-reproducible, complex and single nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with few available data such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, matrix and source distribution. Some of them are strongly dependent on package data knowledge and operator backgrounds. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and internal package configuration knowledge. This method suggests combining a global stochastic approach which uses, among others, surrogate models available to simulate the gamma attenuation behaviour, a Bayesian approach which considers conditional probability densities of problem inputs, and Markov Chain Monte Carlo (MCMC) algorithms which solve inverse problems, with the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested to quantify actinide activity in standards with different kinds of matrix, composition and source configuration, in terms of actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
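The Bayesian/MCMC ingredient of the approach above can be illustrated with a toy Metropolis sampler that infers a source activity from a single measured count through a crude attenuation-and-efficiency forward model; the forward model, prior, and all numbers are placeholders, not the surrogate models used by the CEA methodology:

```python
# Hedged sketch: Metropolis sampling of a source activity given one gamma
# count measurement and a toy forward model. Illustrative values only.
import math
import random

def forward(activity, attenuation=0.6, efficiency=0.05, live_time=100.0):
    return activity * attenuation * efficiency * live_time   # expected counts

def log_posterior(activity, measured_counts):
    if activity <= 0:
        return -math.inf                                      # flat prior on activity > 0
    mu = forward(activity)
    return measured_counts * math.log(mu) - mu                # Poisson log-likelihood (up to a constant)

random.seed(0)
measured = 290
current, samples = 50.0, []
for _ in range(20000):
    proposal = current + random.gauss(0, 5.0)                 # random-walk proposal
    if math.log(random.random()) < log_posterior(proposal, measured) - log_posterior(current, measured):
        current = proposal
    samples.append(current)

post = samples[5000:]                                         # discard burn-in
print(f"posterior mean activity ~ {sum(post) / len(post):.1f} (arbitrary units)")
```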

  2. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  3. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  4. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ), the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  5. NEW MODEL FOR QUANTIFICATION OF ICT DEPENDABLE ORGANIZATIONS RESILIENCE

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2011-03-01

    Today's business environment demands highly reliable organizations in every segment in order to be competitive on the global market. Besides that, the ICT sector is becoming irreplaceable in many fields of business, from communication to complex systems for process control and production. To fulfil those requirements and to develop further, many organizations worldwide are implementing a business paradigm called organizational resilience. Although resilience is a well-known term in many fields of science, it is not well studied due to its complex nature. This paper deals with developing a new model for the assessment and quantification of the resilience of ICT-dependable organizations.

  6. Data management and communication networks for Man-Machine Interface System in Korea Advanced Liquid MEtal Reactor : its functionality and design requirements

    International Nuclear Information System (INIS)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon

    1998-01-01

    The DAta management and COmmunication NETworks (DACONET), which is designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of DACONET are to provide real-time data transmission and communication paths between MMIS systems, to provide quality data for the protection, monitoring and control of KALIMER, and to log static and dynamic behavioural data during KALIMER operation. DACONET is characterized as a distributed real-time system architecture with high performance. A future direction, in which advanced technology is continually applied to the Man-Machine Interface System development of Nuclear Power Plants, will be considered for designing the data management and communication networks of KALIMER MMIS.

  7. Data management and communication networks for Man-Machine Interface System in Korea Advanced Liquid MEtal Reactor : its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [KAERI, Taejon (Korea, Republic of)

    1998-05-01

    The DAta management and COmmunication NETworks (DACONET), which is designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of DACONET are to provide real-time data transmission and communication paths between MMIS systems, to provide quality data for the protection, monitoring and control of KALIMER, and to log static and dynamic behavioural data during KALIMER operation. DACONET is characterized as a distributed real-time system architecture with high performance. A future direction, in which advanced technology is continually applied to the Man-Machine Interface System development of Nuclear Power Plants, will be considered for designing the data management and communication networks of KALIMER MMIS.

  8. Data management and communication networks for man-machine interface system in Korea Advanced LIquid MEtal Reactor : Its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1999-12-31

    The DAta management and COmmunication NETworks (DACONET), which is designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of DACONET are to provide real-time data transmission and communication paths between MMIS systems, to provide quality data for the protection, monitoring and control of KALIMER, and to log static and dynamic behavioural data during KALIMER operation. DACONET is characterized as a distributed real-time system architecture with high performance. A future direction, in which advanced technology is continually applied to the Man-Machine Interface System development of Nuclear Power Plants, will be considered for designing the data management and communication networks of KALIMER MMIS. 9 refs., 1 fig. (Author)

  9. Data management and communication networks for man-machine interface system in Korea Advanced LIquid MEtal Reactor : Its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The DAta management and COmmunication NETworks (DACONET), which is designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of DACONET are to provide real-time data transmission and communication paths between MMIS systems, to provide quality data for the protection, monitoring and control of KALIMER, and to log static and dynamic behavioural data during KALIMER operation. DACONET is characterized as a distributed real-time system architecture with high performance. A future direction, in which advanced technology is continually applied to the Man-Machine Interface System development of Nuclear Power Plants, will be considered for designing the data management and communication networks of KALIMER MMIS. 9 refs., 1 fig. (Author)

  10. Network management in motion. Part 6. Socket at sea requires attention; Netbeheer in beweging. Deel 6. Stopcontact op zee vereist aandacht

    Energy Technology Data Exchange (ETDEWEB)

    Van Beuge, M. [Energiegroep Simmons and Simmons, Rotterdam (Netherlands)

    2010-10-15

    In a series of articles, various legal aspects of network management are discussed from a European and a Dutch perspective. This sixth part discusses the option of installing so-called sockets at sea.

  11. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN)

    DEFF Research Database (Denmark)

    Kubiak, Christine; de Andres-Trelles, Fernando; Kuchinke, Wolfgang

    2009-01-01

    In relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN) developed a multinational survey in ten European countries. However, a lack of a common classification framework for major categories of clinical research was identified, and therefore reaching...... with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification

  12. Real-Time PCR Quantification of Chloroplast DNA Supports DNA Barcoding of Plant Species.

    Science.gov (United States)

    Kikkawa, Hitomi S; Tsuge, Kouichiro; Sugita, Ritsuko

    2016-03-01

    Species identification from extracted DNA is sometimes needed for botanical samples. DNA quantification is required for an accurate and effective examination. If a quantitative assay provides unreliable estimates, a higher quantity of DNA than the estimated amount may be used in additional analyses to avoid failure to analyze samples from which extracting DNA is difficult. Compared with conventional methods, real-time quantitative PCR (qPCR) requires a low amount of DNA and enables accurate quantification of dilute DNA solutions. The aim of this study was to develop a qPCR assay for quantification of chloroplast DNA from taxonomically diverse plant species. An absolute quantification method was developed using primers targeting the ribulose-1,5-bisphosphate carboxylase/oxygenase large subunit (rbcL) gene with SYBR Green I-based qPCR. The calibration curve was generated using the PCR amplicon as the template, and DNA extracts from representatives of 13 plant families common in Japan were examined. The results demonstrate that qPCR analysis is an effective method for quantification of DNA from plant samples. The qPCR results assist in decision-making that determines the success or failure of DNA analysis, indicating the possibility of optimizing the procedure for downstream reactions.
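
    A minimal sketch of absolute quantification against a standard curve as described above: fit Ct versus log10(copy number) for serial dilutions of the amplicon standard, then invert the fit for unknowns. The dilution series and Ct values below are made-up numbers, not data from this study.

        import numpy as np

        # Standard curve: serial dilutions of the rbcL amplicon (copies) and measured Ct values (illustrative)
        copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
        ct     = np.array([14.1, 17.5, 20.9, 24.3, 27.8, 31.2])

        slope, intercept = np.polyfit(np.log10(copies), ct, 1)
        efficiency = 10 ** (-1.0 / slope) - 1            # standard qPCR amplification-efficiency estimate

        def quantify(ct_unknown):
            """Return the estimated copy number for a measured Ct value."""
            return 10 ** ((ct_unknown - intercept) / slope)

        print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
        print("unknown with Ct=23.0 ->", f"{quantify(23.0):.3g} copies")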

  13. Uncovering the underlying physical mechanisms of biological systems via quantification of landscape and flux

    International Nuclear Information System (INIS)

    Xu Li; Chu Xiakun; Yan Zhiqiang; Zheng Xiliang; Zhang Kun; Zhang Feng; Yan Han; Wu Wei; Wang Jin

    2016-01-01

    In this review, we explore the physical mechanisms of biological processes such as protein folding and recognition, ligand binding, and systems biology, including cell cycle, stem cell, cancer, evolution, ecology, and neural networks. Our approach is based on the landscape and flux theory for nonequilibrium dynamical systems. This theory provides a unifying principle and foundation for investigating the underlying mechanisms and physical quantification of biological systems. (topical review)

  14. Next Generation Social Networks

    DEFF Research Database (Denmark)

    Sørensen, Lene Tolstrup; Skouby, Knud Erik

    2008-01-01

    Different online networks for communities of people who share interests, or for individuals who present themselves through user-produced content, are what make up the social networking of today. The purpose of this paper is to discuss perceived user requirements for the next generation of social networks. The paper

  15. Data center networks and network architecture

    Science.gov (United States)

    Esaki, Hiroshi

    2014-02-01

    This paper discusses and proposes an architectural framework for data center networks. Data center networks raise new technical challenges, and they offer a good opportunity to drop functions that are not needed in current and future networks. Based on observation and consideration of data center networks, this paper proposes: (i) a broadcast-free layer 2 network (i.e., emulation of broadcast at the end-node), (ii) full-mesh point-to-point pipes, and (iii) IRIDES (Invitation Routing aDvertisement for path Engineering System).

  16. A Network Traffic Control Enhancement Approach over Bluetooth Networks

    DEFF Research Database (Denmark)

    Son, L.T.; Schiøler, Henrik; Madsen, Ole Brun

    2003-01-01

    This paper analyzes network traffic control issues in Bluetooth data networks as a convex optimization problem. We formulate the problem of maximizing total network flows while minimizing the costs of flows. An adaptive distributed network traffic control scheme is proposed as an approximated solution of the stated optimization problem that satisfies quality-of-service requirements and topologically induced constraints in Bluetooth networks, such as link capacity and node resource limitations. The proposed scheme is decentralized and complies with frequent changes of topology as well as capacity limitations and flow requirements in the network. Simulation shows that the performance of Bluetooth networks could be improved by applying the adaptive distributed network traffic control scheme.
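
    A minimal sketch of the kind of flow optimisation described above, posed here as a small centrally solved linear programme (maximise total flow minus a weighted link cost, subject to link capacities) on a toy three-link topology; the routes, capacities, and weights are illustrative, and the paper's actual scheme is adaptive and distributed rather than solved in one shot.

        import numpy as np
        from scipy.optimize import linprog

        # Toy topology: 3 links, 3 end-to-end flows; R[l, f] = 1 if flow f uses link l
        R = np.array([[1, 0, 1],
                      [0, 1, 1],
                      [1, 1, 0]], dtype=float)
        cap  = np.array([700.0, 700.0, 400.0])   # link capacities (kb/s)
        cost = np.array([1.0, 1.0, 2.0])         # per-unit link costs
        lam  = 0.1                               # weight on cost relative to throughput

        # maximise sum(f) - lam * cost^T (R f)  <=>  minimise -(1 - lam * R^T cost)^T f
        c = -(np.ones(3) - lam * R.T @ cost)
        res = linprog(c, A_ub=R, b_ub=cap, bounds=[(0, None)] * 3, method="highs")

        print("optimal flow rates:", res.x)
        print("total throughput  :", res.x.sum())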

  17. Network Characterization Service (NCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Guojun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, George [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Crowley, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Agarwal, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2001-06-06

    Distributed applications require information to effectively utilize the network. Some of the information they require is the current and maximum bandwidth, current and minimum latency, bottlenecks, burst frequency, and congestion extent. This type of information allows applications to determine parameters like optimal TCP buffer size. In this paper, we present a cooperative information-gathering tool called the network characterization service (NCS). NCS runs in user space and is used to acquire network information. Its protocol is designed for scalable and distributed deployment, similar to DNS. Its algorithms provide efficient, speedy and accurate detection of bottlenecks, especially dynamic bottlenecks. On current and future networks, dynamic bottlenecks do and will affect network performance dramatically.
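
    As an aside, the "optimal TCP buffer size" mentioned above is conventionally the bandwidth-delay product of the path; a tiny illustration with arbitrary numbers:

        def optimal_tcp_buffer(bandwidth_bps: float, rtt_s: float) -> int:
            """Bandwidth-delay product in bytes: the classic sizing rule for TCP buffers."""
            return int(bandwidth_bps * rtt_s / 8)

        # e.g. a 1 Gb/s path with a 50 ms round-trip time
        print(optimal_tcp_buffer(1e9, 0.050), "bytes")   # ~6.25 MB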

  18. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification are briefly reviewed, and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience, often personal, of the researcher, and this scholarly endeavour is therefore not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourist industry.

  19. Labeling the pulmonary arterial tree in CT images for automatic quantification of pulmonary embolism

    NARCIS (Netherlands)

    Peters, R.J.M.; Marquering, H.A.; Dogan, H.; Hendriks, E.A.; De Roos, A.; Reiber, J.H.C.; Stoel, B.C.

    2007-01-01

    Contrast-enhanced CT Angiography has become an accepted diagnostic tool for detecting Pulmonary Embolism (PE). The CT obstruction index proposed by Qanadli, which is based on the number of obstructed arterial segments, enables the quantification of PE severity. Because the required manual

  20. DETECTION AND QUANTIFICATION OF COW FECAL POLLUTION WITH REAL-TIME PCR

    Science.gov (United States)

    Assessment of health risk and fecal bacteria loads associated with cow fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for enumeration of two recently described cow-specific g...

  1. Repeatability of Bolus Kinetics Ultrasound Perfusion Imaging for the Quantification of Cerebral Blood Flow

    NARCIS (Netherlands)

    Vinke, Elisabeth J.; Eyding, Jens; de Korte, Chris L.; Slump, Cornelis H.; van der Hoeven, Johannes G.; Hoedemaekers, Cornelia W.E.

    2017-01-01

    Ultrasound perfusion imaging (UPI) can be used for the quantification of cerebral perfusion. In a neuro-intensive care setting, repeated measurements are required to evaluate changes in cerebral perfusion and monitor therapy. The aim of this study was to determine the repeatability of UPI in

  2. Quantification of Discrete Oxide and Sulfur Layers on Sulfur-Passivated InAs by XPS

    National Research Council Canada - National Science Library

    Petrovykh, D. Y; Sullivan, J. M; Whitman, L. J

    2005-01-01

    .... The S-passivated InAs(001) surface can be modeled as a sulfur-indium-arsenic layer-cake structure, such that characterization requires quantification of both arsenic oxide and sulfur layers that are at most a few monolayers thick...

  3. Quantification of trace-level DNA by real-time whole genome amplification.

    Science.gov (United States)

    Kang, Min-Jung; Yu, Hannah; Kim, Sook-Kyung; Park, Sang-Ryoul; Yang, Inchul

    2011-01-01

    Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of samples are available for analysis. PCR-based methods including real-time PCR are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for a sub-genomic amount of DNA. We suggest a real-time whole genome amplification method adopting the degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to the human placental DNA of which amount was accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), an accurate and stable quantification capability for DNA samples ranging from 80 fg to 8 ng was obtained. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9% with analytical precisions around 15% were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.

  4. Recent advances on failure and recovery in networks of networks

    International Nuclear Information System (INIS)

    Shekhtman, Louis M.; Danziger, Michael M.; Havlin, Shlomo

    2016-01-01

    Until recently, network science has focused on the properties of single isolated networks that do not interact with or depend on other networks. However, it has now been recognized that many real networks, such as power grids, transportation systems, and communication infrastructures, interact with and depend on other networks. Here, we present a review of the framework developed in recent years for studying the vulnerability and recovery of networks composed of interdependent networks. In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This is also the case when some nodes, for example certain people, play a role in two networks, i.e. in a multiplex. Dependency relations may act recursively and can lead to cascades of failures culminating in sudden fragmentation of the system. We review the analytical solutions for the critical threshold and the giant component of a network of n interdependent networks. The general theory and behavior of interdependent networks has many novel features that are not present in classical network theory. Interdependent networks embedded in space are significantly more vulnerable than non-embedded networks. In particular, small localized attacks may lead to cascading failures and catastrophic consequences. Finally, when recovery of components is possible, global spontaneous recovery of the networks and hysteresis phenomena occur. The theory developed for this process points to an optimal repairing strategy for a network of networks. Understanding the realistic effects present in networks of networks is required in order to move towards determining system vulnerability.
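
    A minimal simulation sketch of the cascading-failure process reviewed above, for two one-to-one interdependent Erdős-Rényi networks: after a random attack, nodes survive only if they belong to the mutual giant component of both layers. The network sizes, mean degree, and attack fractions are illustrative.

        import random
        import networkx as nx

        N, k = 2000, 4.0
        A = nx.erdos_renyi_graph(N, k / N, seed=1)   # layer A
        B = nx.erdos_renyi_graph(N, k / N, seed=2)   # layer B; node i in A depends on node i in B

        def giant(G, alive):
            """Largest connected component of G restricted to the surviving nodes."""
            sub = G.subgraph(alive)
            if sub.number_of_nodes() == 0:
                return set()
            return set(max(nx.connected_components(sub), key=len))

        def mutual_giant_fraction(p_keep):
            alive = set(random.Random(0).sample(range(N), int(p_keep * N)))  # initial random attack
            while True:
                new_alive = giant(B, giant(A, alive))   # cascade: prune to the giant component in each layer
                if new_alive == alive:
                    return len(alive) / N
                alive = new_alive

        for p in (0.9, 0.7, 0.5):
            print(f"p={p}: mutual giant component fraction = {mutual_giant_fraction(p):.3f}")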

  5. Overview of hybrid subspace methods for uncertainty quantification, sensitivity analysis

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Bang, Youngsuk; Wang, Congjian

    2013-01-01

    Highlights: We overview the state of the art in uncertainty quantification and sensitivity analysis; we overview new developments in these areas using hybrid methods; we give a tutorial introduction to these areas and the new developments; hybrid methods address the explosion in dimensionality in nonlinear models; representative numerical experiments are given. Abstract: The role of modeling and simulation has been heavily promoted in recent years to improve understanding of complex engineering systems. To realize the benefits of modeling and simulation, concerted efforts in the areas of uncertainty quantification and sensitivity analysis are required. The manuscript intends to serve as a pedagogical presentation of the material to young researchers and practitioners with little background on the subjects. We believe this is important, as the role of these subjects is expected to be integral to the design, safety, and operation of existing as well as next-generation reactors. In addition to covering the basics, an overview of the current state of the art will be given, with particular emphasis on the challenges pertaining to nuclear reactor modeling. The second objective will focus on presenting our own development of hybrid subspace methods intended to address the explosion in the computational overhead required when handling real-world complex engineering systems.
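
    A minimal sketch of one generic flavour of subspace method alluded to above: estimate a dominant ("active") subspace of a high-dimensional model from sampled gradients and truncate to it for subsequent UQ or sensitivity analysis. This is an illustration under a toy model, not the authors' specific hybrid algorithm.

        import numpy as np

        rng = np.random.default_rng(0)
        dim = 50                                   # nominal input dimensionality

        # Toy model whose output really only depends on a few hidden directions
        W_true = rng.normal(size=(dim, 3))
        f = lambda x: np.sin(x @ W_true).sum(axis=-1)

        def grad(x, eps=1e-5):
            """Central finite-difference gradient of f at a single point x."""
            e = np.eye(dim) * eps
            return np.array([(f(x + e[i]) - f(x - e[i])) / (2 * eps) for i in range(dim)])

        # Estimate C = E[grad grad^T] from Monte Carlo samples, then eigendecompose
        samples = rng.normal(size=(200, dim))
        G = np.array([grad(x) for x in samples])
        C = G.T @ G / len(G)
        eigvals, eigvecs = np.linalg.eigh(C)
        eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

        print("leading eigenvalues:", np.round(eigvals[:6], 3))   # a sharp drop after 3 suggests a 3-dim active subspace
        active_basis = eigvecs[:, :3]                             # reduced directions for the forward problem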

  6. Mixture quantification using PLS in plastic scintillation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bagan, H.; Tarancon, A.; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F., E-mail: jfgarcia@ub.ed [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2011-06-15

    This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares; PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), no attempt had been made using PS, which has the great advantage of not producing mixed waste after the measurements are performed. Following this objective, ternary mixtures of alpha and beta emitters (241Am, 137Cs and 90Sr/90Y) have been quantified. Procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of spectra obtained at different values of the pulse shape analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without the use of any pulse shape discrimination, allows quantification of the activities with relative errors of less than 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and its application does not require detectors that include the pulse shape analysis parameter.
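
    A minimal sketch of the PLS2 idea described above, using simulated scintillation spectra built as linear mixtures of three synthetic "pure-emitter" spectra plus noise; the spectral shapes and activity values are stand-ins, not real 241Am/137Cs/90Sr-90Y data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        channels = np.arange(512)

        # Synthetic "pure" spectra for three emitters (broad Gaussian shapes as stand-ins)
        pure = np.vstack([np.exp(-((channels - c) / w) ** 2) for c, w in [(80, 30), (200, 60), (350, 90)]])

        def spectrum(activities):
            return activities @ pure + rng.normal(0, 0.02, size=channels.size)

        # Calibration set: known activity triplets and their simulated spectra
        Y_cal = rng.uniform(0, 100, size=(60, 3))
        X_cal = np.array([spectrum(a) for a in Y_cal])

        pls = PLSRegression(n_components=6).fit(X_cal, Y_cal)

        # Quantify an "unknown" ternary mixture
        true = np.array([40.0, 25.0, 70.0])
        pred = pls.predict(spectrum(true).reshape(1, -1))[0]
        print("true activities:", true)
        print("predicted      :", np.round(pred, 1))
        print("relative errors:", np.round((pred - true) / true * 100, 1), "%")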

  7. Spatial gene expression quantification: a tool for analysis of in situ hybridizations in sea anemone Nematostella vectensis

    Directory of Open Access Journals (Sweden)

    Botman Daniel

    2012-10-01

    Background: Spatial gene expression quantification is required for modeling gene regulation in developing organisms. The fruit fly Drosophila melanogaster is the model system most widely applied for spatial gene expression analysis due to its unique embryonic properties: the shape does not change significantly during its early cleavage cycles and most genes are differentially expressed along a straight axis. This system of development is quite exceptional in the animal kingdom. In the sea anemone Nematostella vectensis the embryo changes its shape during early development; there are cell divisions and cell movement, as in most other metazoans. Nematostella is an attractive case study for spatial gene expression since its transparent body wall makes it accessible to various imaging techniques. Findings: Our new quantification method produces standardized gene expression profiles from raw or annotated Nematostella in situ hybridizations by measuring the expression intensity along its cell layer. The procedure is based on digital morphologies derived from high-resolution fluorescence pictures. Additionally, complete descriptions of nonsymmetric expression patterns have been constructed by transforming the gene expression images into a three-dimensional representation. Conclusions: We created a standard format for gene expression data, which enables quantitative analysis of in situ hybridizations from embryos with various shapes in different developmental stages. The obtained expression profiles are suitable as input for optimization of gene regulatory network models, and for correlation analysis of genes from dissimilar Nematostella morphologies. This approach is potentially applicable to many other metazoan model organisms and may also be suitable for processing data from three-dimensional imaging techniques.

  8. The Regulation of Cytokine Networks in Hippocampal CA1 Differentiates Extinction from Those Required for the Maintenance of Contextual Fear Memory after Recall

    Science.gov (United States)

    Scholz, Birger; Doidge, Amie N.; Barnes, Philip; Hall, Jeremy; Wilkinson, Lawrence S.; Thomas, Kerrie L.

    2016-01-01

    We investigated the distinctiveness of gene regulatory networks in CA1 associated with the extinction of contextual fear memory (CFM) after recall using Affymetrix GeneChip Rat Genome 230 2.0 Arrays. These data were compared to previously published retrieval and reconsolidation-attributed, and consolidation datasets. A stringent dual normalization and pareto-scaled orthogonal partial least-square discriminant multivariate analysis together with a jack-knifing-based cross-validation approach was used on all datasets to reduce false positives. Consolidation, retrieval and extinction were correlated with distinct patterns of gene expression 2 hours later. Extinction-related gene expression was most distinct from the profile accompanying consolidation. A highly specific feature was the discrete regulation of neuroimmunological gene expression associated with retrieval and extinction. Immunity-associated genes of the tyrosine kinase receptor TGFβ and PDGF families, and the TNF family, characterized extinction. Cytokines and proinflammatory interleukins of the IL-1 and IL-6 families were enriched in the no-extinction retrieval condition. We used comparative genomics to predict transcription factor binding sites in proximal promoter regions of the retrieval-regulated genes. Retrieval that does not lead to extinction was associated with NF-κB-mediated gene expression. We confirmed differential NF-κB p65 expression and activity in all of a representative sample of our candidate genes in the no-extinction condition. The differential regulation of cytokine networks after the acquisition and retrieval of CFM identifies the important contribution that neuroimmune signalling plays in normal hippocampal function. Further, targeting cytokine signalling upon retrieval offers a therapeutic strategy to promote extinction mechanisms in human disorders characterised by dysregulation of associative memory. PMID:27224427

  9. Using networking and communications software in business

    CERN Document Server

    McBride, PK

    2014-01-01

    Using Networking and Communications Software in Business covers the importance of networks in a business firm, the benefits of computer communications within a firm, and the cost-benefit in putting up networks in businesses. The book is divided into six parts. Part I looks into the nature and varieties of networks, networking standards, and network software. Part II discusses the planning of a networked system, which includes analyzing the requirements for the network system, the hardware for the network, and network management. The installation of the network system and the network managemen

  10. Local area networking handbook

    OpenAIRE

    O'Hara, Patricia A.

    1990-01-01

    Approved for public release; distribution is unlimited. This thesis provides Navy shore based commands with sufficient information on local area networking to (1) decide if they need a LAN, (2) determine what their networking requirements are, and (3) select a LAN that satisfies their requirements. LAN topologies, transmission media, and medium access methods are described. In addition, the OSI reference model for computer networking and the IEEE 802 LAN standards are explained in detail. ...

  11. Techniques for quantification of liver fat in risk stratification of diabetics

    International Nuclear Information System (INIS)

    Kuehn, J.P.; Spoerl, M.C.; Mahlke, C.; Hegenscheid, K.

    2015-01-01

    Fatty liver disease plays an important role in the development of type 2 diabetes. Accurate techniques for detection and quantification of liver fat are essential for clinical diagnostics. Chemical shift-encoded magnetic resonance imaging (MRI) is a simple approach to quantify liver fat content. Liver fat quantification using chemical shift-encoded MRI is influenced by several bias factors, such as T2* decay, T1 recovery and the multispectral complexity of fat. The confounder corrected proton density fat fraction is a simple approach to quantify liver fat with comparable results independent of the software and hardware used. The proton density fat fraction is an accurate biomarker for assessment of liver fat. An accurate and reproducible quantification of liver fat using chemical shift-encoded MRI requires a calculation of the proton density fat fraction. (orig.)
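
    For orientation, the proton density fat fraction mentioned above is, in essence, the fat signal divided by the sum of fat and water signals, computed voxel-wise after confounder correction; a trivial illustration on made-up arrays (the threshold quoted in the comment is the commonly cited cut-off, not a value from this paper):

        import numpy as np

        # Confounder-corrected water and fat signal maps (illustrative 2x2 "images")
        water = np.array([[900.0, 850.0], [700.0, 950.0]])
        fat   = np.array([[100.0,  50.0], [300.0,  20.0]])

        pdff = fat / (fat + water) * 100.0     # proton density fat fraction in percent
        print(np.round(pdff, 1))               # ~5.5% is a commonly cited threshold for hepatic steatosis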

  12. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  13. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL- penicillamine were demonstrated by Terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, the density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were decided by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of penicillamine enantiomers mixture was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in pharmaceutical industry. (authors)

  14. Estimation of expected short-circuit current levels in and circuit-breaker requirements for the 330 to 750 kV networks of the southern integrated power grid

    Energy Technology Data Exchange (ETDEWEB)

    Krivushkin, L.F.; Gorazeeva, T.F.

    1978-08-01

    Studies were made in order to project the operating levels in the Southern Integrated Power Grid to the year 2000. The short-circuit current levels and the requirements which circuit breakers will have to meet are estimated. A gradual transition from 330 to 750 kV generation is foreseen, with 330 kV networks remaining only for a purely distribution service. The number of 330 kV line hookups and the number of circuit breakers at nodal points (stations and substations) will not change significantly; they will account for 40% of all circuit breakers installed in 25% of all nodal points. Short-circuit currents are expected to reach the 46 kA level in 750 kV networks and 63 kA (standing wave voltage 1.5 to 2.5 kV/microsecond) in 330 kV networks. These are the ratings of circuit breakers; of the 63 kA ones, 150 will be needed by 1980-1990 and 400 by 1990-2000. It will also eventually be worthwhile to install circuit breakers with a 63 kA-750 kV rating.

  15. Learning Networks, Networked Learning

    NARCIS (Netherlands)

    Sloep, Peter; Berlanga, Adriana

    2010-01-01

    Sloep, P. B., & Berlanga, A. J. (2011). Learning Networks, Networked Learning [Redes de Aprendizaje, Aprendizaje en Red]. Comunicar, XIX(37), 55-63. Retrieved from http://dx.doi.org/10.3916/C37-2011-02-05

  16. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    Science.gov (United States)

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with NanoString probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
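
    A minimal sketch of the kind of EM read-assignment step such a quantification module performs, on a toy compatibility matrix recording which reads are consistent with which transcripts; Strawberry's actual model works on splicing-graph nodes and corrects for sequencing bias, which is not shown here, and all data below are invented.

        import numpy as np

        # comp[r, t] = 1 if read r is compatible with transcript t (toy data)
        comp = np.array([[1, 1, 0],
                         [1, 0, 0],
                         [1, 1, 1],
                         [0, 1, 1],
                         [0, 0, 1]], dtype=float)
        eff_len = np.array([1000.0, 1500.0, 500.0])   # effective transcript lengths

        theta = np.full(3, 1 / 3)                      # initial relative abundances
        for _ in range(200):
            # E-step: fractionally assign each read in proportion to theta / effective length
            w = comp * (theta / eff_len)
            w /= w.sum(axis=1, keepdims=True)
            # M-step: re-estimate abundances from the expected read counts
            counts = w.sum(axis=0)
            theta = counts / eff_len
            theta /= theta.sum()

        print("estimated relative abundances:", np.round(theta, 3))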

  17. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  18. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  19. Metal Stable Isotope Tagging: Renaissance of Radioimmunoassay for Multiplex and Absolute Quantification of Biomolecules.

    Science.gov (United States)

    Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong

    2016-05-17

    is the development and application of the mass cytometer, which fully exploited the multiplexing potential of metal stable isotope tagging. It realized the simultaneous detection of dozens of parameters in single cells, accurate immunophenotyping in cell populations, through modeling of intracellular signaling network and undoubted discrimination of function and connection of cell subsets. Metal stable isotope tagging has great potential applications in hematopoiesis, immunology, stem cells, cancer, and drug screening related research and opened a post-fluorescence era of cytometry. Herein, we review the development of biomolecule quantification using metal stable isotope tagging. Particularly, the power of multiplex and absolute quantification is demonstrated. We address the advantages, applicable situations, and limitations of metal stable isotope tagging strategies and propose suggestions for future developments. The transfer of enzymatic or fluorescent tagging to metal stable isotope tagging may occur in many aspects of biological and clinical practices in the near future, just as the revolution from radioactive isotope tagging to fluorescent tagging happened in the past.

  20. Quantification of biopharmaceuticals and biomarkers in complex biological matrices: a comparison of liquid chromatography coupled to tandem mass spectrometry and ligand binding assays

    NARCIS (Netherlands)

    Bults, Peter; van de Merbel, Nico C; Bischoff, Rainer

    2015-01-01

    The quantification of proteins (biopharmaceuticals or biomarkers) in complex biological samples such as blood plasma requires exquisite sensitivity and selectivity, as all biological matrices contain myriads of proteins that are all made of the same 20 proteinogenic amino acids, notwithstanding

  1. Jasmonoyl-l-Isoleucine Coordinates Metabolic Networks Required for Anthesis and Floral Attractant Emission in Wild Tobacco (Nicotiana attenuata)

    Science.gov (United States)

    Stitz, Michael; Hartl, Markus; Baldwin, Ian T.; Gaquerel, Emmanuel

    2014-01-01

    Jasmonic acid and its derivatives (jasmonates [JAs]) play central roles in floral development and maturation. The binding of jasmonoyl-l-isoleucine (JA-Ile) to the F-box of CORONATINE INSENSITIVE1 (COI1) is required for many JA-dependent physiological responses, but its role in anthesis and pollinator attraction traits remains largely unexplored. Here, we used the wild tobacco Nicotiana attenuata, which develops sympetalous flowers with complex pollination biology, to examine the coordinating function of JA homeostasis in the distinct metabolic processes that underlie flower maturation, opening, and advertisement to pollinators. From combined transcriptomic, targeted metabolic, and allometric analyses of transgenic N. attenuata plants for which signaling deficiencies were complemented with methyl jasmonate, JA-Ile, and its functional homolog, coronatine (COR), we demonstrate that (1) JA-Ile/COR-based signaling regulates corolla limb opening and a JA-negative feedback loop; (2) production of floral volatiles (night emissions of benzylacetone) and nectar requires JA-Ile/COR perception through COI1; and (3) limb expansion involves JA-Ile-induced changes in limb fresh mass and carbohydrate metabolism. These findings demonstrate a master regulatory function of the JA-Ile/COI1 duet for the main function of a sympetalous corolla, that of advertising for and rewarding pollinator services. Flower opening, by contrast, requires JA-Ile signaling-dependent changes in primary metabolism, which are not compromised in the COI1-silenced RNA interference line used in this study. PMID:25326292

  2. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
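
    A minimal sketch of colour thresholding for objective quantification: count the fraction of pixels in an RGB image that fall inside a user-defined colour window (here a reddish stain) using plain NumPy. The thresholds and the random stand-in image are illustrative, not the authors' set-up.

        import numpy as np

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)   # stand-in for a micrograph

        # Colour window for a "positive" reaction product (illustrative reddish hue)
        r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
        mask = (r > 150) & (g < 100) & (b < 100) & (r - g > 60)

        positive_fraction = mask.mean()
        print(f"stained area: {positive_fraction:.2%} of the field")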

  3. Recurrence quantification analysis in Liu's attractor

    International Nuclear Information System (INIS)

    Balibrea, Francisco; Caballero, M. Victoria; Molera, Lourdes

    2008-01-01

    Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in the second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor which exists when the control parameter is zero.
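
    A minimal sketch of recurrence quantification on a generic chaotic signal (the logistic map is used here instead of the Liu system simply to avoid integrating ODEs): the recurrence rate changes as the control parameter moves the dynamics between periodic and chaotic regimes. The threshold and series length are illustrative.

        import numpy as np

        def logistic_series(r, n=400, x0=0.4, discard=200):
            """Generate a logistic-map time series after discarding transients."""
            x = x0
            out = []
            for i in range(n + discard):
                x = r * x * (1 - x)
                if i >= discard:
                    out.append(x)
            return np.array(out)

        def recurrence_rate(series, eps=0.05):
            """Fraction of point pairs closer than eps (the simplest RQA measure)."""
            d = np.abs(series[:, None] - series[None, :])
            return (d < eps).mean()

        for r in (3.5, 3.8, 4.0):          # periodic vs chaotic control-parameter values
            print(f"r={r}: RR = {recurrence_rate(logistic_series(r)):.3f}")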

  4. Quantification of coating aging using impedance measurements

    NARCIS (Netherlands)

    Westing, E.P.M. van; Weijde, D.H. van der; Vreijling, M.P.W.; Ferrari, G.M.; Wit, J.H.W. de

    1998-01-01

    This chapter shows the application results of a novel approach to quantify the ageing of organic coatings using impedance measurements. The ageing quantification is based on the typical impedance behaviour of barrier coatings in immersion. This immersion behaviour is used to determine the limiting

  5. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Watanabe, Shunzo; Ooyama, Hiroshi; Hojo, Kei; Tasaki, Hiroichi; Hanazono, Toshihide; Sato, Tokijiro; Metoki, Hirobumi; Totsuka, Motokichi; Oosumi, Noboru.

    1987-01-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on Slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis). (author)

  6. Quantification analysis of CT for aphasic patients

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Ooyama, H.; Hojo, K.; Tasaki, H.; Hanazono, T.; Sato, T.; Metoki, H.; Totsuka, M.; Oosumi, N.

    1987-02-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis).

  7. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.

  8. Quantification of glycyrrhizin biomarker in Glycyrrhiza glabra ...

    African Journals Online (AJOL)

    Background: A simple and sensitive thin-layer chromatographic method has been established for quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations by validated Reverse Phase HPTLC method. Materials and Methods: RP-HPTLC Method was carried out using glass coated with RP-18 ...

  9. Noninvasive Quantification of Pancreatic Fat in Humans

    OpenAIRE

    Lingvay, Ildiko; Esser, Victoria; Legendre, Jaime L.; Price, Angela L.; Wertz, Kristen M.; Adams-Huet, Beverley; Zhang, Song; Unger, Roger H.; Szczepaniak, Lidia S.

    2009-01-01

    Objective: To validate magnetic resonance spectroscopy (MRS) as a tool for non-invasive quantification of pancreatic triglyceride (TG) content and to measure the pancreatic TG content in a diverse human population with a wide range of body mass index (BMI) and glucose control.

  10. Cues, quantification, and agreement in language comprehension.

    Science.gov (United States)

    Tanner, Darren; Bulkes, Nyssa Z

    2015-12-01

    We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.

  11. Network-Centric Applications and Tactical Networks

    National Research Council Canada - National Science Library

    Krout, Timothy; Durbano, Steven; Shearer, Ruth

    2003-01-01

    .... Command and control applications will always require communications capabilities. There are numerous examples of command and control applications that have been developed without adequate attention to the realities of tactical networks...

  12. Network topology analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Kalb, Jeffrey L.; Lee, David S.

    2008-01-01

    Emerging high-bandwidth, low-latency network technology has made network-based architectures both feasible and potentially desirable for use in satellite payload architectures. The selection of network topology is a critical component when developing these multi-node or multi-point architectures. This study examines network topologies and their effect on overall network performance. Numerous topologies were reviewed against a number of performance, reliability, and cost metrics. This document identifies a handful of good network topologies for satellite applications and the metrics used to justify them as such. Since often multiple topologies will meet the requirements of the satellite payload architecture under development, the choice of network topology is not easy, and in the end the choice of topology is influenced by both the design characteristics and requirements of the overall system and the experience of the developer.

  13. Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule and Other Revisions to Part B for CY 2017; Medicare Advantage Bid Pricing Data Release; Medicare Advantage and Part D Medical Loss Ratio Data Release; Medicare Advantage Provider Network Requirements; Expansion of Medicare Diabetes Prevention Program Model; Medicare Shared Savings Program Requirements. Final rule.

    Science.gov (United States)

    2016-11-15

    This major final rule addresses changes to the physician fee schedule and other Medicare Part B payment policies, such as changes to the Value Modifier, to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services, as well as changes in the statute. This final rule also includes changes related to the Medicare Shared Savings Program, requirements for Medicare Advantage Provider Networks, and provides for the release of certain pricing data from Medicare Advantage bids and of data from medical loss ratio reports submitted by Medicare health and drug plans. In addition, this final rule expands the Medicare Diabetes Prevention Program model.

  14. Sludge quantification at water treatment plant and its management scenario.

    Science.gov (United States)

    Ahmad, Tarique; Ahmad, Kafeel; Alam, Mehtab

    2017-08-15

    A large volume of sludge is generated at water treatment plants during the purification of surface water for potable supplies. Handling and disposal of sludge require careful attention from civic bodies, plant operators, and environmentalists. Quantification of the sludge produced at the treatment plants is important to develop suitable management strategies for its economical and environmentally friendly disposal. The present study deals with the quantification of sludge using an empirical relation between turbidity, suspended solids, and coagulant dosing. Seasonal variation has a significant effect on the raw water quality received at the water treatment plants, and consequently sludge generation also varies. Yearly production of sludge in a water treatment plant at Ghaziabad, India, is estimated to be 29,700 tons. Sustainable disposal of such a quantity of sludge is a challenging task under stringent environmental legislation. Several beneficial reuses of sludge in civil engineering and construction work have been identified globally, such as raw material in manufacturing cement, bricks, and artificial aggregates, as cementitious material, and as a sand substitute in preparing concrete and mortar. About 54 to 60% sand, 24 to 28% silt, and 16% clay constitute the sludge generated at the water treatment plant under investigation. Characteristics of the sludge are found suitable for its potential utilization as locally available construction material for safe disposal. An overview of the sustainable management scenario involving beneficial reuses of the sludge is also presented.
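
    For orientation only, a hedged sketch of the kind of empirical sludge estimate the study describes is given below; the coefficients k_ss and k_coag, the flow rate and the water-quality values are placeholders and do not come from the Ghaziabad plant.

        # Hedged sketch of an empirical sludge-production estimate; k_ss and k_coag
        # are placeholder coefficients, not values fitted in the study cited above.
        def daily_sludge_kg(flow_m3_per_day, suspended_solids_mg_L,
                            coagulant_dose_mg_L, k_ss=1.0, k_coag=0.3):
            """Dry solids (kg/day) from raw-water solids plus coagulant residue."""
            # mg/L is numerically equal to g/m3, so flow * concentration gives g/day.
            dry_solids_g_per_m3 = k_ss * suspended_solids_mg_L + k_coag * coagulant_dose_mg_L
            return flow_m3_per_day * dry_solids_g_per_m3 / 1000.0   # g/day -> kg/day

        annual_tonnes = daily_sludge_kg(150_000, 120, 30) * 365 / 1000
        print(f"estimated sludge production: {annual_tonnes:.0f} t/year")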

  15. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore, PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with words, 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  16. Complex Empiricism and the Quantification of Uncertainty in Paleoclimate Reconstructions

    Science.gov (United States)

    Brumble, K. C.

    2014-12-01

    Because the global climate cannot be observed directly, and because of vast and noisy data sets, climate science is a rich field to study how computational statistics informs what it means to do empirical science. Traditionally held virtues of empirical science and empirical methods like reproducibility, independence, and straightforward observation are complicated by representational choices involved in statistical modeling and data handling. Examining how climate reconstructions instantiate complicated empirical relationships between model, data, and predictions reveals that the path from data to prediction does not match traditional conceptions of empirical inference either. Rather, the empirical inferences involved are "complex" in that they require articulation of a good deal of statistical processing wherein assumptions are adopted and representational decisions made, often in the face of substantial uncertainties. Proxy reconstructions are both statistical and paleoclimate science activities aimed at using a variety of proxies to reconstruct past climate behavior. Paleoclimate proxy reconstructions also involve complex data handling and statistical refinement, leading to the current emphasis in the field on the quantification of uncertainty in reconstructions. In this presentation I explore how the processing needed for the correlation of diverse, large, and messy data sets necessitates the explicit quantification of the uncertainties stemming from wrangling proxies into manageable suites. I also address how semi-empirical pseudo-proxy methods allow for the exploration of signal detection in data sets and serve as intermediary steps for statistical experimentation.

  17. Class network routing

    Science.gov (United States)

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.

  18. Virtualized Network Control (VNC)

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Thomas [Univ. of Southern California, Los Angeles, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-01-31

    The focus of this project was on the development of a "Network Service Plane" as an abstraction model for the control and provisioning of multi-layer networks. The primary motivation for this work was the requirements of next generation networked applications which will need to access advanced networking as a first class resource at the same level as compute and storage resources. A new class of "Intelligent Network Services" was defined in order to facilitate the integration of advanced network services into application specific workflows. This new class of network services is intended to enable real-time interaction between the application co-scheduling algorithms and the network for the purposes of workflow planning, real-time resource availability identification, scheduling, and provisioning actions.

  19. Uptake of a Dashboard Designed to Give Realtime Feedback to a Sentinel Network About Key Data Required for Influenza Vaccine Effectiveness Studies.

    Science.gov (United States)

    Pathirannehelage, Sameera; Kumarapeli, Pushpa; Byford, Rachel; Yonova, Ivelina; Ferreira, Filipa; de Lusignan, Simon

    2018-01-01

    Dashboards are technologies that bring together a range of data sources for observational or analytical purposes. We have created a customised dashboard that includes all the key data elements required for monitoring flu vaccine effectiveness (FVE). This delivers a unique dashboard for each primary care provider (general practice) providing data to the Royal College of General Practitioners (RCGP) Research and Surveillance Centre (RSC), one of the oldest European surveillance systems. These FVE studies use a test negative case control (TNCC) design. TNCC requires knowledge of the practice denominator, vaccine exposure, and the results of influenza virology swabs carried out to identify whether an influenza-like illness (ILI), a clinical diagnosis, really is influenza. The dashboard displays the denominator uploaded each week into the surveillance system, compared with the nationally known practice size (providing face validity for the denominator); it identifies those exposed to the vaccine (by age group and risk category), virology specimens taken, and missed opportunities for surveillance (again by category). All sentinel practices can access their rates of vaccine exposure and swabs conducted in near real time (within 4 working days). Initial feedback is positive; 80% (32/40) of practices responded positively.

  20. Bayesian Proteoform Modeling Improves Protein Quantification of Global Proteomic Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Datta, Susmita; Payne, Samuel H.; Kang, Jiyun; Bramer, Lisa M.; Nicora, Carrie D.; Shukla, Anil K.; Metz, Thomas O.; Rodland, Karin D.; Smith, Richard D.; Tardiff, Mark F.; McDermott, Jason E.; Pounds, Joel G.; Waters, Katrina M.

    2014-12-01

    As the capability of mass spectrometry-based proteomics has matured, tens of thousands of peptides can be measured simultaneously, which has the benefit of offering a systems view of protein expression. However, a major challenge is that with an increase in throughput, protein quantification estimation from the native measured peptides has become a computational task. A limitation to existing computationally-driven protein quantification methods is that most ignore protein variation, such as alternate splicing of the RNA transcript and post-translational modifications or other possible proteoforms, which will affect a significant fraction of the proteome. The consequence of this assumption is that statistical inference at the protein level, and consequently downstream analyses, such as network and pathway modeling, have only limited power for biomarker discovery. Here, we describe a Bayesian model (BP-Quant) that uses statistically derived peptide signatures to identify peptides that are outside the dominant pattern, or the existence of multiple over-expressed patterns, to improve relative protein abundance estimates. It is a research-driven approach that utilizes the objectives of the experiment, defined in the context of a standard statistical hypothesis, to identify a set of peptides exhibiting similar statistical behavior relating to a protein. This approach infers that changes in relative protein abundance can be used as a surrogate for changes in function, without necessarily taking into account the effect of differential post-translational modifications, processing, or splicing in altering protein function. We verify the approach using a dilution study from mouse plasma samples and demonstrate that BP-Quant achieves accuracy similar to that of current state-of-the-art methods at proteoform identification with significantly better specificity. BP-Quant is available as MatLab® and R packages at https://github.com/PNNL-Comp-Mass-Spec/BP-Quant.

  1. Islanded operation of distributed networks

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-07-01

    This report summarises the findings of a study to investigate the regulatory, commercial and technical risks and benefits associated with the operation of distributed generation to power an islanded section of distributed network. A review of published literature was carried out, and UK generators were identified who could operate as part of an island network under the existing technical, regulatory, and safety framework. Agreement on case studies for consideration with distributed network operators (DNOs) is discussed as well as the quantification of the risks, benefits and costs of islanding, and the production of a case implementation plan for each case study. Technical issues associated with operating sections of network in islanded mode are described, and impacts of islanding on trading and settlement, and technical and commercial modelling are explored.

  2. Islanded operation of distributed networks

    International Nuclear Information System (INIS)

    2005-01-01

    This report summarises the findings of a study to investigate the regulatory, commercial and technical risks and benefits associated with the operation of distributed generation to power an islanded section of distributed network. A review of published literature was carried out, and UK generators were identified who could operate as part of an island network under the existing technical, regulatory, and safety framework. Agreement on case studies for consideration with distributed network operators (DNOs) is discussed as well as the quantification of the risks, benefits and costs of islanding, and the production of a case implementation plan for each case study. Technical issues associated with operating sections of network in islanded mode are described, and impacts of islanding on trading and settlement, and technical and commercial modelling are explored

  3. Cascading Failures and Recovery in Networks of Networks

    Science.gov (United States)

    Havlin, Shlomo

    Network science has focused on the properties of a single isolated network that does not interact with or depend on other networks. In reality, many real networks, such as power grids, transportation and communication infrastructures, interact with and depend on other networks. I will present a framework for studying the vulnerability and the recovery of networks of interdependent networks. In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This is also the case when some nodes, such as certain locations, play a role in two networks (multiplex networks). This may happen recursively and can lead to a cascade of failures and to a sudden fragmentation of the system. I will present analytical solutions for the critical threshold and the giant component of a network of n interdependent networks. I will show that the general theory has many novel features that are not present in classical network theory. When recovery of components is possible, global spontaneous recovery of the networks and hysteresis phenomena occur, and the theory suggests an optimal repairing strategy for a system of systems. I will also show that interdependent networks embedded in space are significantly more vulnerable compared to non-embedded networks. In particular, small localized attacks may lead to cascading failures and catastrophic consequences. Thus, analyzing data from real networks of networks is highly required to understand system vulnerability. DTRA, ONR, Israel Science Foundation.
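
    A toy Python sketch of the cascade mechanism summarised above, assuming two Erdos-Renyi networks whose same-numbered nodes depend on each other; it is only a schematic illustration of the interdependent-network model, not the speaker's analytical framework.

        import networkx as nx
        import numpy as np

        # Toy cascade in two interdependent networks: node i in A depends on node i
        # in B and vice versa; nodes outside the giant component of their own
        # network fail and take their dependent partners with them.
        rng = np.random.default_rng(1)
        n = 1000
        A = nx.erdos_renyi_graph(n, 4 / n, seed=1)
        B = nx.erdos_renyi_graph(n, 4 / n, seed=2)

        # Initial random attack: remove a fraction of nodes from A (and, through
        # the dependency links, their partners in B).
        attacked = rng.choice(n, size=int(0.4 * n), replace=False)
        A.remove_nodes_from(attacked)
        B.remove_nodes_from(attacked)

        def giant(G):
            return max(nx.connected_components(G), key=len) if G.number_of_nodes() else set()

        # Iterate the cascade until no further nodes fail.
        while True:
            dead_A = set(A.nodes()) - giant(A)
            dead_B = set(B.nodes()) - giant(B)
            if not dead_A and not dead_B:
                break
            A.remove_nodes_from(dead_A | dead_B)
            B.remove_nodes_from(dead_A | dead_B)

        print(f"mutual giant component: {len(giant(A)) / n:.2%} of original nodes")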

  4. vhv supply networks, problems of network structure

    Energy Technology Data Exchange (ETDEWEB)

    Raimbault, J

    1966-04-01

    The present and future power requirements of the Paris area and the structure of the existing networks are discussed. The various limitations that will have to be allowed for in laying down the structure of a regional transmission network, bringing the power of the large national transmission network into the Paris built-up area, are described. The theoretical solution that has been adopted, the features of its final form, which is planned for about the year 2000, and the intermediate stages are given. The problem of the structure of the national power transmission network that is to supply the regional network was also studied; to solve it, a 730 kV network will have to be introduced.

  5. Using an ontology for network attack planning

    CSIR Research Space (South Africa)

    Van Heerden, R

    2016-09-01

    Full Text Available The modern complexity of network attacks and their counter-measures (cyber operations) requires detailed planning. This paper presents a Network Attack Planning ontology which is aimed at providing support for planning such network operations within...

  6. Quantification of rural livelihood dynamics

    DEFF Research Database (Denmark)

    Walelign, Solomon Zena

    Improved understanding of rural livelihoods is required to reduce rural poverty faster. To that end, this PhD study quantified rural livelihood dynamics emphasizing (i) the role of environmental resource use in helping rural households to escape poverty, (ii) development of a new approach ... households. Two groups of attrite households were identified: ‘movers’ (households that left their original location) and ‘non-movers’ (households that still resided in the same location but were not interviewed for different reasons). The findings revealed that (i) total environmental income had a limited role in lifting the poor out of poverty, which could be due to restricted access to more remunerative environmental resources, (ii) the developed approach for livelihood clustering (combining household income and asset variables using regression models) outperforms both existing income and asset approaches, (iii) ...

  7. Hepatic Iron Quantification on 3 Tesla (3 T) Magnetic Resonance (MR): Technical Challenges and Solutions

    Directory of Open Access Journals (Sweden)

    Muhammad Anwar

    2013-01-01

    Full Text Available MR has become a reliable and noninvasive method of hepatic iron quantification. Currently, most of the hepatic iron quantification is performed on 1.5 T MR, and the biopsy measurements have been paired with R2 and R2* values for 1.5 T MR. As the use of 3 T MR scanners is steadily increasing in clinical practice, it has become important to evaluate the practicality of calculating iron burden at 3 T MR. Hepatic iron quantification on 3 T MR requires a better understanding of the process and more stringent technical considerations. The purpose of this work is to focus on the technical challenges in establishing a relationship between T2* values at 1.5 T MR and 3 T MR for hepatic iron concentration (HIC) and to develop an appropriately optimized MR protocol for the evaluation of T2* values in the liver at 3 T magnetic field strength. We studied 22 sickle cell patients using a multiecho fast gradient-echo sequence (MFGRE) at 3 T MR and compared the results with serum ferritin and liver biopsy results. Our study showed that the quantification of hepatic iron on 3 T MRI in sickle cell disease patients correlates well with clinical blood test results and biopsy results. 3 T MR liver iron quantification based on MFGRE can be used for hepatic iron quantification in transfused patients.
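
    As a minimal sketch of the underlying signal model (not the study's calibration against biopsy), the following Python code fits a monoexponential decay to synthetic multi-echo gradient-echo signals to obtain T2* and R2* = 1/T2*; the echo times and signal values are made up.

        import numpy as np
        from scipy.optimize import curve_fit

        # T2* (and R2*) estimation from a synthetic multi-echo gradient-echo decay.
        te_ms = np.array([1.2, 2.4, 3.6, 4.8, 6.0, 7.2, 8.4, 9.6])   # echo times (ms)
        true_t2star = 5.0                                            # ms, hypothetical
        signal = 1000 * np.exp(-te_ms / true_t2star) + np.random.normal(0, 5, te_ms.size)

        def monoexp(te, s0, t2star):
            return s0 * np.exp(-te / t2star)

        (fit_s0, fit_t2star), _ = curve_fit(monoexp, te_ms, signal, p0=(signal[0], 10.0))
        r2star = 1000.0 / fit_t2star      # convert 1/ms to s^-1
        print(f"T2* = {fit_t2star:.2f} ms, R2* = {r2star:.0f} s^-1")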

  8. Rapid quantification of vesicle concentration for DOPG/DOPC and Cardiolipin/DOPC mixed lipid systems of variable composition.

    Science.gov (United States)

    Elmer-Dixon, Margaret M; Bowler, Bruce E

    2018-05-19

    A novel approach to quantify mixed lipid systems is described. Traditional approaches to lipid vesicle quantification are time consuming, require large amounts of material and are destructive. We extend our recently described method for quantification of pure lipid systems to mixed lipid systems. The method only requires a UV-Vis spectrometer and does not destroy the sample. Mie scattering data from absorbance measurements are used as input into a Matlab program to calculate the total vesicle concentration and the concentrations of each lipid in the mixed lipid system. The technique is fast and accurate, which is essential for analytical lipid binding experiments. Copyright © 2018. Published by Elsevier Inc.
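
    For context, a common back-of-the-envelope geometric estimate of vesicle number from total lipid concentration is sketched below; it is not the Mie-scattering method of the paper, and the vesicle diameter, area per lipid and bilayer thickness are assumed typical values used purely as placeholders.

        import numpy as np

        # Rough vesicle count from lipid concentration and vesicle geometry.
        def vesicles_per_litre(lipid_conc_M, diameter_nm=100.0,
                               area_per_lipid_nm2=0.7, bilayer_thickness_nm=4.0):
            r_out = diameter_nm / 2.0
            r_in = r_out - bilayer_thickness_nm
            # Lipids per vesicle: outer plus inner leaflet area over area per lipid.
            lipids_per_vesicle = 4 * np.pi * (r_out**2 + r_in**2) / area_per_lipid_nm2
            avogadro = 6.022e23
            return lipid_conc_M * avogadro / lipids_per_vesicle

        print(f"{vesicles_per_litre(1e-3):.2e} vesicles per litre at 1 mM lipid")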

  9. [Requirements for a cross-location biobank IT infrastructure : Survey of stakeholder input on the establishment of a biobank network of the German Biobank Alliance (GBA)].

    Science.gov (United States)

    Schüttler, C; Buschhüter, N; Döllinger, C; Ebert, L; Hummel, M; Linde, J; Prokosch, H-U; Proynova, R; Lablans, M

    2018-04-24

    The large number of biobanks within Germany results in a high degree of heterogeneity with regard to the IT components used at the respective locations. Within the German Biobank Alliance (GBA), 13 biobanks implement harmonized processes for the provision of biomaterial and accompanying data. The networking of the individual biobanks and the associated harmonisation of the IT infrastructure should facilitate access to biomaterial and related clinical data. For this purpose, the relevant target groups were first identified in order to determine their requirements for IT solutions to be developed in a workshop. Of the seven identified interest groups, three were initially invited to a first round of discussions. The stakeholder input expressed resulted in a catalogue of requirements with regard to IT support for (i) a sample and data request, (ii) the handling of patient consent and inclusion, and (iii) the subsequent evaluation of the sample and data request. The next step is to design the IT solutions as prototypes based on these requirements. In parallel, further user groups are being surveyed in order to be able to further concretise the specifications for development.

  10. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
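
    A minimal sketch of one widely used building block in data assimilation, the stochastic Ensemble Kalman Filter analysis step, assuming a linear observation operator and synthetic numbers; it only illustrates the general machinery, not the author's specific uncertainty quantification methods.

        import numpy as np

        # Stochastic EnKF analysis step with a linear observation operator H.
        rng = np.random.default_rng(0)
        n_state, n_obs, n_ens = 3, 2, 50
        X = rng.normal(size=(n_state, n_ens))            # forecast ensemble (columns)
        H = np.array([[1.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0]])                  # observe first two components
        R = 0.1 * np.eye(n_obs)                          # observation error covariance
        y = np.array([0.5, -0.2])                        # observation vector

        Xm = X.mean(axis=1, keepdims=True)
        A = X - Xm                                       # ensemble anomalies
        Pf = A @ A.T / (n_ens - 1)                       # sample forecast covariance
        K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain

        # Perturbed observations give each member its own innovation.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
        Xa = X + K @ (Y - H @ X)                         # analysis ensemble
        print("analysis mean:", Xa.mean(axis=1))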

  11. Uncertainty Quantification in Alchemical Free Energy Methods.

    Science.gov (United States)

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-05-02

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation (an ensemble of independent MD simulations), which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
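
    A small Python sketch of the ensemble idea described above: the spread over independent replica estimates gives the uncertainty of the free-energy prediction. The ΔG values are synthetic placeholders, not results from the paper.

        import numpy as np

        # Ensemble-based error estimate from independent replica simulations.
        dG_replicas = np.array([-7.8, -8.3, -7.5, -8.1, -7.9, -8.4, -7.6, -8.0])  # kcal/mol

        mean_dG = dG_replicas.mean()
        sem = dG_replicas.std(ddof=1) / np.sqrt(dG_replicas.size)   # standard error

        # Bootstrap as a distribution-free alternative to the analytic SEM.
        rng = np.random.default_rng(0)
        boot = [rng.choice(dG_replicas, dG_replicas.size, replace=True).mean()
                for _ in range(5000)]
        print(f"dG = {mean_dG:.2f} +/- {sem:.2f} kcal/mol (bootstrap sd {np.std(boot):.2f})")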

  12. Level 2 probabilistic event analyses and quantification

    International Nuclear Information System (INIS)

    Boneham, P.

    2003-01-01

    In this paper an example of quantification of a severe accident phenomenological event is given. The analysis performed to assess the probability that the debris released from the reactor vessel was in a coolable configuration in the lower drywell is presented, and the assessment of the type of core/concrete attack that would occur is also analysed. The evaluation of ex-vessel debris coolability by an event in the Simplified Boiling Water Reactor (SBWR) Containment Event Tree (CET), and a detailed Decomposition Event Tree (DET) developed to aid in the quantification of this CET event, are considered. The headings in the DET selected to represent plant physical states (e.g., reactor vessel pressure at the time of vessel failure) and the uncertainties associated with the occurrence of critical physical phenomena (e.g., debris configuration in the lower drywell) considered important to assessing whether the debris was coolable or not coolable ex-vessel are also discussed.

  13. SPECT quantification of regional radionuclide distributions

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1986-01-01

    SPECT quantification of regional radionuclide activities within the human body is affected by several physical and instrumental factors including attenuation of photons within the patient, Compton scattered events, the system's finite spatial resolution and object size, finite number of detected events, partial volume effects, the radiopharmaceutical biokinetics, and patient and/or organ motion. Furthermore, other instrumentation factors such as calibration of the center-of-rotation, sampling, and detector nonuniformities will affect the SPECT measurement process. These factors are described, together with examples of compensation methods that are currently available for improving SPECT quantification. SPECT offers the potential to improve in vivo estimates of absorbed dose, provided the acquisition, reconstruction, and compensation procedures are adequately implemented and utilized. 53 references, 2 figures

  14. Capacitive immunosensor for C-reactive protein quantification

    KAUST Repository

    Sapsanis, Christos

    2015-08-02

    We report an agglutination-based immunosensor for the quantification of C-reactive protein (CRP). The developed immunoassay sensor requires approximately 15 minutes of assay time per sample and provides a sensitivity of 0.5 mg/L. We have measured the capacitance of interdigitated electrodes (IDEs) and quantified the concentration of added analyte. The proposed method is a label free detection method and hence provides rapid measurement preferable in diagnostics. We have so far been able to quantify the concentration to as low as 0.5 mg/L and as high as 10 mg/L. By quantifying CRP in serum, we can assess whether patients are prone to cardiac diseases and monitor the risk associated with such diseases. The sensor is a simple low cost structure and it can be a promising device for rapid and sensitive detection of disease markers at the point-of-care stage.

  15. Uncertainty quantification in computational fluid dynamics and aircraft engines

    CERN Document Server

    Montomoli, Francesco; D'Ammaro, Antonio; Massini, Michela; Salvadori, Simone

    2015-01-01

    This book introduces novel design techniques developed to increase the safety of aircraft engines. The authors demonstrate how the application of uncertainty methods can overcome problems in the accurate prediction of engine lift, caused by manufacturing error. This in turn ameliorates the difficulty of achieving required safety margins imposed by limits in current design and manufacturing methods. This text shows that even state-of-the-art computational fluid dynamics (CFD) methods are not able to predict the same performance as measured in experiments; CFD methods assume idealised geometries but ideal geometries do not exist, cannot be manufactured and their performance differs from real-world ones. By applying geometrical variations of a few microns, the agreement with experiments improves dramatically, but unfortunately the manufacturing errors in engines or in experiments are unknown. In order to overcome this limitation, uncertainty quantification considers the probability density functions of manufacturing errors...

  16. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    Science.gov (United States)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
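
    A hedged sketch of the Monte Carlo approach described above: perturb the retrieval inputs according to assumed error distributions, push them through a stand-in retrieval function, and take the spread of the outputs as an approximation to the sampling distribution. The surrogate function and all numbers are illustrative, not the OCO-2 algorithm.

        import numpy as np

        # Monte Carlo propagation of input uncertainty through a surrogate retrieval.
        rng = np.random.default_rng(42)

        def surrogate_retrieval(radiance, surface_pressure):
            """Stand-in for a real retrieval algorithm (purely illustrative)."""
            return 400.0 + 0.05 * (radiance - 1000.0) - 0.01 * (surface_pressure - 1013.0)

        n_draws = 10_000
        radiance = rng.normal(1000.0, 15.0, n_draws)          # radiometric noise
        pressure = rng.normal(1013.0, 4.0, n_draws)           # ancillary-input error
        xco2 = surrogate_retrieval(radiance, pressure)        # retrieved quantity (ppm)

        print(f"retrieved mean = {xco2.mean():.2f} ppm, 1-sigma = {xco2.std():.2f} ppm")
        print("2.5/97.5 percentiles:", np.percentile(xco2, [2.5, 97.5]).round(2))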

  17. Quantification of Drosophila Grooming Behavior.

    Science.gov (United States)

    Barradale, Francesca; Sinha, Kairav; Lebestky, Tim

    2017-07-19

    Drosophila grooming behavior is a complex multi-step locomotor program that requires coordinated movement of both forelegs and hindlegs. Here we present a grooming assay protocol and novel chamber design that is cost-efficient and scalable for either small or large-scale studies of Drosophila grooming. Flies are dusted all over their body with Brilliant Yellow dye and given time to remove the dye from their bodies within the chamber. Flies are then deposited in a set volume of ethanol to solubilize the dye. The relative spectral absorbance of dye-ethanol samples for groomed versus ungroomed animals are measured and recorded. The protocol yields quantitative data of dye accumulation for individual flies, which can be easily averaged and compared across samples. This allows experimental designs to easily evaluate grooming ability for mutant animal studies or circuit manipulations. This efficient procedure is both versatile and scalable. We show work-flow of the protocol and comparative data between WT animals and mutant animals for the Drosophila type I Dopamine Receptor (DopR).

  18. Uncertainty quantification for hyperbolic and kinetic equations

    CERN Document Server

    Pareschi, Lorenzo

    2017-01-01

    This book explores recent advances in uncertainty quantification for hyperbolic, kinetic, and related problems. The contributions address a range of different aspects, including: polynomial chaos expansions, perturbation methods, multi-level Monte Carlo methods, importance sampling, and moment methods. The interest in these topics is rapidly growing, as their applications have now expanded to many areas in engineering, physics, biology and the social sciences. Accordingly, the book provides the scientific community with a topical overview of the latest research efforts.

  19. Quantification of heterogeneity observed in medical images

    OpenAIRE

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    Background There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging mod...

  20. Artifacts Quantification of Metal Implants in MRI

    Science.gov (United States)

    Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.

    2017-11-01

    The presence of materials with different magnetic properties, such as metal implants, causes distortion of the magnetic field locally, resulting in signal voids and pile ups, i.e. susceptibility artifacts in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study an image-gradient-based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. Then the artifact is quantified in terms of its extent, expressed as an image area percentage, using an automated cross-entropy thresholding method. The proposed method for artifact quantification was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman’s rho = 0.62 and 0.802 in the case of titanium and stainless steel implants). The automated character of the proposed quantification method seems promising towards MRI acquisition parameter optimization.
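
    A minimal Python/scikit-image sketch of the gradient-plus-thresholding idea described above, using Li's minimum cross-entropy threshold as the automated cut-off; the input image is synthetic and the sketch is not the authors' implementation.

        import numpy as np
        from skimage import filters

        # Gradient magnitude, automated cross-entropy threshold, artifact area percentage.
        rng = np.random.default_rng(0)
        image = rng.normal(100, 5, (256, 256))
        image[100:140, 100:140] = 0            # fake signal void near a metal implant

        gradient = filters.sobel(image)                 # gradient magnitude
        threshold = filters.threshold_li(gradient)      # minimum cross-entropy cut-off
        artifact_mask = gradient > threshold

        area_pct = 100.0 * artifact_mask.sum() / artifact_mask.size
        print(f"artifact extent: {area_pct:.1f}% of the slice area")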

  1. Networked inventory management systems: materializing supply chain management

    NARCIS (Netherlands)

    Verwijmeren, M.A.A.P.; Vlist, van der P.; Donselaar, van K.H.

    1996-01-01

    Aims to explain the driving forces for networked inventory management. Discusses major developments with respect to customer requirements, networked organizations and networked inventory management. Presents high level specifications of networked inventory management information systems (NIMISs).

  2. Optical CDMA components requirements

    Science.gov (United States)

    Chan, James K.

    1998-08-01

    Optical CDMA is a complementary multiple access technology to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC or supports a large number of high-speed users in LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by inherent CDMA coding). However, optical CDMA technology is less mature in comparison to WDMA. The component requirements are also different from those of WDMA. We have demonstrated a video transport/switching system over a distance of 40 Km using discrete optical components in our laboratory. We are currently pursuing PIC implementation. In this paper, we will describe the optical CDMA concept/features, the demonstration system, and the requirements of some critical optical components such as broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and fixed/programmable mask.

  3. Learning OpenStack networking (Neutron)

    CERN Document Server

    Denton, James

    2014-01-01

    If you are an OpenStack-based cloud operator with experience in OpenStack Compute and nova-network but are new to Neutron networking, then this book is for you. Some networking experience is recommended, and a physical network infrastructure is required to provide connectivity to instances and other network resources configured in the book.

  4. Real-time quantitative PCR for retrovirus-like particle quantification in CHO cell culture.

    Science.gov (United States)

    de Wit, C; Fautz, C; Xu, Y

    2000-09-01

    Chinese hamster ovary (CHO) cells have been widely used to manufacture recombinant proteins intended for human therapeutic uses. Retrovirus-like particles, which are apparently defective and non-infectious, have been detected in all CHO cells by electron microscopy (EM). To assure viral safety of CHO cell-derived biologicals, quantification of retrovirus-like particles in production cell culture and demonstration of sufficient elimination of such retrovirus-like particles by the down-stream purification process are required for product market registration worldwide. EM, with a detection limit of 1×10^6 particles/ml, is the standard retrovirus-like particle quantification method. The whole process, which requires a large amount of sample (3-6 litres), is labour intensive, time consuming, expensive, and subject to significant assay variability. In this paper, a novel real-time quantitative PCR assay (TaqMan assay) has been developed for the quantification of retrovirus-like particles. Each retrovirus particle contains two copies of the viral genomic particle RNA (pRNA) molecule. Therefore, quantification of retrovirus particles can be achieved by quantifying the pRNA copy number, i.e. every two copies of retroviral pRNA are equivalent to one retrovirus-like particle. The TaqMan assay takes advantage of the 5'→3' exonuclease activity of Taq DNA polymerase and utilizes the PRISM 7700 Sequence Detection System of PE Applied Biosystems (Foster City, CA, U.S.A.) for automated pRNA quantification through a dual-labelled fluorogenic probe. The TaqMan quantification technique is highly comparable to the EM analysis. In addition, it offers significant advantages over the EM analysis, such as a higher sensitivity of less than 600 particles/ml, greater accuracy and reliability, higher sample throughput, more flexibility and lower cost. Therefore, the TaqMan assay should be used as a substitute for EM analysis for retrovirus-like particle quantification in CHO cell
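
    A short sketch of the quantification logic: a qPCR standard curve converts a measured Ct value into pRNA copies, and two copies are counted per retrovirus-like particle. The standard-curve values below are synthetic and only illustrate the arithmetic.

        import numpy as np

        # Synthetic qPCR standard curve: Ct versus log10 copy number.
        log10_copies_std = np.array([3, 4, 5, 6, 7, 8], dtype=float)
        ct_std = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3])   # hypothetical standards

        slope, intercept = np.polyfit(log10_copies_std, ct_std, 1)
        efficiency = 10 ** (-1 / slope) - 1                       # ideal slope ~ -3.32

        def particles_per_ml(ct_sample, volume_ml=1.0):
            copies = 10 ** ((ct_sample - intercept) / slope)
            return copies / 2 / volume_ml        # two pRNA copies per particle

        print(f"PCR efficiency ~ {efficiency:.0%}")
        print(f"sample at Ct 24.5 ~ {particles_per_ml(24.5):.2e} particles/ml")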

  5. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
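
    For illustration, a hedged sketch of a variance-based (Sobol) global sensitivity analysis on a cheap surrogate, using the SALib package's classic interface; the three input parameters, their ranges and the surrogate model are invented stand-ins for the scramjet uncertainties discussed above, not quantities from the paper.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Variance-based sensitivity analysis on a cheap surrogate quantity of interest.
        problem = {
            "num_vars": 3,
            "names": ["inflow_mach", "fuel_rate", "wall_temp"],
            "bounds": [[2.0, 3.0], [0.8, 1.2], [600.0, 900.0]],
        }

        X = saltelli.sample(problem, 1024)                 # Saltelli sampling design

        def surrogate_qoi(x):
            mach, fuel, temp = x
            return fuel * np.sin(mach) + 0.001 * temp + 0.1 * mach * fuel

        Y = np.apply_along_axis(surrogate_qoi, 1, X)
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: first-order {s1:.2f}, total-order {st:.2f}")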

  6. Quantification of arbuscular mycorrhizal fungal DNA in roots: how important is material preservation?

    Science.gov (United States)

    Janoušková, Martina; Püschel, David; Hujslová, Martina; Slavíková, Renata; Jansa, Jan

    2015-04-01

    Monitoring populations of arbuscular mycorrhizal fungi (AMF) in roots is a pre-requisite for improving our understanding of AMF ecology and functioning of the symbiosis in natural conditions. Among other approaches, quantification of fungal DNA in plant tissues by quantitative real-time PCR is one of the advanced techniques with a great potential to process large numbers of samples and to deliver truly quantitative information. Its application potential would greatly increase if the samples could be preserved by drying, but little is currently known about the feasibility and reliability of fungal DNA quantification from dry plant material. We addressed this question by comparing quantification results based on dry root material to those obtained from deep-frozen roots of Medicago truncatula colonized with Rhizophagus sp. The fungal DNA was well conserved in the dry root samples with overall fungal DNA levels in the extracts comparable with those determined in extracts of frozen roots. There was, however, no correlation between the quantitative data sets obtained from the two types of material, and data from dry roots were more variable. Based on these results, we recommend dry material for qualitative screenings but advocate using frozen root materials if precise quantification of fungal DNA is required.

  7. A nuclear DNA-based species determination and DNA quantification assay for common poultry species.

    Science.gov (United States)

    Ng, J; Satkoski, J; Premasuthan, A; Kanthaswamy, S

    2014-12-01

    DNA testing for food authentication and quality control requires sensitive species-specific quantification of nuclear DNA from complex and unknown biological sources. We have developed a multiplex assay based on TaqMan® real-time quantitative PCR (qPCR) for species-specific detection and quantification of chicken (Gallus gallus), duck (Anas platyrhynchos), and turkey (Meleagris gallopavo) nuclear DNA. The multiplex assay is able to accurately detect very low quantities of species-specific DNA from single or multispecies sample mixtures; its minimum effective quantification range is 5 to 50 pg of starting DNA material. In addition to its use in food fraudulence cases, we have validated the assay using simulated forensic sample conditions to demonstrate its utility in forensic investigations. Despite treatment with potent inhibitors such as hematin and humic acid, and degradation of template DNA by DNase, the assay was still able to robustly detect and quantify DNA from each of the three poultry species in mixed samples. The efficient species determination and accurate DNA quantification will help reduce fraudulent food labeling and facilitate downstream DNA analysis for genetic identification and traceability.

  8. Planar imaging quantification using 3D attenuation correction data and Monte Carlo simulated buildup factors

    International Nuclear Information System (INIS)

    Miller, C.; Filipow, L.; Jackson, S.; Riauka, T.

    1996-01-01

    A new method to correct for attenuation and the buildup of scatter in planar imaging quantification is presented. The method is based on the combined use of 3D density information provided by computed tomography to correct for attenuation and the application of Monte Carlo simulated buildup factors to correct for buildup in the projection pixels. CT and nuclear medicine images were obtained for a purpose-built nonhomogeneous phantom that models the human anatomy in the thoracic and abdominal regions. The CT transverse slices of the phantom were converted to a set of consecutive density maps. An algorithm was developed that projects the 3D information contained in the set of density maps to create opposing pairs of accurate 2D correction maps that were subsequently applied to planar images acquired from a dual-head gamma camera. A comparison of results obtained by the new method and the geometric mean approach based on published techniques is presented for some of the source arrangements used. Excellent results were obtained for various source - phantom configurations used to evaluate the method. Activity quantification of a line source at most locations in the nonhomogeneous phantom produced errors of less than 2%. Additionally, knowledge of the actual source depth is not required for accurate activity quantification. Quantification of volume sources placed in foam, Perspex and aluminium produced errors of less than 7% for the abdominal and thoracic configurations of the phantom. (author)
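
    For context, the classical conjugate-view (geometric-mean) estimate that such methods build on can be sketched as below; because the source depth cancels in the geometric mean, depth knowledge is not needed, consistent with the abstract. Buildup and scatter corrections, the actual subject of the study, are omitted, and all numbers are illustrative.

        import numpy as np

        # Conjugate-view activity estimate for a point source in a uniform attenuator
        # of thickness T: sqrt(I_ant * I_post) = C * A * exp(-mu*T/2), so the depth
        # of the source within the attenuator drops out.
        def conjugate_view_activity(counts_ant, counts_post, mu_cm, thickness_cm,
                                    sensitivity_cps_per_MBq):
            geometric_mean = np.sqrt(counts_ant * counts_post)
            attenuation = np.exp(-mu_cm * thickness_cm / 2.0)
            return geometric_mean / (attenuation * sensitivity_cps_per_MBq)

        # Example: 1200 and 800 counts/s, mu = 0.12 cm^-1, 22 cm thick phantom.
        print(f"{conjugate_view_activity(1200, 800, 0.12, 22, 25):.1f} MBq")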

  9. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  10. Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD

    Directory of Open Access Journals (Sweden)

    Sabine Henders

    2012-01-01

    Full Text Available This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD) mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM), the Verified Carbon Standard (VCS), the Climate Action Reserve (CAR), the CarbonFix Standard (CFS), and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of methods assessed address either primary or secondary leakage; the former mostly on a local or regional scale and the latter on a national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from scientific literature.

  11. Wireless networked music performance

    CERN Document Server

    Gabrielli, Leonardo

    2016-01-01

    This book presents a comprehensive overview of the state of the art in Networked Music Performance (NMP) and a historical survey of computer music networking. It introduces current technical trends in NMP and technical issues yet to be addressed. It also lists wireless communication protocols and compares these to the requirements of NMP. Practical use cases and advancements are also discussed.

  12. Fully automatic detection and segmentation of abdominal aortic thrombus in post-operative CTA images using Deep Convolutional Neural Networks.

    Science.gov (United States)

    López-Linares, Karen; Aranjuelo, Nerea; Kabongo, Luis; Maclair, Gregory; Lete, Nerea; Ceresa, Mario; García-Familiar, Ainhoa; Macía, Iván; González Ballester, Miguel A

    2018-05-01

    Computerized Tomography Angiography (CTA) based follow-up of Abdominal Aortic Aneurysms (AAA) treated with Endovascular Aneurysm Repair (EVAR) is essential to evaluate the progress of the patient and detect complications. In this context, accurate quantification of post-operative thrombus volume is required. However, a proper evaluation is hindered by the lack of automatic, robust and reproducible thrombus segmentation algorithms. We propose a new fully automatic approach based on Deep Convolutional Neural Networks (DCNN) for robust and reproducible thrombus region of interest detection and subsequent fine thrombus segmentation. The DetecNet detection network is adapted to perform region of interest extraction from a complete CTA and a new segmentation network architecture, based on Fully Convolutional Networks and a Holistically-Nested Edge Detection Network, is presented. These networks are trained, validated and tested on 13 post-operative CTA volumes of different patients using a 4-fold cross-validation approach to provide more robustness to the results. Our pipeline achieves a Dice score of more than 82% for post-operative thrombus segmentation and provides a mean relative volume difference between ground truth and automatic segmentation that lies within the experienced human observer variance without the need for human intervention in most common cases. Copyright © 2018 Elsevier B.V. All rights reserved.
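
    A minimal sketch of the Dice overlap score quoted above, computed between a predicted and a ground-truth binary mask; the masks here are synthetic and purely illustrative.

        import numpy as np

        # Dice coefficient between two binary segmentation masks.
        def dice(pred, truth):
            pred, truth = pred.astype(bool), truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            denom = pred.sum() + truth.sum()
            return 2.0 * intersection / denom if denom else 1.0

        truth = np.zeros((64, 64), dtype=bool)
        truth[20:40, 20:40] = True
        pred = np.zeros((64, 64), dtype=bool)
        pred[22:42, 21:41] = True
        print(f"Dice = {dice(pred, truth):.3f}")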

  13. Declarative Networking

    CERN Document Server

    Loo, Boon Thau

    2012-01-01

    Declarative Networking is a programming methodology that enables developers to concisely specify network protocols and services, which are directly compiled to a dataflow framework that executes the specifications. Declarative networking proposes the use of a declarative query language for specifying and implementing network protocols, and employs a dataflow framework at runtime for communication and maintenance of network state. The primary goal of declarative networking is to greatly simplify the process of specifying, implementing, deploying and evolving a network design. In addition, decla

  14. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  15. Quantification of organ motion during chemoradiotherapy of rectal cancer using cone-beam computed tomography.

    LENUS (Irish Health Repository)

    Chong, Irene

    2011-11-15

    There has been no previously published data related to the quantification of rectal motion using cone-beam computed tomography (CBCT) during standard conformal long-course chemoradiotherapy. The purpose of the present study was to quantify the interfractional changes in rectal movement and dimensions and rectal and bladder volume using CBCT and to quantify the bony anatomy displacements to calculate the margins required to account for systematic (Σ) and random (σ) setup errors.
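
    For context, a widely used recipe for converting these systematic (Σ) and random (σ) setup-error components into a CTV-to-PTV margin, not necessarily the one applied in this study, is the van Herk formula M = 2.5 Σ + 0.7 σ.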

  16. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J.M.

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  17. Characterization of 3D PET systems for accurate quantification of myocardial blood flow

    OpenAIRE

    Renaud, Jennifer M.; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Éric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C.; Turkington, Timothy G

    2016-01-01

    Three-dimensional (3D) mode imaging is the current standard for positron emission tomography-computed tomography (PET-CT) systems. Dynamic imaging for quantification of myocardial blood flow (MBF) with short-lived tracers, such as Rb-82- chloride (Rb-82), requires accuracy to be maintained over a wide range of isotope activities and scanner count-rates. We propose new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative...

  18. Requirements management: A CSR's perspective

    Science.gov (United States)

    Thompson, Joanie

    1991-01-01

    The following subject areas are covered: customer service overview of network service request processing; Customer Service Representative (CSR) responsibility matrix; extract from a sample Memorandum of Understanding; Network Service Request Form and its instructions sample notification of receipt; and requirements management in the NASA Science Internet.

  19. OpenFlow Switching Performance using Network Simulator - 3

    OpenAIRE

    Sriram Prashanth, Naguru

    2016-01-01

    Context. In today's fast-moving networking world, there is a rapid expansion of switches and protocols intended to cope with growing customer requirements. With increasing demand for higher bandwidth and lower latency, new network paths are introduced to meet these requirements. To reduce network load in present switching networks, the development of new, innovative switching is required. These required results can be achieved by Software Defined Networking or Trad...

  20. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    Science.gov (United States)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel from NATO Science and Technology Organization, with the Task Group number AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework and implemented reduced-order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  1. Real-time ligation chain reaction for DNA quantification and identification on the FO-SPR.

    Science.gov (United States)

    Knez, Karel; Spasic, Dragana; Delport, Filip; Lammertyn, Jeroen

    2015-05-15

    Different assays have been developed in past years to meet point-of-care diagnostic test requirements for fast and sensitive quantification and identification of targets. In this paper, we developed the ligation chain reaction (LCR) assay on the Fiber Optic Surface Plasmon Resonance (FO-SPR) platform, which enabled simultaneous quantification and cycle-to-cycle identification of DNA during amplification. The newly developed assay incorporated the FO-SPR DNA melting assay previously developed by our group. This required establishment of several assay parameters, including buffer ionic strength and thermal ramping speed, as these parameters influence both the ligation enzyme performance and the hybridization yield of the gold nanoparticles (Au NPs) on the FO-SPR sensor. Quantification and identification of DNA targets was achieved over a wide concentration range, with a calibration curve spanning 7 orders of magnitude and an LOD of 13.75 fM. Moreover, the FO-SPR LCR assay could discriminate single nucleotide polymorphisms (SNPs) without any post-reaction analysis, thus featuring all the essential requirements of POC tests. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - Quantification in SPECT, problems and correction methods: attenuation, scattering, non-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  3. Digital PCR for direct quantification of viruses without DNA extraction.

    Science.gov (United States)

    Pavšič, Jernej; Žel, Jana; Milavec, Mojca

    2016-01-01

    DNA extraction before amplification is considered an essential step for quantification of viral DNA using real-time PCR (qPCR). However, this can directly affect the final measurements due to variable DNA yields and removal of inhibitors, which leads to increased inter-laboratory variability of qPCR measurements and reduced agreement on viral loads. Digital PCR (dPCR) might be an advantageous methodology for the measurement of virus concentrations, as it does not depend on any calibration material and it has higher tolerance to inhibitors. DNA quantification without an extraction step (i.e. direct quantification) was performed here using dPCR and two different human cytomegalovirus whole-virus materials. Two dPCR platforms were used for this direct quantification of the viral DNA, and these were compared with quantification of the extracted viral DNA in terms of yield and variability. Direct quantification of both whole-virus materials present in simple matrices like cell lysate or Tris-HCl buffer provided repeatable measurements of virus concentrations that were probably in closer agreement with the actual viral load than when estimated through quantification of the extracted DNA. Direct dPCR quantification of other viruses, reference materials and clinically relevant matrices is now needed to show the full versatility of this very promising and cost-efficient development in virus quantification.
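
    dPCR is calibration-free because the target concentration follows from Poisson statistics on the fraction of positive partitions; a minimal sketch of that standard calculation (with purely illustrative numbers, not data from this study) is shown below.

    ```python
    # Standard Poisson-based dPCR quantification (illustrative values only).
    import math

    positives = 4200                 # hypothetical number of positive partitions
    total = 20000                    # hypothetical total number of partitions
    partition_volume_ul = 0.85e-3    # hypothetical partition volume in microliters

    p = positives / total
    copies_per_partition = -math.log(1.0 - p)          # Poisson correction
    copies_per_ul = copies_per_partition / partition_volume_ul

    print(f"{copies_per_ul:.0f} copies/uL in the reaction")
    ```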

  4. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against...... human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification...

  5. Quantification procedures in micro X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Kanngiesser, Birgit

    2003-01-01

    For quantification in micro X-ray fluorescence analysis, standard-free quantification procedures have become especially important. An introduction to the basic concepts of these quantification procedures is given, followed by a short survey of the procedures which are available now and of the kinds of experimental situations and analytical problems they address. The last point is extended by the description of an in-house development of the fundamental parameter method, which makes it possible to include non-parallel beam geometries. Finally, open problems for the quantification procedures are discussed

  6. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data respectively. Classification
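
    A hedged sketch of the machine-learning arm of such a comparison, simplified to a support vector machine on SBR-style features with repeated stratified cross-validation and no nested hyper-parameter search; the feature matrix is synthetic and is not the authors' pipeline or data.

    ```python
    # Hedged sketch: SVM classification of SBR-style features with stratified CV.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    # 4 synthetic features standing in for left/right putamen and caudate SBRs
    X_normal = rng.normal(loc=2.5, scale=0.4, size=(100, 4))
    X_pd = rng.normal(loc=1.5, scale=0.4, size=(100, 4))
    X = np.vstack([X_normal, X_pd])
    y = np.array([0] * 100 + [1] * 100)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
    scores = cross_val_score(clf, X, y, cv=cv)
    print(f"mean accuracy: {scores.mean():.3f}")
    ```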

  7. Operational risk quantification and modelling within Romanian insurance industry

    Directory of Open Access Journals (Sweden)

    Tudor Răzvan

    2017-07-01

    Full Text Available This paper aims at covering and describing the shortcomings of the various models used to quantify and model operational risk within the insurance industry, with a particular focus on Romanian specific regulation: Norm 6/2015 concerning the operational risk arising from IT systems. While most of the local insurers are focusing on implementing the standard model to compute the operational risk solvency capital requirement, the local regulator has issued a norm that requires the identification and assessment of IT-based operational risks from an ISO 27001 perspective. The challenges raised by the correlations assumed in the standard model are substantially increased by this new regulation, which requires only the identification and quantification of the IT operational risks. The Solvency II solvency capital requirement does not prescribe a model or formula for integrating the newly identified risks into the operational risk capital requirement. In this context we assess the academic and practitioners' understanding of the Frequency-Severity approach, Bayesian estimation techniques, Scenario Analysis and Risk Accounting based on risk units, and how they could support the modelling of IT-based operational risks. Developing an internal model only for the operational risk capital requirement has proved, so far, costly and not necessarily beneficial for the local insurers. As the IT component will play a key role in the future of the insurance industry, the result of this analysis provides a specific approach to operational risk modelling that can be implemented in the context of Solvency II, particularly when (internal or external) operational risk databases are scarce or not available.
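
    As a hedged illustration of the frequency-severity approach mentioned above (not the paper's calibration), annual operational losses can be simulated by compounding a Poisson event count with a lognormal loss size and reading a high quantile of the aggregate distribution as a capital proxy. All parameter values below are purely illustrative.

    ```python
    # Hedged frequency-severity Monte Carlo for an aggregate annual loss distribution.
    import numpy as np

    rng = np.random.default_rng(42)
    n_years = 100_000
    lam = 3.0                 # expected number of IT operational events per year
    mu, sigma = 10.0, 1.2     # lognormal severity parameters (log-scale)

    counts = rng.poisson(lam, size=n_years)
    annual_loss = np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

    var_995 = np.quantile(annual_loss, 0.995)   # 99.5% quantile, a Solvency II-style proxy
    print(f"mean annual loss: {annual_loss.mean():,.0f}")
    print(f"99.5% quantile:   {var_995:,.0f}")
    ```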

  8. Quantification of rutile in anatase by X-ray diffraction

    International Nuclear Information System (INIS)

    Chavez R, A.

    2001-01-01

    Nowadays, the discovery of new and better materials required in all areas of industry has led researchers into this small yet vast world. Crystalline materials have markedly directional properties, and quantitative analysis of such materials is not an easy task. The main objective of this work is the study of a real problem, its solution, and the refinement of a technique involving the theoretical and experimental principles that allow the quantification of crystalline phases. Chapter 1 deals with the study of the crystalline state over the last century by means of the X-ray diffraction technique. Chapter 2 covers the nature and production of X-rays, and chapter 3 presents the principles of the diffraction technique, which apply when the Bragg law is satisfied, studying the powder diffraction method and its applications. Chapter 4 explains how the intensities of the diffracted beams are determined by the positions of the atoms within the unit cell of the crystal. The properties of the crystalline samples of anatase and rutile are described in chapter 5. The results of this analysis are the information processed by means of the auxiliary software Diffrac AT, Axum and Peakfit, as well as the TAFOR and CUANTI programs; this part is described in more detail in chapters 6 and 7, where the function of each program is covered step by step, up to the quantification of crystalline phases, the objective of this work. Finally, chapter 8 contains the analysis of results and conclusions. The contribution of this work is aimed at academic institutions with limited resources, which can tackle the characterization of materials in this way. (Author)

  9. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic, because computer accuracy is limited. Inaccuracy can arise in different ways: for example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summing many very different numbers, etc. The basic objective of this paper is to find a procedure which eliminates errors made by a PC when calculations close to an error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is based on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for exact summation of such numbers, is designed in the paper. The summation procedure exploits a special number system with base 2^32. Computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from references are performed to emphasize merits of the methodology.
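
    The summation pitfall described above is easy to reproduce; the sketch below contrasts naive floating-point summation with an error-free alternative (Python's math.fsum). It only illustrates the problem and is not the paper's base-2^32 algorithm.

    ```python
    # Adding many numbers of very different magnitude: naive sum vs. exact sum.
    import math

    values = [1.0] + [1e-16] * 1_000_000   # one large term plus many tiny ones

    naive = 0.0
    for v in values:
        naive += v                  # tiny terms are lost against the running total

    exact = math.fsum(values)       # correctly rounded (error-free) summation

    print(naive)   # 1.0            -> the 1e-10 total contribution has vanished
    print(exact)   # 1.0000000001   -> tiny terms are preserved
    ```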

  10. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method, as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which provide potential reasons for poor genotyping results and may indicate methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources, and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
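
    The combination step rests on standard first-order uncertainty propagation; a generic form is shown below as an assumption about the structure of such a propagation equation, not as the paper's exact expression.

    ```latex
    % First-order propagation of planar and calibration uncertainties into a
    % reconstructed velocity component w = f(u_1, u_2, c):
    \sigma_w^2 \;=\; \left(\frac{\partial f}{\partial u_1}\right)^2 \sigma_{u_1}^2
               \;+\; \left(\frac{\partial f}{\partial u_2}\right)^2 \sigma_{u_2}^2
               \;+\; \left(\frac{\partial f}{\partial c}\right)^2 \sigma_{c}^2
    ```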

  12. Quantification of thermal damage in skin tissue

    Institute of Scientific and Technical Information of China (English)

    Xu Feng; Wen Ting; Lu Tianjian; Seffen Keith

    2008-01-01

    Skin thermal damage, or skin burns, is the most commonly encountered type of trauma in civilian and military communities. In addition, advances in laser, microwave and similar technologies have led to recent developments of thermal treatments for disease and damage involving skin tissue, where the objective is to induce thermal damage precisely within targeted tissue structures without affecting the surrounding healthy tissue. Further, extended pain sensation induced by thermal damage has also caused great problems for burn patients. Thus, it is of great importance to quantify the thermal damage in skin tissue. In this paper, the available models and experimental methods for quantification of thermal damage in skin tissue are discussed.
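
    One classical quantification of thermal damage in this literature is the Henriques-Moritz Arrhenius damage integral; its standard form is shown below as background, with the caveat that the specific models compared in the paper may differ.

    ```latex
    % Arrhenius thermal damage integral: Omega accumulates over the exposure time,
    % with A a frequency factor, E_a an activation energy, R the gas constant and
    % T(tau) the tissue temperature history; Omega on the order of 1 is commonly
    % taken as the threshold of irreversible damage.
    \Omega(t) = \int_0^{t} A \, \exp\!\left(-\frac{E_a}{R\,T(\tau)}\right) d\tau
    ```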

  13. Uncertainty quantification for PZT bimorph actuators

    Science.gov (United States)

    Bravo, Nikolas; Smith, Ralph C.; Crews, John

    2018-03-01

    In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. We develop the high-fidelity model for the actuator using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model. We include local and global sensitivity analysis of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.

  14. Linking probe thermodynamics to microarray quantification

    International Nuclear Information System (INIS)

    Li, Shuzhao; Pozhitkov, Alexander; Brouwer, Marius

    2010-01-01

    Understanding the difference in probe properties holds the key to absolute quantification of DNA microarrays. So far, Langmuir-like models have failed to link sequence-specific properties to hybridization signals in the presence of a complex hybridization background. Data from washing experiments indicate that the post-hybridization washing has no major effect on the specifically bound targets, which give the final signals. Thus, the amount of specific targets bound to probes is likely determined before washing, by the competition against nonspecific binding. Our competitive hybridization model is a viable alternative to Langmuir-like models. (comment)
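
    For reference, the Langmuir-like models the authors argue against relate signal to target concentration through a saturating isotherm of the form below; the competitive hybridization model proposed in the record modifies this picture rather than using it.

    ```latex
    % Langmuir-type hybridization isotherm: signal I as a function of target
    % concentration c, saturation signal I_max and equilibrium constant K.
    I(c) = I_{\max}\,\frac{c}{c + K}
    ```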

  15. Image cytometry: nuclear and chromosomal DNA quantification.

    Science.gov (United States)

    Carvalho, Carlos Roberto; Clarindo, Wellington Ronildo; Abreu, Isabella Santiago

    2011-01-01

    Image cytometry (ICM) associates microscopy, digital image and software technologies, and has been particularly useful in spatial and densitometric cytological analyses, such as DNA ploidy and DNA content measurements. Basically, ICM integrates methodologies of optical microscopy calibration, standard density filters, digital CCD camera, and image analysis softwares for quantitative applications. Apart from all system calibration and setup, cytological protocols must provide good slide preparations for efficient and reliable ICM analysis. In this chapter, procedures for ICM applications employed in our laboratory are described. Protocols shown here for human DNA ploidy determination and quantification of nuclear and chromosomal DNA content in plants could be used as described, or adapted for other studies.

  16. Adjoint-Based Uncertainty Quantification with MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
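
    Adjoint/perturbation-based uncertainty quantification typically combines sensitivities with a nuclear-data covariance matrix through the first-order "sandwich" rule; the generic form is shown below as background (the details of the MCNP6 implementation follow the report itself).

    ```latex
    % First-order ("sandwich") propagation of nuclear-data uncertainties:
    % S is the vector of relative sensitivities of the response R to the data,
    % and C is the covariance matrix of that data.
    \left(\frac{\sigma_R}{R}\right)^2 = S^{\mathsf{T}} \, C \, S
    ```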

  17. Uncertainty quantification and stochastic modeling with Matlab

    CERN Document Server

    Souza de Cursi, Eduardo

    2015-01-01

    Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no

  18. Reading and comparative quantification of perfusion myocardium tomo-scintigraphy realised by gamma camera and semiconductors camera

    International Nuclear Information System (INIS)

    Merlin, C.; Gauthe, M.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F.; Motreff, P.

    2010-01-01

    By offering high quality images, semiconductor cameras represent an undeniable technological progress. The interpretation of examinations, however, requires a learning phase. The optimization of quantification software should confirm the superiority of the D-SPECT for the measurement of kinetic parameters. (N.C.)

  19. Reverse transcriptase real-time PCR for detection and quantification of viable Campylobacter jejuni directly from poultry faecal samples

    DEFF Research Database (Denmark)

    Bui, Thanh Xuan; Wolff, Anders; Madsen, Mogens

    2012-01-01

    Campylobacter spp. is the most common cause of bacterial diarrhoea in humans worldwide. Therefore, rapid and reliable methods for detection and quantification of this pathogen are required. In this study, we have developed a reverse transcription quantitative real-time PCR (RT-qPCR) for detection a...

  20. Parsing and Quantification of Raw Orbitrap Mass Spectrometer Data Using RawQuant.

    Science.gov (United States)

    Kovalchik, Kevin A; Moggridge, Sophie; Chen, David D Y; Morin, Gregg B; Hughes, Christopher S

    2018-06-01

    Effective analysis of protein samples by mass spectrometry (MS) requires careful selection and optimization of a range of experimental parameters. As the output from the primary detection device, the "raw" MS data file can be used to gauge the success of a given sample analysis. However, the closed-source nature of the standard raw MS file can complicate effective parsing of the data contained within. To ease and increase the range of analyses possible, the RawQuant tool was developed to enable parsing of raw MS files derived from Thermo Orbitrap instruments to yield meta and scan data in an openly readable text format. RawQuant can be commanded to export user-friendly files containing MS1, MS2, and MS3 metadata as well as matrices of quantification values based on isobaric tagging approaches. In this study, the utility of RawQuant is demonstrated in several scenarios: (1) reanalysis of shotgun proteomics data for the identification of the human proteome, (2) reanalysis of experiments utilizing isobaric tagging for whole-proteome quantification, and (3) analysis of a novel bacterial proteome and synthetic peptide mixture for assessing quantification accuracy when using isobaric tags. Together, these analyses successfully demonstrate RawQuant for the efficient parsing and quantification of data from raw Thermo Orbitrap MS files acquired in a range of common proteomics experiments. In addition, the individual analyses using RawQuant highlight parametric considerations in the different experimental sets and suggest targetable areas to improve depth of coverage in identification-focused studies and quantification accuracy when using isobaric tags.

  1. Design and value of service oriented technologies for smart business networking

    NARCIS (Netherlands)

    Alt, R.; Smits, M.T.; Beverungen, D.; Tuunanen, T.; Wijnhoven, F.

    2014-01-01

    Business networks that effectively use technologies and outperform competing networks are known as smart business networks. Theory hypothesizes that smart business networking requires a ‘Networked Business Operating System’ (NBOS), a technological architecture consisting of business logic, that

  2. Underage Children and Social Networking

    Science.gov (United States)

    Weeden, Shalynn; Cooke, Bethany; McVey, Michael

    2013-01-01

    Despite minimum age requirements for joining popular social networking services such as Facebook, many students misrepresent their real ages and join as active participants in the networks. This descriptive study examines the use of social networking services (SNSs) by children under the age of 13. The researchers surveyed a sample of 199…

  3. Lowering the quantification limit of the QubitTM RNA HS assay using RNA spike-in.

    Science.gov (United States)

    Li, Xin; Ben-Dov, Iddo Z; Mauro, Maurizio; Williams, Zev

    2015-05-06

    RNA quantification is often a prerequisite for most RNA analyses such as RNA sequencing. However, the relatively low sensitivity and large sample consumption of traditional RNA quantification methods such as UV spectrophotometry, and even the much more sensitive fluorescence-based RNA quantification assays such as the Qubit™ RNA HS Assay, are often inadequate for measuring minute levels of RNA isolated from limited cell and tissue samples and biofluids. Thus, there is a pressing need for a more sensitive method to reliably and robustly detect trace levels of RNA without interference from DNA. To improve the quantification limit of the Qubit™ RNA HS Assay, we spiked in a known quantity of RNA to achieve the minimum reading required by the assay. Samples containing trace amounts of RNA were then added to the spike-in and measured as a reading increase over the RNA spike-in baseline. We determined the accuracy and precision of reading increases between 1 and 20 pg/μL, as well as RNA-specificity in this range, and compared them to those of RiboGreen(®), another sensitive fluorescence-based RNA quantification assay. We then applied the Qubit™ Assay with RNA spike-in to quantify plasma RNA samples. RNA spike-in improved the quantification limit of the Qubit™ RNA HS Assay 5-fold, from 25 pg/μL down to 5 pg/μL, while maintaining high specificity to RNA. This enabled quantification of RNA with original concentration as low as 55.6 pg/μL, compared to 250 pg/μL for the standard assay, and decreased sample consumption from 5 to 1 ng. Plasma RNA samples that were not measurable by the Qubit™ RNA HS Assay were measurable by our modified method. The Qubit™ RNA HS Assay with RNA spike-in is able to quantify RNA with high specificity at 5-fold lower concentration and uses 5-fold less sample quantity than the standard Qubit™ Assay.

  4. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Science.gov (United States)

    Rutledge, Robert G

    2011-03-02

    Linear regression of efficiency (LRE) introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.

  5. A Java program for LRE-based real-time qPCR that enables large-scale absolute quantification.

    Directory of Open Access Journals (Sweden)

    Robert G Rutledge

    Full Text Available BACKGROUND: Linear regression of efficiency (LRE introduced a new paradigm for real-time qPCR that enables large-scale absolute quantification by eliminating the need for standard curves. Developed through the application of sigmoidal mathematics to SYBR Green I-based assays, target quantity is derived directly from fluorescence readings within the central region of an amplification profile. However, a major challenge of implementing LRE quantification is the labor intensive nature of the analysis. FINDINGS: Utilizing the extensive resources that are available for developing Java-based software, the LRE Analyzer was written using the NetBeans IDE, and is built on top of the modular architecture and windowing system provided by the NetBeans Platform. This fully featured desktop application determines the number of target molecules within a sample with little or no intervention by the user, in addition to providing extensive database capabilities. MS Excel is used to import data, allowing LRE quantification to be conducted with any real-time PCR instrument that provides access to the raw fluorescence readings. An extensive help set also provides an in-depth introduction to LRE, in addition to guidelines on how to implement LRE quantification. CONCLUSIONS: The LRE Analyzer provides the automated analysis and data storage capabilities required by large-scale qPCR projects wanting to exploit the many advantages of absolute quantification. Foremost is the universal perspective afforded by absolute quantification, which among other attributes, provides the ability to directly compare quantitative data produced by different assays and/or instruments. Furthermore, absolute quantification has important implications for gene expression profiling in that it provides the foundation for comparing transcript quantities produced by any gene with any other gene, within and between samples.

  6. Surface Enhanced Raman Spectroscopy (SERS) methods for endpoint and real-time quantification of miRNA assays

    Science.gov (United States)

    Restaino, Stephen M.; White, Ian M.

    2017-03-01

    Surface Enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single and multianalyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS generates a comparatively low financial and spatial footprint compared with common fluorescence based systems. Despite the advantages of SERS, it has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development but, most often, assay protocols are redesigned around the use of SERS as a quantification method and ultimately complicate existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, as miRNA concentrations commonly exist in relatively low concentrations, amplification methods (e.g. PCR) are therefore required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.

  7. Bio-inspired networking

    CERN Document Server

    Câmara, Daniel

    2015-01-01

    Bio-inspired techniques are based on principles, or models, of biological systems. In general, natural systems present remarkable capabilities of resilience and adaptability. In this book, we explore how bio-inspired methods can solve different problems linked to computer networks. Future networks are expected to be autonomous, scalable and adaptive. During millions of years of evolution, nature has developed a number of different systems that present these and other characteristics required for the next generation networks. Indeed, a series of bio-inspired methods have been successfully used to solve the most diverse problems linked to computer networks. This book presents some of these techniques from a theoretical and practical point of view. Discusses the key concepts of bio-inspired networking to aid you in finding efficient networking solutions Delivers examples of techniques both in theoretical concepts and practical applications Helps you apply nature's dynamic resource and task management to your co...

  8. Network cohesion

    OpenAIRE

    Cavalcanti, Tiago Vanderlei; Giannitsarou, Chrysi; Johnson, CR

    2017-01-01

    We define a measure of network cohesion and show how it arises naturally in a broad class of dynamic models of endogenous perpetual growth with network externalities. Via a standard growth model, we show why network cohesion is crucial for conditional convergence and explain that as cohesion increases, convergence is faster. We prove properties of network cohesion and define a network aggregator that preserves network cohesion.

  9. Quantification of complex modular architecture in plants.

    Science.gov (United States)

    Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain

    2018-04-01

    Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.

  10. Quantification of prebiotics in commercial infant formulas.

    Science.gov (United States)

    Sabater, Carlos; Prodanov, Marin; Olano, Agustín; Corzo, Nieves; Montilla, Antonia

    2016-03-01

    Since breastfeeding is not always possible, infant formulas (IFs) are supplemented with prebiotic oligosaccharides, such as galactooligosaccharides (GOS) and/or fructooligosaccharides (FOS), to exert effects similar to those of breast milk. Nowadays, a great number of infant formulas enriched with prebiotics are available on the market; however, there are scarce data about their composition. In this study, two chromatographic methods (GC-FID and HPLC-RID) were used in combination to quantify the carbohydrates present in commercial infant formulas. According to the results obtained by GC-FID for products containing prebiotics, the content of FOS, GOS and GOS/FOS was in the ranges of 1.6-5.0, 1.7-3.2, and 0.08-0.25/2.3-3.8 g/100 g of product, respectively. HPLC-RID analysis allowed quantification of maltodextrins with degree of polymerization (DP) up to 19. The methodology proposed here may be used for routine quality control of infant formula and other food ingredients containing prebiotics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Seed shape quantification in the order Cucurbitales

    Directory of Open Access Journals (Sweden)

    Emilio Cervantes

    2018-02-01

    Full Text Available Seed shape quantification in diverse species of the families belonging to the order Cucurbitales is done based on the comparison of seed images with geometric figures. Quantification of seed shape is a useful tool in plant description for phenotypic characterization and taxonomic analysis. J index gives the percent of similarity of the image of a seed with a geometric figure and it is useful in taxonomy for the study of relationships between plant groups. Geometric figures used as models in the Cucurbitales are the ovoid, two ellipses with different x/y ratios and the outline of the Fibonacci spiral. The images of seeds have been compared with these figures and values of J index obtained. The results obtained for 29 species in the family Cucurbitaceae support a relationship between seed shape and species ecology. Simple seed shape, with images resembling simple geometric figures like the ovoid, ellipse or the Fibonacci spiral, may be a feature in the basal clades of taxonomic groups.

  12. Virus detection and quantification using electrical parameters

    Science.gov (United States)

    Ahmad, Mahmoud Al; Mustafa, Farah; Ali, Lizna M.; Rizvi, Tahir A.

    2014-10-01

    Here we identify and quantitate two similar viruses, human and feline immunodeficiency viruses (HIV and FIV), suspended in a liquid medium without labeling, using a semiconductor technique. The virus count was estimated by calculating the impurities inside a defined volume by observing the change in electrical parameters. Empirically, the virus count was similar to the absolute value of the ratio of the change of the virus suspension dopant concentration relative to the mock dopant over the change in virus suspension Debye volume relative to mock Debye volume. The virus type was identified by constructing a concentration-mobility relationship which is unique for each kind of virus, allowing for a fast (within minutes) and label-free virus quantification and identification. For validation, the HIV and FIV virus preparations were further quantified by a biochemical technique and the results obtained by both approaches corroborated well. We further demonstrate that the electrical technique could be applied to accurately measure and characterize silica nanoparticles that resemble the virus particles in size. Based on these results, we anticipate our present approach to be a starting point towards establishing the foundation for label-free electrical-based identification and quantification of an unlimited number of viruses and other nano-sized particles.
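
    Restating the empirical relation described in words above as a formula, with notation introduced here for clarity rather than taken from the paper, and reading "change relative to the mock" as a simple difference (a fractional-change reading is also possible from the abstract's wording):

    ```latex
    % N: estimated virus count; n: dopant concentration; V_D: Debye volume;
    % subscripts v and m denote the virus suspension and the mock control.
    N \;\approx\; \left|\,\frac{n_{v}-n_{m}}{V_{D,v}-V_{D,m}}\,\right|
    ```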

  13. CT quantification of central airway in tracheobronchomalacia

    Energy Technology Data Exchange (ETDEWEB)

    Im, Won Hyeong; Jin, Gong Yong; Han, Young Min; Kim, Eun Young [Dept. of Radiology, Chonbuk National University Hospital, Jeonju (Korea, Republic of)

    2016-05-15

    To determine which factors help to diagnose tracheobronchomalacia (TBM) using CT quantification of the central airway. From April 2013 to July 2014, 19 patients (68.0 ± 15.0 years; 6 male, 13 female) were diagnosed with TBM on CT. As case-matched controls, 38 normal subjects (65.5 ± 21.5 years; 6 male, 13 female) were selected. All 57 subjects underwent CT at end-inspiration and end-expiration. Airway parameters of the trachea and both main bronchi were assessed using software (VIDA diagnostic). Airway parameters of TBM patients and normal subjects were compared using the Student t-test. In expiration, both wall perimeter and wall thickness in TBM patients were significantly smaller than in normal subjects (wall perimeter: trachea, 43.97 mm vs. 49.04 mm, p = 0.020; right main bronchus, 33.52 mm vs. 42.69 mm, p < 0.001; left main bronchus, 26.76 mm vs. 31.88 mm, p = 0.012; wall thickness: trachea, 1.89 mm vs. 2.22 mm, p = 0.017; right main bronchus, 1.64 mm vs. 1.83 mm, p = 0.021; left main bronchus, 1.61 mm vs. 1.75 mm, p = 0.016). Wall thinning and decreased perimeter of the central airway on expiration, as measured by CT quantification, could serve as new diagnostic indicators of TBM.

  14. Uncertainty Quantification Bayesian Framework for Porous Media Flows

    Science.gov (United States)

    Demyanov, V.; Christie, M.; Erbas, D.

    2005-12-01

    Uncertainty quantification is an increasingly important aspect of many areas of applied science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting flows of fluids through subsurface reservoirs is an example of a complex system where accuracy in prediction is needed (e.g. in the oil industry it is essential for financial reasons). Simulation of fluid flow in oil reservoirs is usually carried out using large, commercially written finite difference simulators solving conservation equations describing the multi-phase flow through the porous reservoir rocks, which is a highly computationally expensive task. This work examines a Bayesian framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed time series data. The framework is flexible for a wide range of general physical/statistical parametric models, which are used to describe the underlying hydro-geological process in its temporal dynamics. The approach is based on exploration of the parameter space and update of the prior beliefs about what the most likely model definitions are. Optimization problems for highly parametric physical models usually have multiple solutions, which affect the uncertainty of the resulting predictions. A stochastic search algorithm (e.g. a genetic algorithm) allows multiple "good enough" models to be identified in the parameter space. Furthermore, inference over the generated model ensemble via an MCMC-based algorithm evaluates the posterior probability of the generated models and quantifies the uncertainty of the predictions. Machine learning algorithms - artificial neural networks - are used to speed up the identification of regions in parameter space where good matches to observed data can be found. The adaptive nature of ANNs allows different ways of integrating them into the Bayesian framework to be developed: as direct time

  15. The network researchers' network

    DEFF Research Database (Denmark)

    Henneberg, Stephan C.; Jiang, Zhizhong; Naudé, Peter

    2009-01-01

    The Industrial Marketing and Purchasing (IMP) Group is a network of academic researchers working in the area of business-to-business marketing. The group meets every year to discuss and exchange ideas, with a conference having been held every year since 1984 (there was no meeting in 1987). In this paper, based upon the papers presented at the 22 conferences held to date, we undertake a Social Network Analysis in order to examine the degree of co-publishing that has taken place between this group of researchers. We identify the different components in this database, and examine the large main...

  16. MPLS for metropolitan area networks

    CERN Document Server

    Tan, Nam-Kee

    2004-01-01

    METROPOLITAN AREA NETWORKS AND MPLS: Requirements of Metropolitan Area Network Services; Metropolitan Area Network Overview; The Bandwidth Demand; The Metro Service Provider's Business Approaches; The Emerging Metro Customer Expectations and Needs; Some Prevailing Metro Service Opportunities; Service Aspects and Requirements; Roles of MPLS in Metropolitan Area Networks; MPLS Primer; MPLS Applications. TRAFFIC ENGINEERING ASPECTS OF METROPOLITAN AREA NETWORKS: Traffic Engineering Concepts; Network Congestion; Hyper Aggregation Problem; Easing Congestion; Network Control; Tactical versus Strategic Traffic Engineering; IP/ATM Overl...

  17. Data-independent MS/MS quantification of neuropeptides for determination of putative feeding-related neurohormones in microdialysate.

    Science.gov (United States)

    Schmerberg, Claire M; Liang, Zhidan; Li, Lingjun

    2015-01-21

    Food consumption is an important behavior that is regulated by an intricate array of neuropeptides (NPs). Although many feeding-related NPs have been identified in mammals, precise mechanisms are unclear and difficult to study in mammals, as current methods are not highly multiplexed and require extensive a priori knowledge about analytes. New advances in data-independent acquisition (DIA) MS/MS and the open-source quantification software Skyline have opened up the possibility to identify hundreds of compounds and quantify them from a single DIA MS/MS run. An untargeted DIA MS(E) quantification method using Skyline software for multiplexed, discovery-driven quantification was developed and found to produce linear calibration curves for peptides at physiologically relevant concentrations using a protein digest as internal standard. By using this method, preliminary relative quantification of the crab Cancer borealis neuropeptidome was performed, winnowing candidate NPs related to a behavior of interest in a functionally relevant manner and demonstrating the success of such a UPLC-MS(E) quantification method using the open-source software Skyline.

  18. Evaluation of Network Failure induced IPTV degradation in Metro Networks

    DEFF Research Database (Denmark)

    Wessing, Henrik; Berger, Michael Stübert; Yu, Hao

    2009-01-01

    In this paper, we evaluate future network services and classify them according to their network requirements. IPTV is used as candidate service to evaluate the performance of Carrier Ethernet OAM update mechanisms and requirements. The latter is done through quality measurements using MDI...

  19. Protection of electricity distribution networks

    CERN Document Server

    Gers, Juan M

    2004-01-01

    Written by two practicing electrical engineers, this second edition of the bestselling Protection of Electricity Distribution Networks offers both practical and theoretical coverage of the technologies, from the classical electromechanical relays to the new numerical types, which protect equipment on networks and in electrical plants. A properly coordinated protection system is vital to ensure that an electricity distribution network can operate within preset requirements for safety for individual items of equipment, staff and public, and the network overall. Suitable and reliable equipment sh

  20. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  1. Organization of complex networks

    Science.gov (United States)

    Kitsak, Maksim

    Many large complex systems can be successfully analyzed using the language of graphs and networks. Interactions between the objects in a network are treated as links connecting nodes. This approach to understanding the structure of networks is an important step toward understanding the way corresponding complex systems function. Using the tools of statistical physics, we analyze the structure of networks as they are found in complex systems such as the Internet, the World Wide Web, and numerous industrial and social networks. In the first chapter we apply the concept of self-similarity to the study of transport properties in complex networks. Self-similar or fractal networks, unlike non-fractal networks, exhibit similarity on a range of scales. We find that these fractal networks have transport properties that differ from those of non-fractal networks. In non-fractal networks, transport flows primarily through the hubs. In fractal networks, the self-similar structure requires any transport to also flow through nodes that have only a few connections. We also study, in models and in real networks, the crossover from fractal to non-fractal networks that occurs when a small number of random interactions are added by means of scaling techniques. In the second chapter we use k-core techniques to study dynamic processes in networks. The k-core of a network is the network's largest component that, within itself, exhibits all nodes with at least k connections. We use this k-core analysis to estimate the relative leadership positions of firms in the Life Science (LS) and Information and Communication Technology (ICT) sectors of industry. We study the differences in the k-core structure between the LS and the ICT sectors. We find that the lead segment (highest k-core) of the LS sector, unlike that of the ICT sector, is remarkably stable over time: once a particular firm enters the lead segment, it is likely to remain there for many years. In the third chapter we study how
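
    A k-core analysis like the one described in the second chapter can be reproduced with standard graph tooling; below is a minimal sketch on a synthetic random graph standing in for the LS/ICT firm data, which is not available here.

    ```python
    # Minimal k-core sketch on a synthetic graph (not the LS/ICT firm networks).
    import networkx as nx

    G = nx.erdos_renyi_graph(n=200, p=0.04, seed=1)   # stand-in collaboration graph

    core_number = nx.core_number(G)                   # k-shell index of every node
    k_max = max(core_number.values())
    lead_segment = nx.k_core(G, k=k_max)              # innermost (highest) k-core

    print(f"maximum core index: {k_max}")
    print(f"nodes in the lead segment: {sorted(lead_segment.nodes())[:10]} ...")
    ```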

  2. Molecular quantification of genes encoding for green-fluorescent proteins

    DEFF Research Database (Denmark)

    Felske, A; Vandieken, V; Pauling, B V

    2003-01-01

    A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification...... PCR strategy is a highly specific and sensitive way to monitor recombinant DNA in environments like the efflux of a biotechnological plant....

  3. La quantification en Kabiye: une approche linguistique | Pali ...

    African Journals Online (AJOL)

    ... which is denoted by lexical quantifiers. Quantification with specific reference is provided by different types of linguistic units (nouns, numerals, adjectives, adverbs, ideophones and verbs) in arguments/noun phrases and in the predicative phrase in the sense of Chomsky. Keywords: quantification, class, number, reference, ...

  4. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    Science.gov (United States)

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
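
    As a concrete example of the quantitative metrics mentioned above, the body-weight-normalised standard uptake value (SUV) can be computed from a tissue activity concentration, the injected dose, and the patient weight. The sketch below is a generic illustration with made-up numbers, not a procedure prescribed by the article:

```python
def standard_uptake_value(activity_conc_bq_per_ml: float,
                          injected_dose_bq: float,
                          body_weight_g: float) -> float:
    """Body-weight-normalised SUV: tissue activity concentration divided by
    the injected dose per gram of body weight (inputs assumed decay-corrected)."""
    return activity_conc_bq_per_ml / (injected_dose_bq / body_weight_g)

# Illustrative numbers only (roughly a 75 kg patient injected with ~370 MBq).
suv = standard_uptake_value(activity_conc_bq_per_ml=12_000.0,
                            injected_dose_bq=370e6,
                            body_weight_g=75_000.0)
print(f"SUV = {suv:.2f}")
```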

  5. Network cosmology.

    Science.gov (United States)

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.

  6. Exploitation of immunofluorescence for the quantification and characterization of small numbers of Pasteuria endospores.

    Science.gov (United States)

    Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G

    2006-12-01

    The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, and these mainly focus on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a threshold of detection of 210 endospores g(-1) sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.

  7. A method to quantify infectious airborne pathogens at concentrations below the threshold of quantification by culture

    Science.gov (United States)

    Cutler, Timothy D.; Wang, Chong; Hoff, Steven J.; Zimmerman, Jeffrey J.

    2013-01-01

    In aerobiology, dose-response studies are used to estimate the risk of infection to a susceptible host presented by exposure to a specific dose of an airborne pathogen. In the research setting, host- and pathogen-specific factors that affect the dose-response continuum can be accounted for by experimental design, but the requirement to precisely determine the dose of infectious pathogen to which the host was exposed is often challenging. By definition, quantification of viable airborne pathogens is based on the culture of micro-organisms, but some airborne pathogens are transmissible at concentrations below the threshold of quantification by culture. In this paper we present an approach to the calculation of exposure dose at microbiologically unquantifiable levels using an application of the “continuous-stirred tank reactor (CSTR) model” and the validation of this approach using rhodamine B dye as a surrogate for aerosolized microbial pathogens in a dynamic aerosol toroid (DAT). PMID:24082399
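
    The exposure-dose idea can be illustrated with a generic well-mixed (CSTR) chamber model, in which the airborne concentration decays exponentially with the ventilation rate and the inhaled dose is the breathing rate times the time-integral of concentration. The sketch below is a simplified stand-in for the authors' calculation; all parameter values are hypothetical:

```python
import numpy as np

def cstr_concentration(c0: float, flow_rate: float, volume: float, t: np.ndarray) -> np.ndarray:
    """Concentration in a ventilated, well-mixed (CSTR) chamber with no ongoing
    source: C(t) = C0 * exp(-(Q / V) * t)."""
    return c0 * np.exp(-(flow_rate / volume) * t)

# Hypothetical parameters (not taken from the paper).
c0 = 1.0e4            # initial airborne concentration per litre of air
Q = 30.0              # ventilation flow rate, litres per minute
V = 300.0             # chamber (toroid) volume, litres
breathing_rate = 3.0  # litres of air inhaled per minute by the exposed host

t = np.linspace(0.0, 30.0, 3001)   # 30-minute exposure, minutes
c = cstr_concentration(c0, Q, V, t)

# Inhaled dose = breathing rate x time-integral of the airborne concentration.
dose = breathing_rate * np.trapz(c, t)
print(f"estimated inhaled dose: {dose:.3g} (units of c0 per litre x litres)")
```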

  8. Preliminary Magnitude of Completeness Quantification of Improved BMKG Catalog (2008-2016) in Indonesian Region

    Science.gov (United States)

    Diantari, H. C.; Suryanto, W.; Anggraini, A.; Irnaka, T. M.; Susilanto, P.; Ngadmanto, D.

    2018-03-01

    We present a magnitude of completeness (Mc) quantification based on the improved BMKG earthquake catalog generated from the Ina-TEWS seismograph network. The Mc quantification helps to determine the lowest magnitude that can be recorded completely as a function of space and time. We use the improved BMKG earthquake catalog from 2008 to 2016, which has been converted to moment magnitude (Mw) and declustered. The value of Mc is computed by determining the point at which the Frequency Magnitude Distribution (FMD) chart begins to deviate from the Gutenberg-Richter relation. In the next step, we calculate the temporal variation of Mc and the b-value annually using the maximum likelihood method. We find that the Mc value decreases over this period and produces a varying b-value, indicating that the development of the seismograph network from 2008 to 2016 affects the value of Mc, although not significantly. We analyze the temporal variation of the Mc value and correlate it with the spatial distribution of seismographs in Indonesia. The spatial distribution of seismograph installations shows that the western part of Indonesia has denser seismograph coverage than the eastern region, whereas the eastern part of Indonesia has a higher level of seismicity than the western region. Based upon these results, additional seismograph installations in the eastern part of Indonesia should be taken into consideration.
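
    The two quantities discussed above, the magnitude of completeness and the Gutenberg-Richter b-value, are commonly estimated with the maximum-curvature heuristic and the Aki/Utsu maximum-likelihood formula b = log10(e) / (mean(M) - (Mc - dM/2)). The sketch below illustrates both on a synthetic catalogue, not BMKG data; the binning and random seed are arbitrary:

```python
import numpy as np

def mc_maxc(mags: np.ndarray, bin_width: float = 0.1) -> float:
    """Magnitude of completeness via the maximum-curvature heuristic:
    the most populated magnitude bin of the frequency-magnitude distribution."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    return edges[np.argmax(counts)]

def b_value_mle(mags: np.ndarray, mc: float, bin_width: float = 0.1) -> float:
    """Aki/Utsu maximum-likelihood b-value for events with M >= Mc,
    with the usual half-bin correction for binned magnitudes."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - bin_width / 2.0))

# Synthetic catalogue for illustration only (generated with b = 1.0), not BMKG data.
rng = np.random.default_rng(0)
mags = np.round(4.0 + rng.exponential(scale=1.0 / np.log(10), size=5000), 1)

mc = mc_maxc(mags)
print(f"Mc ~ {mc:.1f}, b-value ~ {b_value_mle(mags, mc):.2f}")
```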

  9. Security for 5G Mobile Wireless Networks

    OpenAIRE

    Fang, Dongfeng; Qian, Yi; Qingyang Hu, Rose

    2017-01-01

    The advanced features of 5G mobile wireless network systems yield new security requirements and challenges. This paper presents a comprehensive survey on the security of 5G wireless network systems compared to traditional cellular networks. The paper starts with a review of the particularities of 5G wireless networks as well as of the new requirements and motivations of 5G wireless security. The potential attacks and security services, with consideration of the new service requirements and new use cases, ...

  10. Survey and Evaluate Uncertainty Quantification Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  11. Future Network Architectures

    DEFF Research Database (Denmark)

    Wessing, Henrik; Bozorgebrahimi, Kurosh; Belter, Bartosz

    2015-01-01

    This study identifies key requirements for NRENs towards future network architectures that become apparent as users become more mobile and have increased expectations in terms of availability of data. In addition, cost saving requirements call for federated use of, in particular, the optical...

  12. How women organize social networks different from men.

    Science.gov (United States)

    Szell, Michael; Thurner, Stefan

    2013-01-01

    Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways.
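
    Group-level network measures such as the clustering coefficient reported above can be computed per gender with a standard graph library. The sketch below uses networkx on a small built-in graph with a toy gender attribute, purely to illustrate the bookkeeping; it is not the online-game dataset from the study:

```python
import networkx as nx

# Small built-in graph with a toy gender attribute; the study used a complete
# online-game dataset of about 300,000 players.
G = nx.karate_club_graph()
gender = {n: ("f" if n % 2 == 0 else "m") for n in G}
nx.set_node_attributes(G, gender, "gender")

clustering = nx.clustering(G)
for g in ("f", "m"):
    nodes = [n for n in G if G.nodes[n]["gender"] == g]
    mean_clustering = sum(clustering[n] for n in nodes) / len(nodes)
    mean_degree = sum(d for _, d in G.degree(nodes)) / len(nodes)
    print(f"group {g}: mean clustering {mean_clustering:.3f}, mean degree {mean_degree:.2f}")
```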

  13. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    Science.gov (United States)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.
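
    A nonintrusive, black-box treatment of uncertainty can be illustrated with plain Monte Carlo sampling over the uncertain inputs of a surrogate solver. The sketch below is a generic stand-in, not the paper's method (which treats aleatoric and epistemic sources more carefully); the solver, input distributions and sample size are hypothetical:

```python
import numpy as np

def flow_solver(mach: float, wall_temp: float) -> float:
    """Stand-in for the 'black box' CFD solver: returns a scalar quantity of
    interest as a function of two uncertain inputs.  A real workflow would
    launch the flow solver and post-process its output instead."""
    return 2.5 * mach + 0.001 * wall_temp + 0.05 * mach * np.sin(wall_temp / 100.0)

rng = np.random.default_rng(1)
n_samples = 500

# Aleatoric (random) input: inflow Mach number with measurement scatter.
mach = rng.normal(loc=2.5, scale=0.05, size=n_samples)
# Epistemic (lack-of-knowledge) input: wall temperature only known to lie in a range.
wall_temp = rng.uniform(low=290.0, high=400.0, size=n_samples)

qoi = np.array([flow_solver(m, t) for m, t in zip(mach, wall_temp)])
print(f"QoI mean = {qoi.mean():.3f}, std = {qoi.std(ddof=1):.3f}")
print(f"central 95% of samples: [{np.percentile(qoi, 2.5):.3f}, {np.percentile(qoi, 97.5):.3f}]")
```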

  14. Better sales networks.

    Science.gov (United States)

    Ustüner, Tuba; Godes, David

    2006-01-01

    Anyone in sales will tell you that social networks are critical. The more contacts you have, the more leads you'll generate, and, ultimately, the more sales you'll make. But that's a vast oversimplification. Different configurations of networks produce different results, and the salesperson who develops a nuanced understanding of social networks will outshine competitors. The salesperson's job changes over the course of the selling process. Different abilities are required in each stage of the sale: identifying prospects, gaining buy-in from potential customers, creating solutions, and closing the deal. Success in the first stage, for instance, depends on the salesperson acquiring precise and timely information about opportunities from contacts in the marketplace. Closing the deal requires the salesperson to mobilize contacts from prior sales to act as references. Managers often view sales networks only in terms of direct contacts. But someone who knows lots of people doesn't necessarily have an effective network because networks often pay off most handsomely through indirect contacts. Moreover, the density of the connections in a network is important. Do a salesperson's contacts know all the same people, or are their associates widely dispersed? Sparse networks are better, for example, at generating unique information. Managers can use three levers--sales force structure, compensation, and skills development--to encourage salespeople to adopt a network-based view and make the best possible use of social webs. For example, the sales force can be restructured to decouple lead generation from other tasks because some people are very good at building diverse ties but not so good at maintaining other kinds of networks. Companies that take steps of this kind to help their sales teams build better networks will reap tremendous advantages.

  15. Evolution of a residue laboratory network and the management tools for monitoring its performance.

    Science.gov (United States)

    Lins, E S; Conceição, E S; Mauricio, A De Q

    2012-01-01

    Since 2005 the National Residue & Contaminants Control Plan (NRCCP) in Brazil has been considerably enhanced, increasing the number of samples, substances and species monitored, and also the analytical detection capability. The Brazilian laboratory network was forced to improve its quality standards in order to comply with the NRCCP's own evolution. Many aspects such as the limits of quantification (LOQs), the quality management systems within the laboratories and appropriate method validation are in continuous improvement, generating new scenarios and demands. Thus, efficient management mechanisms for monitoring network performance and its adherence to the established goals and guidelines are required. Performance indicators associated with computerised information systems arise as a powerful tool to monitor the laboratories' activity, making use of different parameters to describe this activity on a day-to-day basis. One of these parameters is related to turnaround times, and this factor is highly affected by the way each laboratory organises its management system, as well as by the regulatory requirements. In this paper a global view is presented of the turnaround times related to the type of analysis, laboratory, number of samples per year, type of matrix, country region and period of the year, all these data being collected from a computerised system called SISRES. This information gives a solid background to management measures aiming at the improvement of the service offered by the laboratory network.

  16. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    Full Text Available The importance of lipids for cell function and health has been widely recognized; for example, a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, though not huge, number of mutually correlated measured variables, and their associations with outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, paying particular attention to the left-censored missing values typical of a wide range of data sets in the life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: those who had a fatal CVD event and those who remained stable during the two-year follow-up.
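
    The permutation-test idea can be sketched with a simplified association score. In the sketch below, Pearson correlation stands in for the partial-least-squares association scores used in the study and the imputation step is omitted; the groups, data and seeds are synthetic:

```python
import numpy as np

def differential_association(x, y, groups, n_perm=5000, seed=0):
    """Permutation test for a difference in the association (here: Pearson
    correlation) between two lipids x and y across two patient groups (0/1).
    Group labels are shuffled to build the null distribution of the difference."""
    rng = np.random.default_rng(seed)

    def score_diff(labels):
        r0 = np.corrcoef(x[labels == 0], y[labels == 0])[0, 1]
        r1 = np.corrcoef(x[labels == 1], y[labels == 1])[0, 1]
        return r1 - r0

    observed = score_diff(groups)
    null = np.array([score_diff(rng.permutation(groups)) for _ in range(n_perm)])
    p_value = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)
    return observed, p_value

# Synthetic data: one lipid pair in 60 patients, 30 per group.
rng = np.random.default_rng(42)
groups = np.repeat([0, 1], 30)
x = rng.normal(size=60)
y = np.where(groups == 0, 0.8 * x, -0.1 * x) + rng.normal(scale=0.5, size=60)
diff, p = differential_association(x, y, groups)
print(f"observed score difference = {diff:.2f}, permutation p-value = {p:.4f}")
```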

  17. Quantification Methods of Management Skills in Shipping

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2012-04-01

    Full Text Available Romania can not overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA, compared with ten other major competitors in the same domain, using financial information of these companies during the years 2005-2010. For carrying out the work it will be used quantification methods of managerial skills to CNFR NAVROM SA Galati, Romania, as example mentioning the analysis of financial performance management based on profitability ratios, net profit margin, suppliers management, turnover.

  18. Recurrence quantification analysis of global stock markets

    Science.gov (United States)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
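
    Recurrence quantification starts from a binary recurrence matrix of a time-delay embedded series; the recurrence rate is the simplest RQA measure, and determinism and laminarity are built from diagonal and vertical line structures in the same matrix. The sketch below is a minimal illustration on synthetic returns, not the stock indices analysed in the study:

```python
import numpy as np

def recurrence_matrix(x: np.ndarray, dim: int = 3, delay: int = 1,
                      eps: float = 0.1) -> np.ndarray:
    """Binary recurrence matrix of a time-delay embedded scalar series:
    R[i, j] = 1 when embedded states i and j are closer than eps."""
    n = len(x) - (dim - 1) * delay
    states = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists < eps).astype(int)

def recurrence_rate(R: np.ndarray) -> float:
    """Fraction of recurrent points, the most basic RQA measure."""
    return R.sum() / R.size

# Synthetic log-return series (not a real stock index).
rng = np.random.default_rng(3)
returns = rng.normal(scale=0.01, size=500)
R = recurrence_matrix(returns, dim=3, delay=1, eps=0.02)
print(f"recurrence rate = {recurrence_rate(R):.3f}")
```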

  19. Recurrence quantification analysis theory and best practices

    CERN Document Server

    Webber, Charles L., Jr.; Marwan, Norbert

    2015-01-01

    The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field.  Over the past decades recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines.   This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis.  Edited and authored by leading researcher in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.

  20. Convex geometry of quantum resource quantification

    Science.gov (United States)

    Regula, Bartosz

    2018-01-01

    We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof-extended negativity, a measure of k-coherence which generalises the ...

  1. Kinetic quantification of plyometric exercise intensity.

    Science.gov (United States)

    Ebben, William P; Fauth, McKenzie L; Garceau, Luke R; Petushek, Erich J

    2011-12-01

    Ebben, WP, Fauth, ML, Garceau, LR, and Petushek, EJ. Kinetic quantification of plyometric exercise intensity. J Strength Cond Res 25(12): 3288-3298, 2011-Quantification of plyometric exercise intensity is necessary to understand the characteristics of these exercises and the proper progression of this mode of exercise. The purpose of this study was to assess the kinetic characteristics of a variety of plyometric exercises. This study also sought to assess gender differences in these variables. Twenty-six men and 23 women with previous experience in performing plyometric training served as subjects. The subjects performed a variety of plyometric exercises including line hops, 15.24-cm cone hops, squat jumps, tuck jumps, countermovement jumps (CMJs), loaded CMJs equal to 30% of 1 repetition maximum squat, depth jumps normalized to the subject's jump height (JH), and single leg jumps. All plyometric exercises were assessed with a force platform. Outcome variables associated with the takeoff, airborne, and landing phase of each plyometric exercise were evaluated. These variables included the peak vertical ground reaction force (GRF) during takeoff, the time to takeoff, flight time, JH, peak power, landing rate of force development, and peak vertical GRF during landing. A 2-way mixed analysis of variance with repeated measures for plyometric exercise type demonstrated main effects for exercise type and all outcome variables (p ≤ 0.05) and for the interaction between gender and peak vertical GRF during takeoff (p ≤ 0.05). Bonferroni-adjusted pairwise comparisons identified a number of differences between the plyometric exercises for the outcome variables assessed (p ≤ 0.05). These findings can be used to guide the progression of plyometric training by incorporating exercises of increasing intensity over the course of a program.
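
    Two of the jump-related quantities mentioned above can be recovered from force-platform data with standard relations: jump height from flight time (h = g t^2 / 8, assuming equal take-off and landing height) or from take-off velocity obtained by integrating the net vertical GRF (h = v^2 / 2g). The sketch below shows both with illustrative numbers:

```python
G_ACC = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(flight_time_s: float) -> float:
    """Jump height from flight time, assuming take-off and landing occur at the
    same height: h = g * t^2 / 8."""
    return G_ACC * flight_time_s ** 2 / 8.0

def jump_height_from_takeoff_velocity(v_takeoff_m_s: float) -> float:
    """Jump height from vertical take-off velocity (obtained by integrating the
    net vertical GRF over the take-off phase): h = v^2 / (2 * g)."""
    return v_takeoff_m_s ** 2 / (2.0 * G_ACC)

# Illustrative values: a 0.55 s flight time or a 2.7 m/s take-off velocity
# both correspond to a jump height of roughly 0.37 m.
print(f"{jump_height_from_flight_time(0.55):.3f} m")
print(f"{jump_height_from_takeoff_velocity(2.7):.3f} m")
```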

  2. Quantification of heterogeneity observed in medical images

    International Nuclear Information System (INIS)

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity

  3. Quantification of heterogeneity observed in medical images.

    Science.gov (United States)

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.

  4. Networking at NASA. Johnson Space Center

    Science.gov (United States)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) is presented. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, network evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  5. NASA Integrated Network COOP

    Science.gov (United States)

    Anderson, Michael L.; Wright, Nathaniel; Tai, Wallace

    2012-01-01

    Natural disasters, terrorist attacks, civil unrest, and other events have the potential of disrupting mission-essential operations in any space communications network. NASA's Space Communications and Navigation office (SCaN) is in the process of studying options for integrating the three existing NASA network elements, the Deep Space Network, the Near Earth Network, and the Space Network, into a single integrated network with common services and interfaces. The need to maintain Continuity of Operations (COOP) after a disastrous event has a direct impact on the future network design and operations concepts. The SCaN Integrated Network will provide support to a variety of user missions. The missions have diverse requirements and include anything from earth based platforms to planetary missions and rovers. It is presumed that an integrated network, with common interfaces and processes, provides an inherent advantage to COOP in that multiple elements and networks can provide cross-support in a seamless manner. The results of trade studies support this assumption but also show that centralization as a means of achieving integration can result in single points of failure that must be mitigated. The cost to provide this mitigation can be substantial. In support of this effort, the team evaluated the current approaches to COOP, developed multiple potential approaches to COOP in a future integrated network, evaluated the interdependencies of the various approaches to the various network control and operations options, and did a best value assessment of the options. The paper will describe the trade space, the study methods, and results of the study.

  6. Ionic network analysis of tectosilicates: the example of coesite at variable pressure.

    Science.gov (United States)

    Reifenberg, Melina; Thomas, Noel W

    2018-04-01

    The method of ionic network analysis [Thomas (2017). Acta Cryst. B73, 74-86] is extended to tectosilicates through the example of coesite, the high-pressure polymorph of SiO 2 . The structural refinements of Černok et al. [Z. Kristallogr. (2014), 229, 761-773] are taken as the starting point for applying the method. Its purpose is to predict the unit-cell parameters and atomic coordinates at (p-T-X) values in-between those of diffraction experiments. The essential development step for tectosilicates is to define a pseudocubic parameterization of the O 4 cages of the SiO 4 tetrahedra. The six parameters a PC , b PC , c PC , α PC , β PC and γ PC allow a full quantification of the tetrahedral structure, i.e. distortion and enclosed volume. Structural predictions for coesite require that two separate quasi-planar networks are defined, one for the silicon ions and the other for the O 4 cage midpoints. A set of parametric curves is used to describe the evolution with pressure of these networks and the pseudocubic parameters. These are derived by fitting to the crystallographic data. Application of the method to monoclinic feldspars and to quartz and cristobalite is discussed. Further, a novel two-parameter quantification of the degree of tetrahedral distortion is described. At pressures in excess of ca 20.45 GPa it is not possible to find a self-consistent solution to the parametric curves for coesite, pointing to the likelihood of a phase transition.

  7. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements - and an effective system for managing them - the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text, now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cycle.

  8. Circuit switched optical networks

    DEFF Research Database (Denmark)

    Kloch, Allan

    2003-01-01

    Some of the most important components required for enabling optical networking are investigated through both experiments and modelling. These all-optical components are the wavelength converter, the regenerator and the space switch. When these devices become "off-the-shelf" products, optical cross......, it is expected that the optical solution will offer an economic benefit for high bit rate networks. This thesis begins with a discussion of the expected impact on communications systems from the rapidly growing IP traffic, which is expected to become the dominant source of traffic. IP traffic has some...... characteristics, which are best supported by an optical network. Interest in such an optical network is exemplified by the formation of the ACTS OPEN project, whose aim was to investigate the feasibility of an optical network covering Europe. Part of the work presented in this thesis is carried out within...

  9. Maintenance of family networks

    DEFF Research Database (Denmark)

    marsico, giuseppina; Chaudhary, N; Valsiner, Jaan

    2015-01-01

    Families are social units that expand in time (across generations) and space (as a geographically distributed sub-structures of wider kinship networks). Understanding of intergenerational family relations thus requires conceptualization of communication processes that take place within a small...... collective of persons linked with one another by a flexible social network. Within such networks, Peripheral Communication Patterns set the stage for direct everyday life activities within the family context. Peripheral Communication Patterns are conditions where one family network member (A) communicates...... manifestly with another member (B) with the aim of bringing the communicative message to the third member (C) who is present but is not explicitly designated as the manifest addressee of the intended message. Inclusion of physically non-present members of the family network (elders living elsewhere, deceased...

  10. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome

    Directory of Open Access Journals (Sweden)

    Dewey Colin N

    2011-08-01

    Full Text Available Abstract Background RNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments. Results We present RSEM, a user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene. Conclusions RSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost
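
    The core of reference-free quantification tools of this kind is an expectation-maximisation loop that probabilistically allocates ambiguously mapping reads among transcripts. The sketch below is a heavily simplified toy version with a uniform read model, no fragment-length or bias terms, and a hypothetical compatibility matrix; it is not RSEM's actual implementation:

```python
import numpy as np

def em_abundances(compat: np.ndarray, lengths: np.ndarray, n_iter: int = 200) -> np.ndarray:
    """Toy EM for isoform abundance estimation.

    compat[r, t] is 1 if read r aligns to transcript t, else 0.  Returns the
    estimated fraction of reads originating from each transcript.  Real tools
    additionally model fragment lengths, positional/sequence biases and
    alignment qualities.
    """
    n_reads, n_tx = compat.shape
    theta = np.full(n_tx, 1.0 / n_tx)     # initial abundance estimates
    eff_len = lengths.astype(float)       # effective lengths (simplified)

    for _ in range(n_iter):
        # E-step: posterior probability that each read came from each transcript.
        weights = compat * (theta / eff_len)
        weights /= weights.sum(axis=1, keepdims=True)
        # M-step: re-estimate abundances from the expected read counts.
        theta = weights.sum(axis=0)
        theta /= theta.sum()
    return theta

# Three transcripts, five reads; some reads map ambiguously (hypothetical data).
compat = np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 0],
                   [0, 1, 1],
                   [0, 1, 1]])
lengths = np.array([1000, 1500, 500])
print(em_abundances(compat, lengths).round(3))
```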

  11. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  12. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Directory of Open Access Journals (Sweden)

    Muhammad Imran Babar

    Full Text Available Value-based requirements engineering plays a vital role in the development of value-based software (VBS. Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  13. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error. PMID:25799490

  14. Software defined access networks

    OpenAIRE

    Maricato, José Miguel Duarte

    2016-01-01

    With the increase in internet usage and the exponential growth of bandwidth consumption, driven by the increasing number of users of new-generation equipment and the creation of new services that consume ever higher bandwidths, it is necessary to find solutions to meet these new requirements. Passive optical networks (PONs) promise to solve these problems by providing a better service to users and providers. PON networks are very attractive since they don't depend on a...

  15. Temporal networks

    CERN Document Server

    Saramäki, Jari

    2013-01-01

    The concept of temporal networks is an extension of complex networks as a modeling framework to include information on when interactions between nodes happen. Many studies of the last decade examine how the static network structure affect dynamic systems on the network. In this traditional approach  the temporal aspects are pre-encoded in the dynamic system model. Temporal-network methods, on the other hand, lift the temporal information from the level of system dynamics to the mathematical representation of the contact network itself. This framework becomes particularly useful for cases where there is a lot of structure and heterogeneity both in the timings of interaction events and the network topology. The advantage compared to common static network approaches is the ability to design more accurate models in order to explain and predict large-scale dynamic phenomena (such as, e.g., epidemic outbreaks and other spreading phenomena). On the other hand, temporal network methods are mathematically and concept...

  16. Interconnected networks

    CERN Document Server

    2016-01-01

    This volume provides an introduction to and overview of the emerging field of interconnected networks which include multi layer or multiplex networks, as well as networks of networks. Such networks present structural and dynamical features quite different from those observed in isolated networks. The presence of links between different networks or layers of a network typically alters the way such interconnected networks behave – understanding the role of interconnecting links is therefore a crucial step towards a more accurate description of real-world systems. While examples of such dissimilar properties are becoming more abundant – for example regarding diffusion, robustness and competition – the root of such differences remains to be elucidated. Each chapter in this topical collection is self-contained and can be read on its own, thus making it also suitable as reference for experienced researchers wishing to focus on a particular topic.

  17. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show that a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures, whereby uncertainty information is propagated at the lowest levels of the simulation code, improving memory access patterns, exposing new dimensions of fine-grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as computational simulation codes to which they are applied.

  18. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography

    Science.gov (United States)

    Venhuizen, Freerk G.; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2018-01-01

    We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies. PMID:29675301
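
    The Dice coefficient reported above measures the overlap between a predicted and a reference segmentation. The sketch below shows the standard computation on tiny binary masks; the arrays are illustrative only:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between a predicted and a reference binary segmentation:
    2 * |A intersection B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Tiny illustrative masks; a real evaluation uses full OCT B-scan segmentations.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [0, 0, 0]])
print(f"Dice = {dice_coefficient(pred, truth):.3f}")  # 2 * 2 / (3 + 3) = 0.667
```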

  19. VRML metabolic network visualizer.

    Science.gov (United States)

    Rojdestvenski, Igor

    2003-03-01

    A successful data collection visualization should satisfy a set of many requirements: unification of diverse data formats, support for serendipity research, support of hierarchical structures, algorithmizability, vast information density, Internet-readiness, and others. Recently, virtual reality has made significant progress in engineering, architectural design, entertainment and communication. We experiment with the possibility of using immersive abstract three-dimensional visualizations of metabolic networks. We present the trial Metabolic Network Visualizer software, which produces a graphical representation of a metabolic network as a VRML world from a formal description written in a simple SGML-type scripting language.

  20. Evolving production network structures

    DEFF Research Database (Denmark)

    Grunow, Martin; Gunther, H.O.; Burdenik, H.

    2007-01-01

    When deciding about future production network configurations, the current structures have to be taken into account. Further, core issues such as the maturity of the products and the capacity requirements for test runs and ramp-ups must be incorporated. Our approach is based on optimization...... modelling and assigns products and capacity expansions to production sites under the above constraints. It also considers the production complexity at the individual sites and the flexibility of the network. Our implementation results for a large manufacturing network reveal substantial possible cost...

  1. Optical storage networking

    Science.gov (United States)

    Mohr, Ulrich

    2001-11-01

    For efficient business continuance and backup of mission-critical data an inter-site storage network is required. Where traditional telecommunications costs are prohibitive for all but the largest organizations, there is an opportunity for regional carriers to deliver an innovative storage service. This session reveals how a combination of optical networking and protocol-aware SAN gateways can provide an extended storage networking platform with the lowest cost of ownership and the highest possible degree of reliability, security and availability. Companies of every size, with mainframe and open-systems environments, can afford to use this integrated service. Three major applications are explained: channel extension, Network Attached Storage (NAS), Storage Area Networks (SAN), and how optical networks address their specific requirements. One advantage of DWDM is the ability for protocols such as ESCON, Fibre Channel, ATM and Gigabit Ethernet to be transported natively and simultaneously across a single fiber pair, and the ability to multiplex many individual fiber pairs over a single pair, thereby reducing fiber cost and recovering fiber pairs already in use. An optical storage network enables a new class of service providers, Storage Service Providers (SSPs), aiming to deliver value to the enterprise by managing storage, backup, replication and restoration as an outsourced service.

  2. Automated quantification of epicardial adipose tissue using CT angiography: evaluation of a prototype software

    International Nuclear Information System (INIS)

    Spearman, James V.; Silverman, Justin R.; Krazinski, Aleksander W.; Costello, Philip; Meinel, Felix G.; Geyer, Lucas L.; Schoepf, U.J.; Apfaltrer, Paul; Canstein, Christian; De Cecco, Carlo Nicola

    2014-01-01

    This study evaluated the performance of a novel automated software tool for epicardial fat volume (EFV) quantification compared to a standard manual technique at coronary CT angiography (cCTA). cCTA data sets of 70 patients (58.6 ± 12.9 years, 33 men) were retrospectively analysed using two different post-processing software applications. Observer 1 performed a manual single-plane pericardial border definition and EFV M segmentation (manual approach). Two observers used a software program with fully automated 3D pericardial border definition and EFV A calculation (automated approach). EFV and time required for measuring EFV (including software processing time and manual optimization time) for each method were recorded. Intraobserver and interobserver reliability was assessed on the prototype software measurements. T test, Spearman's rho, and Bland-Altman plots were used for statistical analysis. The final EFV A (with manual border optimization) was strongly correlated with the manual axial segmentation measurement (60.9 ± 33.2 mL vs. 65.8 ± 37.0 mL, rho = 0.970, P 0.9). Automated EFV A quantification is an accurate and time-saving method for quantification of EFV compared to established manual axial segmentation methods. (orig.)

  3. Spatially resolved quantification of agrochemicals on plant surfaces using energy dispersive X-ray microanalysis.

    Science.gov (United States)

    Hunsche, Mauricio; Noga, Georg

    2009-12-01

    In the present study the principle of energy dispersive X-ray microanalysis (EDX), i.e. the detection of elements based on their characteristic X-rays, was used to localise and quantify organic and inorganic pesticides on enzymatically isolated fruit cuticles. Pesticides could be discriminated from the plant surface because of their distinctive elemental composition. Findings confirm the close relation between net intensity (NI) and area covered by the active ingredient (AI area). Using wide and narrow concentration ranges of glyphosate and glufosinate, respectively, results showed that quantification of AI requires the selection of appropriate regression equations while considering NI, peak-to-background (P/B) ratio, and AI area. The use of selected internal standards (ISs) such as Ca(NO(3))(2) improved the accuracy of the quantification slightly but led to the formation of particular, non-typical microstructured deposits. The suitability of SEM-EDX as a general technique to quantify pesticides was evaluated additionally on 14 agrochemicals applied at diluted or regular concentration. Among the pesticides tested, spatial localisation and quantification of AI amount could be done for inorganic copper and sulfur as well for the organic agrochemicals glyphosate, glufosinate, bromoxynil and mancozeb. (c) 2009 Society of Chemical Industry.

  4. Human DNA quantification and sample quality assessment: Developmental validation of the PowerQuant(®) system.

    Science.gov (United States)

    Ewing, Margaret M; Thompson, Jonelle M; McLaren, Robert S; Purpero, Vincent M; Thomas, Kelli J; Dobrowski, Patricia A; DeGroot, Gretchen A; Romsos, Erica L; Storts, Douglas R

    2016-07-01

    Quantification of the total amount of human DNA isolated from a forensic evidence item is crucial for DNA normalization prior to short tandem repeat (STR) DNA analysis and a federal quality assurance standard requirement. Previous commercial quantification methods determine the total human DNA and total human male DNA concentrations, but provide limited information about the condition of the DNA sample. The PowerQuant(®) System includes targets for quantification of total human and total human male DNA as well as targets for evaluating whether the human DNA is degraded and/or PCR inhibitors are present in the sample. A developmental validation of the PowerQuant(®) System was completed, following SWGDAM Validation Guidelines, to evaluate the assay's specificity, sensitivity, precision and accuracy, as well as the ability to detect degraded DNA or PCR inhibitors. In addition to the total human DNA and total human male DNA concentrations in a sample, data from the degradation target and internal PCR control (IPC) provide a forensic DNA analyst meaningful information about the quality of the isolated human DNA and the presence of PCR inhibitors in the sample that can be used to determine the most effective workflow and assist downstream interpretation. Copyright © 2016 The Author(s). Published by Elsevier Ireland Ltd.. All rights reserved.

  5. Quantification of DNA in Neonatal Dried Blood Spots by Adenine Tandem Mass Spectrometry.

    Science.gov (United States)

    Durie, Danielle; Yeh, Ed; McIntosh, Nathan; Fisher, Lawrence; Bulman, Dennis E; Birnboim, H Chaim; Chakraborty, Pranesh; Al-Dirbashi, Osama Y

    2018-01-02

    Newborn screening programs have expanded to include molecular-based assays as first-tier tests and the success of these assays depends on the quality and yield of DNA extracted from neonatal dried blood spots (DBS). To meet high throughput and rapid turnaround time requirements, newborn screening laboratories adopted rapid DNA extraction methods that produce crude extracts. Quantification of DNA in neonatal DBS is not routinely performed due to technical challenges; however, this may enhance the performance of assays that are sensitive to amounts of input DNA. In this study, we developed a novel high throughput method to quantify total DNA in DBS. It is based on specific acid-catalyzed depurination of DNA followed by mass spectrometric quantification of adenine. The amount of adenine was used to calculate DNA quantity per 3.2 mm DBS. Reference intervals were established using archived neonatal DBS (n = 501) and a median of 130.6 ng of DNA per DBS was obtained, which is in agreement with literature values. Intra- and interday variation was evaluated, and the limits of detection and quantification were 12.5 and 37.8 nmol/L adenine, respectively. We demonstrated that DNA from neonatal DBS can be successfully quantified in high throughput settings using instruments currently deployed in NBS laboratories.

  6. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that an inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with inbuilt graphic capture board provides versatile, easy to use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  7. Absolute Quantification of Toxicological Biomarkers via Mass Spectrometry.

    Science.gov (United States)

    Lau, Thomas Y K; Collins, Ben C; Stone, Peter; Tang, Ning; Gallagher, William M; Pennington, Stephen R

    2017-01-01

    With the advent of "-omics" technologies there has been an explosion of data generation in the field of toxicology, as well as many others. As new candidate biomarkers of toxicity are being regularly discovered, the next challenge is to validate these observations in a targeted manner. Traditionally, these validation experiments have been conducted using antibody-based technologies such as Western blotting, ELISA, and immunohistochemistry. However, this often produces a significant bottleneck as the time, cost, and development of successful antibodies are often far outpaced by the generation of targets of interest. In response to this, there recently have been several developments in the use of triple quadrupole (QQQ) mass spectrometry (MS) as a platform to provide quantification of proteins. This technology does not require antibodies; it is typically less expensive and quicker to develop assays and has the opportunity for more accessible multiplexing. The speed of these experiments combined with their flexibility and ability to multiplex assays makes the technique a valuable strategy to validate biomarker discovery.

  8. Forensic Uncertainty Quantification of Explosive Dispersal of Particles

    Science.gov (United States)

    Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho

    2017-06-01

    In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued with poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments to discover possible sources of uncertainty that may have been missed in initial design of experiments or under-reported. The current experience of the authors has found that by making an analogy to crime scene investigation when looking at validation experiments, valuable insights may be gained. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increases the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.

  9. Towards an uncertainty quantification methodology with CASMO-5

    International Nuclear Information System (INIS)

    Wieselquist, W.; Vasiliev, A.; Ferroukhi, H.

    2011-01-01

    We present the development of an uncertainty quantification (UQ) methodology for the CASMO-5 lattice physics code, used extensively at the Paul Scherrer Institut for standalone neutronics calculations, as well as for the generation of nuclear fuel segment libraries for the downstream core simulator, SIMULATE-3. We focus here on propagation of nuclear data uncertainties and describe the framework required for 'black box' UQ; in this case minor modifications of the code are necessary to allow perturbation of the CASMO-5 nuclear data library. We then implement a basic first-order UQ method, direct perturbation, which directly produces sensitivity coefficients and, when folded with the input nuclear data variance-covariance matrix (VCM), yields output uncertainties in the form of an output VCM. We discuss the implementation, including how to map the VCMs of a different group structure to the code library group structure (in our case the ENDF/B-VII-based 586-group library in CASMO-5), present some results for pin cell calculations, and conclude with future work. (author)
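
    The direct-perturbation approach described here produces a sensitivity matrix that is folded with the input variance-covariance matrix (VCM) to give an output VCM; to first order this is the familiar "sandwich" formula V_out = S V_in S^T. A minimal sketch with made-up numbers (not CASMO-5 data) follows.

        import numpy as np

        # Hypothetical sensitivity matrix S (2 responses x 3 nuclear-data parameters),
        # e.g. obtained by direct perturbation: S[i, j] = d(response_i) / d(parameter_j).
        S = np.array([[0.8, -0.2, 0.1],
                      [0.3,  0.5, -0.4]])

        # Hypothetical input variance-covariance matrix of the nuclear data.
        V_in = np.array([[4.0e-4, 1.0e-4, 0.0],
                         [1.0e-4, 9.0e-4, 0.0],
                         [0.0,    0.0,    1.6e-3]])

        # First-order ("sandwich") propagation to the output covariance matrix.
        V_out = S @ V_in @ S.T
        print("output standard deviations:", np.sqrt(np.diag(V_out)))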

  10. Coarse graining for synchronization in directed networks

    Science.gov (United States)

    Zeng, An; Lü, Linyuan

    2011-05-01

    Coarse-graining models are a promising way to analyze and visualize large-scale networks. The coarse-grained networks are required to preserve the statistical properties as well as the dynamic behaviors of the initial networks. Some methods have been proposed and found effective for undirected networks, while the coarse graining of directed networks has received little attention. In this paper we propose a path-based coarse-graining (PCG) method to coarse grain directed networks. Performing the linear stability analysis of synchronization and numerical simulation of the Kuramoto model on four kinds of directed networks, including tree networks and variants of Barabási-Albert networks, Watts-Strogatz networks, and Erdös-Rényi networks, we find our method can effectively preserve the network synchronizability.
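
    A minimal numerical simulation of the Kuramoto model on a directed network, of the kind used above to compare synchronizability before and after coarse graining, could be sketched as follows; the adjacency matrix, coupling strength and natural frequencies are all invented.

        import numpy as np

        rng = np.random.default_rng(0)
        N, K, dt, steps = 50, 2.0, 0.01, 5000

        A = (rng.random((N, N)) < 0.1).astype(float)   # random directed adjacency matrix
        np.fill_diagonal(A, 0.0)
        omega = rng.normal(0.0, 1.0, N)                # natural frequencies
        theta = rng.uniform(0.0, 2.0 * np.pi, N)       # initial phases

        for _ in range(steps):                         # explicit Euler integration
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta += dt * (omega + (K / N) * coupling)

        r = abs(np.exp(1j * theta).mean())             # Kuramoto order parameter
        print(f"order parameter r = {r:.3f}")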

  11. Network maintenance

    CERN Multimedia

    GS Department

    2009-01-01

    A site-wide network maintenance operation has been scheduled for Saturday 28 February. Most of the network devices of the general purpose network will be upgraded to a newer software version, in order to improve our network monitoring capabilities. This will result in a series of short (2-5 minute) random interruptions everywhere on the CERN sites throughout the day. This upgrade will not affect the Computer Centre itself, Building 613, the Technical Network or the LHC experiments' dedicated networks at the pits. For further details of this intervention, please contact Netops by phone 74927 or e-mail Netops@cern.ch. IT/CS Group

  12. Network maintenance

    CERN Multimedia

    IT Department

    2009-01-01

    A site-wide network maintenance operation has been scheduled for Saturday 28 February. Most of the network devices of the General Purpose Network will be upgraded to a newer software version, in order to improve our network monitoring capabilities. This will result in a series of short (2-5 minute) random interruptions everywhere on the CERN sites throughout the day. This upgrade will not affect the Computer Centre itself, Building 613, the Technical Network or the LHC experiments' dedicated networks at the pits. Should you need more details on this intervention, please contact Netops by phone 74927 or e-mail Netops@cern.ch. IT/CS Group

  13. Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    The use of best-estimate (BE) computer codes in safety analysis for loss-of-coolant accidents (LOCA) is the major trend in many countries to reduce significant conservatism. A key feature of this BE evaluation requires the licensee to quantify the uncertainty of the calculations. It is therefore very important how the uncertainty distributions are determined before conducting the uncertainty evaluation. Uncertainty includes that of physical models and correlations, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of the computer code. In this respect, more mathematical methods are needed to reasonably determine the uncertainty ranges. A first uncertainty quantification is performed with various increments of two influential uncertainty parameters to obtain the calculated responses and their derivatives. A different data set with two influential uncertainty parameters for the FEBA tests is then chosen, applying stricter criteria for selecting responses and their derivatives, which may be regarded as the user effect in CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect of the number of uncertainty parameters, given the limitations of the CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for the FEBA tests are performed to check whether the experimental responses such as the cladding temperature or pressure drop lie inside the limits of the calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of the physical models in the MARS-KS thermal-hydraulic code during reflooding were quantified by the CIRCÉ method using the FEBA experimental tests, instead of expert judgment. Also, through the uncertainty evaluation for the FEBA and PERICLES tests, it was confirmed

  14. Development of hydrate risk quantification in oil and gas production

    Science.gov (United States)

    Chaudhari, Piyush N.

    order to reduce the parametric study that may require a long duration of time using The Colorado School of Mines Hydrate Kinetic Model (CSMHyK). The evolution of the hydrate plugging risk along flowline-riser systems is modeled for steady state and transient operations considering the effect of several critical parameters such as oil-hydrate slip, duration of shut-in, and water droplet size on a subsea tieback system. This research presents a novel platform for quantification of the hydrate plugging risk, which in-turn will play an important role in improving and optimizing current hydrate management strategies. The predictive strength of the hydrate risk quantification and hydrate prediction models will have a significant impact on flow assurance engineering and design with respect to building safe and efficient hydrate management techniques for future deep-water developments.

  15. Computer-assisted radiological quantification of rheumatoid arthritis

    International Nuclear Information System (INIS)

    Peloschek, P.L.

    2000-03-01

    The specific objective was to develop the layout and structure of a platform for effective quantification of rheumatoid arthritis (RA). A fully operative Java stand-alone application software (RheumaCoach) was developed to support the efficacy of the scoring process in RA (Web address: http://www.univie.ac.at/radio/radio.htm). Addressed as potential users of such a program are physicians enrolled in clinical trials to evaluate the course of RA and its modulation with drug therapies, and scientists developing new scoring modalities. The software 'RheumaCoach' consists of three major modules: the Tutorial starts with 'Rheumatoid Arthritis', to teach the basic pathology of the disease. Afterwards the section 'Imaging Standards' explains how to produce proper radiographs. 'Principles - How to use the Larsen Score', 'Radiographic Findings' and 'Quantification by Scoring' explain the requirements for unbiased scoring of RA. At the Data Input Sheet care was taken to follow the radiologist's approach in analysing films as published previously. At the Compute Sheet the calculated Larsen score may be compared with former scores, and the further possibilities (calculate, export, print, send) are easily accessible. In a first pre-clinical study the system was tested in an unstructured manner. Two structured evaluations (30 fully documented and blinded cases of RA, four radiologists scoring hands and feet with or without the RheumaCoach) followed. Between the evaluations we continually improved the software. For all readers the use of the RheumaCoach sped up the procedure; altogether, scoring without computer assistance needed about 20% more time. Availability of the programme via the internet provides common access for potential quality control in multi-center studies. Documentation of results in a specifically designed printout improves communication between radiologists and rheumatologists. The possibilities of direct export to other programmes and electronic

  16. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  17. Evolution of metabolic network organization

    Directory of Open Access Journals (Sweden)

    Bonchev Danail

    2010-05-01

    Abstract. Background: Comparison of metabolic networks across species is a key to understanding how evolutionary pressures shape these networks. By selecting taxa representative of different lineages or lifestyles and using a comprehensive set of descriptors of the structure and complexity of their metabolic networks, one can highlight both qualitative and quantitative differences in the metabolic organization of species subject to distinct evolutionary paths or environmental constraints. Results: We used a novel representation of metabolic networks, termed network of interacting pathways or NIP, to focus on the modular, high-level organization of the metabolic capabilities of the cell. Using machine learning techniques we identified the most relevant aspects of cellular organization that change under evolutionary pressures. We considered the transitions from prokarya to eukarya (with a focus on the transitions among the archaea, bacteria and eukarya), from unicellular to multicellular eukarya, from free living to host-associated bacteria, from anaerobic to aerobic, as well as the acquisition of cell motility or growth in environments of various levels of salinity or temperature. Intuitively, we expect organisms with more complex lifestyles to have more complex and robust metabolic networks. Here we demonstrate for the first time that such organisms are not only characterized by larger, denser networks of metabolic pathways but also have more efficiently organized cross communications, as revealed by subtle changes in network topology. These changes are unevenly distributed among metabolic pathways, with specific categories of pathways being promoted to more central locations as an answer to environmental constraints. Conclusions: Combining methods from graph theory and machine learning, we have shown here that evolutionary pressures not only affect gene and protein sequences, but also specific details of the complex wiring of functional modules

  18. The dimensionality of ecological networks

    DEFF Research Database (Denmark)

    Eklöf, Anna; Jacob, Ute; Kopp, Jason

    2013-01-01

    How many dimensions (trait-axes) are required to predict whether two species interact? This unanswered question originated with the idea of ecological niches, and yet bears relevance today for understanding what determines network structure. Here, we analyse a set of 200 ecological networks......, including food webs, antagonistic and mutualistic networks, and find that the number of dimensions needed to completely explain all interactions is small (... the most to explaining network structure. We show that accounting for a few traits dramatically improves our understanding of the structure of ecological networks. Matching traits for resources and consumers, for example, fruit size and bill gape, are the most successful combinations. These results link...

  19. Network Ambivalence

    Directory of Open Access Journals (Sweden)

    Patrick Jagoda

    2015-08-01

    The language of networks now describes everything from the Internet to the economy to terrorist organizations. In distinction to a common view of networks as a universal, originary, or necessary form that promises to explain everything from neural structures to online traffic, this essay emphasizes the contingency of the network imaginary. Network form, in its role as our current cultural dominant, makes scarcely imaginable the possibility of an alternative or an outside uninflected by networks. If so many things and relationships are figured as networks, however, then what is not a network? If a network points towards particular logics and qualities of relation in our historical present, what others might we envision in the future? In many ways, these questions are unanswerable from within the contemporary moment. Instead of seeking an avant-garde approach (to move beyond networks) or opting out of networks (in some cases, to recover elements of pre-networked existence), this essay proposes a third orientation: one of ambivalence that operates as a mode of extreme presence. I propose the concept of "network aesthetics," which can be tracked across artistic media and cultural forms, as a model, style, and pedagogy for approaching interconnection in the twenty-first century. The following essay is excerpted from Network Ambivalence (forthcoming from University of Chicago Press).

  20. Network workshop

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry

    2014-01-01

    This paper describes the background for, realisation of and author reflections on a network workshop held at ESERA2013. As a new research area in science education, networks offer a unique opportunity to visualise and find patterns and relationships in complicated social or academic network data....... These include student relations and interactions and epistemic and linguistic networks of words, concepts and actions. Network methodology has already found use in science education research. However, while networks hold the potential for new insights, they have not yet found wide use in the science education...... research community. With this workshop, participants were offered a way into network science based on authentic educational research data. The workshop was constructed as an inquiry lesson with emphasis on user autonomy. Learning activities had participants choose to work with one of two cases of networks...

  1. Network Convergence

    Indian Academy of Sciences (India)

    Network Convergence. User is interested in application and content - not technical means of distribution. Boundaries between distribution channels fade out. Network convergence leads to seamless application and content solutions.

  2. Industrial Networks

    DEFF Research Database (Denmark)

    Karlsson, Christer

    2015-01-01

    Companies organize in a way that involves many activities that are external to the traditional organizational boundaries. This presents challenges to operations management and managing operations involves many issues and actions dealing with external networks. Taking a network perspective changes...

  3. Network Science

    National Research Council Canada - National Science Library

    Leland, Will

    2006-01-01

    OVERVIEW: (1) A committee of technical experts, military officers and R&D managers was assembled by the National Research Council to reach consensus on the nature of networks and network research. (2...

  4. Security Shift in Future Network Architectures

    NARCIS (Netherlands)

    Hartog, T.; Schotanus, H.A.; Verkoelen, C.A.A.

    2010-01-01

    In current practice military communication infrastructures are deployed as stand-alone networked information systems. Network-Enabled Capabilities (NEC) and combined military operations lead to new requirements which current communication architectures cannot deliver. This paper informs IT

  5. Quantification of Uncertainties in Integrated Spacecraft System Models, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort is to investigate a novel uncertainty quantification (UQ) approach based on non-intrusive polynomial chaos (NIPC) for computationally efficient...

  6. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.; Adams, M.L.; McClarren, R.G.; Mallick, B.K.

    2011-01-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework

  7. Direct quantification of negatively charged functional groups on membrane surfaces

    KAUST Repository

    Tiraferri, Alberto; Elimelech, Menachem

    2012-01-01

    groups at the surface of dense polymeric membranes. Both techniques consist of associating the membrane surface moieties with chemical probes, followed by quantification of the bound probes. Uranyl acetate and toluidine blue O dye, which interact

  8. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    International Nuclear Information System (INIS)

    Olander, Lydia P; Wollenberg, Eva; Tubiello, Francesco N; Herold, Martin

    2014-01-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term. (paper)

  9. A Micropillar Compression Methodology for Ductile Damage Quantification

    NARCIS (Netherlands)

    Tasan, C.C.; Hoefnagels, J.P.M.; Geers, M.G.D.

    2012-01-01

    Microstructural damage evolution is reported to influence significantly the failures of new high-strength alloys. Its accurate quantification is, therefore, critical for (1) microstructure optimization and (2) continuum damage models to predict failures of these materials. As existing methodologies

  10. Multi data reservior history matching and uncertainty quantification framework

    KAUST Repository

    Katterbauer, Klemens; Hoteit, Ibrahim; Sun, Shuyu

    2015-01-01

    A multi-data reservoir history matching and uncertainty quantification framework is provided. The framework can utilize multiple data sets such as production, seismic, electromagnetic, gravimetric and surface deformation data for improving

  11. A micropillar compression methodology for ductile damage quantification

    NARCIS (Netherlands)

    Tasan, C.C.; Hoefnagels, J.P.M.; Geers, M.G.D.

    2012-01-01

    Microstructural damage evolution is reported to influence significantly the failures of new high-strength alloys. Its accurate quantification is, therefore, critical for (1) microstructure optimization and (2) continuum damage models to predict failures of these materials. As existing methodologies

  12. The value of serum Hepatitis B surface antigen quantification in ...

    African Journals Online (AJOL)

    The value of serum Hepatitis B surface antigen quantification in determining viral activity in chronic Hepatitis B virus infection. ... of CHB and also higher in hepatitis e antigen positive patients compared to hepatitis e antigen negative patients.

  13. an expansion of the aboveground biomass quantification model for ...

    African Journals Online (AJOL)

    Research Note BECVOL 3: an expansion of the aboveground biomass quantification model for ... African Journal of Range and Forage Science ... encroachment and estimation of food to browser herbivore species, was proposed during 1989.

  14. (1) H-MRS processing parameters affect metabolite quantification

    DEFF Research Database (Denmark)

    Bhogal, Alex A; Schür, Remmelt R; Houtepen, Lotte C

    2017-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) can be used to quantify in vivo metabolite levels, such as lactate, γ-aminobutyric acid (GABA) and glutamate (Glu). However, there are considerable analysis choices which can alter the accuracy or precision of (1)H-MRS metabolite quantification... We investigated the influence of model parameters and spectral quantification software on fitted metabolite concentration values. Sixty spectra in 30 individuals (repeated measures) were acquired using a 7-T MRI scanner. Data were processed by four independent research groups with the freedom to choose their own... + NAAG/Cr + PCr and Glu/Cr + PCr, respectively. Metabolite quantification using identical (1)H-MRS data was influenced by processing parameters, basis sets and software choice. Locally preferred processing choices affected metabolite quantification, even when using identical software. Our results...

  15. Dynamic spectrum management in green cognitive radio cellular networks

    KAUST Repository

    Sboui, Lokman; Ghazzai, Hakim; Rezki, Zouheir; Alouini, Mohamed-Slim

    2018-01-01

    In this paper, we propose a new cellular network operation scheme fulfilling the 5G requirements related to spectrum management and green communications. We focus on cognitive radio cellular networks in which both the primary network (PN

  16. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Roetter, Daniel Enrique Lucani

    2015-01-01

    Software Defined Networking (SDN) and Network Coding (NC) are two key concepts in networking that have garnered a large attention in recent years. On the one hand, SDN's potential to virtualize services in the Internet allows a large flexibility not only for routing data, but also to manage....... This paper advocates for the use of SDN to bring about future Internet and 5G network services by incorporating network coding (NC) functionalities. The inherent flexibility of both SDN and NC provides a fertile ground to envision more efficient, robust, and secure networking designs, that may also...

  17. Quantification Model for Estimating Temperature Field Distributions of Apple Fruit

    OpenAIRE

    Zhang , Min; Yang , Le; Zhao , Huizhong; Zhang , Leijie; Zhong , Zhiyou; Liu , Yanling; Chen , Jianhua

    2009-01-01

    A quantification model of transient heat conduction was provided to simulate apple fruit temperature distribution in the cooling process. The model was based on the energy variation of the apple fruit at different points. It took into account heat exchange of the representative elemental volume, metabolic heat and external heat. The following conclusions could be obtained: first, the quantification model can satisfactorily describe the tendency of apple fruit temperature dis...
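
    As a sketch of the kind of transient heat-conduction calculation described (greatly simplified to one spatial dimension, with illustrative property values that are not taken from the paper), an explicit finite-difference update might look like this.

        import numpy as np

        # Hypothetical thermal properties and discretization (values are illustrative only).
        alpha = 1.4e-7                    # thermal diffusivity of fruit tissue, m^2/s
        L, nx = 0.04, 41                  # half-thickness (m) and number of grid points
        dx = L / (nx - 1)
        dt = 0.4 * dx**2 / alpha          # time step satisfying the explicit stability limit

        T = np.full(nx, 20.0)             # initial fruit temperature, deg C
        T_air = 2.0                       # cooling-air temperature, deg C

        for _ in range(2000):             # explicit FTCS update of the 1D heat equation
            T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            T[0] = T[1]                   # symmetry (zero-flux) condition at the centre
            T[-1] = T_air                 # surface held at the air temperature (idealized)

        print(f"centre temperature after cooling: {T[0]:.2f} deg C")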

  18. Quantification of aortic regurgitation by magnetic resonance velocity mapping

    DEFF Research Database (Denmark)

    Søndergaard, Lise; Lindvig, K; Hildebrandt, P

    1993-01-01

    The use of magnetic resonance (MR) velocity mapping in the quantification of aortic valvular blood flow was examined in 10 patients with angiographically verified aortic regurgitation. MR velocity mapping succeeded in identifying and quantifying the regurgitation in all patients, and the regurgit...

  19. FRANX. Application for analysis and quantification of the APS fire

    International Nuclear Information System (INIS)

    Sánchez, A.; Osorio, F.; Ontoso, N.

    2014-01-01

    The FRANX application has been developed by EPRI within the Risk and Reliability User Group in order to facilitate the process of quantification and updating of the APS Fire (it also covers floods and earthquakes). Using the application, fire scenarios are quantified at the plant, integrating the tasks performed during the APS Fire. This paper describes the main features of the program that allow quantification of an APS Fire. (Author)

  20. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and to the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at the system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.
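
    The model propagates failure-mode probabilities to the system level using fault trees. Assuming independent basic events, a minimal sketch of AND/OR gate propagation is shown below; the probabilities and the tree itself are hypothetical and do not represent the FASRE architecture.

        def and_gate(probs):
            """All inputs must fail (independence assumed)."""
            p = 1.0
            for q in probs:
                p *= q
            return p

        def or_gate(probs):
            """The gate fails if any input fails (independence assumed)."""
            p_none = 1.0
            for q in probs:
                p_none *= (1.0 - q)
            return 1.0 - p_none

        # Hypothetical failure-mode probabilities of two functions and one attribute.
        p_parse, p_compute, p_timing = 1e-3, 5e-4, 2e-3

        # Example tree: the system fails if either function fails, or if both the
        # timing attribute and the compute function degrade together.
        p_system = or_gate([p_parse, p_compute, and_gate([p_timing, p_compute])])
        print(f"system failure probability ~ {p_system:.2e}")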

  1. Tissue quantification for development of pediatric phantom

    International Nuclear Information System (INIS)

    Alves, A.F.F.; Miranda, J.R.A.; Pina, D.R.

    2013-01-01

    The optimization of the risk-benefit ratio is a major concern in pediatric radiology, due to the greater vulnerability of children to the late somatic and genetic effects of radiation exposure compared to adults. In Brazil, it is estimated that head trauma accounts for 18% of deaths in the 1-5 year age group, and the radiograph is the primary diagnostic test for the detection of skull fracture. Knowing that image quality is essential to ensure the identification of anatomical structures and to minimize errors of diagnostic interpretation, this paper proposed the development and construction of homogeneous skull phantoms for the 1-5 year age group. The construction of the homogeneous phantoms was performed using the classification and quantification of the tissues present in the skulls of pediatric patients. In this procedure, computational algorithms implemented in Matlab were used to quantify the distinct biological tissues present in the anatomical regions studied, using retrospective CT images. Preliminary data obtained from measurements show that, between the ages of 1-5 years and assuming an average anteroposterior diameter of the pediatric skull region of 145.73 ± 2.97 mm, the skull can be represented by 92.34 ± 5.22 mm of Lucite and 1.75 ± 0.21 mm of aluminum plates in a PEP (patient equivalent phantom) arrangement. After their construction, the phantoms will be used for image and dose optimization of pediatric computed radiography examination protocols.

  2. Uncertainty quantification in flood risk assessment

    Science.gov (United States)

    Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto

    2017-04-01

    Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk to extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.

  3. Cross recurrence quantification for cover song identification

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Joan; Serra, Xavier; Andrzejak, Ralph G [Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona (Spain)], E-mail: joan.serraj@upf.edu

    2009-09-15

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Roessler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
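
    A minimal sketch of building a cross recurrence plot from two descriptor time series follows; it only marks pairs of states closer than a threshold and reports the recurrence rate, not the curved-trace tracking measure proposed in the paper, and the descriptor sequences are synthetic rather than audio-derived.

        import numpy as np

        def cross_recurrence_plot(x, y, eps):
            """CRP[i, j] = 1 where state x_i is within distance eps of state y_j."""
            d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
            return (d < eps).astype(int)

        rng = np.random.default_rng(1)
        x = rng.normal(size=(200, 12))             # hypothetical descriptor sequence of song A
        y = x + 0.1 * rng.normal(size=(200, 12))   # a noisy "cover" of the same sequence

        crp = cross_recurrence_plot(x, y, eps=0.8)
        print("recurrence rate:", crp.mean())      # higher values suggest matching renditions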

  4. Verification Validation and Uncertainty Quantification for CGS

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kamm, James R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    The overall conduct of verification, validation and uncertainty quantification (VVUQ) is discussed through the construction of a workflow relevant to computational modeling including the turbulence problem in the coarse grained simulation (CGS) approach. The workflow contained herein is defined at a high level and constitutes an overview of the activity. Nonetheless, the workflow represents an essential activity in predictive simulation and modeling. VVUQ is complex and necessarily hierarchical in nature. The particular characteristics of VVUQ elements depend upon where the VVUQ activity takes place in the overall hierarchy of physics and models. In this chapter, we focus on the differences between and interplay among validation, calibration and UQ, as well as the difference between UQ and sensitivity analysis. The discussion in this chapter is at a relatively high level and attempts to explain the key issues associated with the overall conduct of VVUQ. The intention is that computational physicists can refer to this chapter for guidance regarding how VVUQ analyses fit into their efforts toward conducting predictive calculations.

  5. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
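
    As a concrete reminder of the calculation discussed here, a minimal sketch of Bayes' rule for a binary test result, together with the binary entropy of the resulting post-test probability as one information-theoretic summary of residual uncertainty, is given below with purely illustrative numbers.

        import math

        def post_test_probability(pre_test, sensitivity, specificity, positive=True):
            """Bayes' rule for a binary diagnostic test result."""
            if positive:
                p_result_given_disease = sensitivity
                p_result_given_healthy = 1.0 - specificity
            else:
                p_result_given_disease = 1.0 - sensitivity
                p_result_given_healthy = specificity
            joint_disease = pre_test * p_result_given_disease
            joint_healthy = (1.0 - pre_test) * p_result_given_healthy
            return joint_disease / (joint_disease + joint_healthy)

        def binary_entropy(p):
            """Residual diagnostic uncertainty of a probability, in bits."""
            if p in (0.0, 1.0):
                return 0.0
            return -(p * math.log2(p) + (1.0 - p) * math.log2(1.0 - p))

        # Illustrative numbers only: 10% pre-test probability, 90% sensitivity,
        # 95% specificity, positive test result.
        post = post_test_probability(0.10, 0.90, 0.95, positive=True)
        print(f"post-test probability = {post:.3f}")          # ~0.667
        print(f"residual uncertainty  = {binary_entropy(post):.3f} bits")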

  6. [Quantification of acetabular coverage in normal adult].

    Science.gov (United States)

    Lin, R M; Yang, C Y; Yu, C Y; Yang, C R; Chang, G L; Chou, Y L

    1991-03-01

    Quantification of acetabular coverage is important and can be expressed by superimposition of cartilage tracings on the maximum cross-sectional area of the femoral head. A practical AutoLISP program on PC AutoCAD has been developed by us to quantify the acetabular coverage through numerical expression of computed tomography images. Thirty adults (60 hips) with normal center-edge angle and acetabular index on plain X-ray were randomly selected for serial CT scanning. The slices were prepared with a fixed coordinate system and in continuous sections of 5 mm thickness. The contours of the cartilage of each section were digitized into a PC and processed by AutoCAD programs to quantify and characterize the acetabular coverage of normal and dysplastic adult hips. We found that a total coverage ratio of greater than 80%, an anterior coverage ratio of greater than 75% and a posterior coverage ratio of greater than 80% can be categorized as normal. The polar edge distance is a good indicator for the evaluation of preoperative and postoperative coverage conditions. For standardization and evaluation of acetabular coverage, the most suitable parameters are the total coverage ratio, anterior coverage ratio, posterior coverage ratio and polar edge distance. However, the medial and lateral coverage ratios are indispensable in cases of dysplastic hip because variations between them are so great that acetabuloplasty may be impossible. This program can also be used to classify precisely the type of dysplastic hip.

  7. On uncertainty quantification in hydrogeology and hydrogeophysics

    Science.gov (United States)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  8. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
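
    A minimal sketch of the stochastic diagonal estimation idea (Hutchinson-style probing with Rademacher right-hand sides, here using a dense solve in place of the mixed-precision iterative solver described) could look as follows; the matrix and sample count are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical symmetric positive definite "covariance" matrix.
        n = 200
        B = rng.normal(size=(n, n))
        A = B @ B.T + n * np.eye(n)

        s = 64                                      # number of probing vectors
        V = rng.choice([-1.0, 1.0], size=(n, s))    # Rademacher right-hand sides
        X = np.linalg.solve(A, V)                   # in practice: one iterative solve per column

        # Hutchinson-style estimate: E[v * (A^-1 v)] equals diag(A^-1) elementwise.
        diag_est = (V * X).mean(axis=1)

        diag_true = np.diag(np.linalg.inv(A))       # reference value for this small example
        print("max relative error:", np.max(np.abs(diag_est - diag_true) / diag_true))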

  9. Quantification of variability in trichome patterns

    Directory of Open Access Journals (Sweden)

    Bettina eGreese

    2014-11-01

    While pattern formation is studied in various areas of biology, little is known about the intrinsic noise leading to variations between individual realizations of the pattern. One prominent example of de novo pattern formation in plants is the patterning of trichomes on Arabidopsis leaves, which involves genetic regulation and cell-to-cell communication. These processes are potentially variable due to, e.g., the abundance of cell components or environmental conditions. To elevate the understanding of the regulatory processes underlying the pattern formation it is crucial to quantitatively analyze the variability in naturally occurring patterns. Here, we review recent approaches towards characterization of noise in trichome initiation. We present methods for the quantification of spatial patterns, which are the basis for data-driven mathematical modeling and enable the analysis of noise from different sources. Besides the insight gained on trichome formation, the examination of observed trichome patterns also shows that highly regulated biological processes can be substantially affected by variability.

  10. Cross recurrence quantification for cover song identification

    International Nuclear Information System (INIS)

    Serra, Joan; Serra, Xavier; Andrzejak, Ralph G

    2009-01-01

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Roessler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.

  11. Quality Quantification of Evaluated Cross Section Covariances

    International Nuclear Information System (INIS)

    Varet, S.; Dossantos-Uzarralde, P.; Vayatis, N.

    2015-01-01

    Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the 85 Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations
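
    A minimal sketch of the proposed criterion, the Kullback-Leibler distance between two zero-mean Gaussians defined by candidate covariance matrices, is given below; the matrices are invented and the bootstrap estimation step is not shown.

        import numpy as np

        def kl_gaussian(sigma_p, sigma_q):
            """KL divergence D(N(0, sigma_p) || N(0, sigma_q)) for zero-mean Gaussians."""
            k = sigma_p.shape[0]
            inv_q = np.linalg.inv(sigma_q)
            trace_term = np.trace(inv_q @ sigma_p)
            _, logdet_p = np.linalg.slogdet(sigma_p)
            _, logdet_q = np.linalg.slogdet(sigma_q)
            return 0.5 * (trace_term - k + logdet_q - logdet_p)

        # Two hypothetical 2x2 covariance estimates of the same cross sections.
        sigma_a = np.array([[0.04, 0.01], [0.01, 0.09]])
        sigma_b = np.array([[0.05, 0.00], [0.00, 0.08]])
        print("KL(A || B) =", kl_gaussian(sigma_a, sigma_b))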

  12. A User Driven Dynamic Circuit Network Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Guok, Chin; Robertson, David; Chaniotakis, Evangelos; Thompson, Mary; Johnston, William; Tierney, Brian

    2008-10-01

    The requirements for network predictability are becoming increasingly critical to the DoE science community where resources are widely distributed and collaborations are world-wide. To accommodate these emerging requirements, the Energy Sciences Network has established a Science Data Network to provide user driven guaranteed bandwidth allocations. In this paper we outline the design, implementation, and secure coordinated use of such a network, as well as some lessons learned.

  13. A security architecture for 5G networks

    OpenAIRE

    Arfaoui, Ghada; Bisson, Pascal; Blom, Rolf; Borgaonkar, Ravishankar; Englund, Håkan; Félix, Edith; Klaedtke, Felix; Nakarmi, Prajwol Kumar; Näslund, Mats; O’Hanlon, Piers; Papay, Juri; Suomalainen, Jani; Surridge, Mike; Wary, Jean-Philippe; Zahariev, Alexander

    2018-01-01

    5G networks will provide opportunities for the creation of new services, for new business models, and for new players to enter the mobile market. The networks will support efficient and cost-effective launch of a multitude of services, tailored for different vertical markets having varying service and security requirements, and involving a large number of actors. Key technology concepts are network slicing and network softwarisation, including network function virtualisation and software-defi...

  14. On the complex quantification of risk: systems-based perspective on terrorism.

    Science.gov (United States)

    Haimes, Yacov Y

    2011-08-01

    This article highlights the complexity of the quantification of the multidimensional risk function, develops five systems-based premises on quantifying the risk of terrorism to a threatened system, and advocates the quantification of vulnerability and resilience through the states of the system. The five premises are: (i) There exists interdependence between a specific threat to a system by terrorist networks and the states of the targeted system, as represented through the system's vulnerability, resilience, and criticality-impact. (ii) A specific threat, its probability, its timing, the states of the targeted system, and the probability of consequences can be interdependent. (iii) The two questions in the risk assessment process: "What is the likelihood?" and "What are the consequences?" can be interdependent. (iv) Risk management policy options can reduce both the likelihood of a threat to a targeted system and the associated likelihood of consequences by changing the states (including both vulnerability and resilience) of the system. (v) The quantification of risk to a vulnerable system from a specific threat must be built on a systemic and repeatable modeling process, by recognizing that the states of the system constitute an essential step to construct quantitative metrics of the consequences based on intelligence gathering, expert evidence, and other qualitative information. The fact that the states of all systems are functions of time (among other variables) makes the time frame pivotal in each component of the process of risk assessment, management, and communication. Thus, risk to a system, caused by an initiating event (e.g., a threat) is a multidimensional function of the specific threat, its probability and time frame, the states of the system (representing vulnerability and resilience), and the probabilistic multidimensional consequences. © 2011 Society for Risk Analysis.

  15. Simple and accurate quantification of BTEX in ambient air by SPME and GC-MS.

    Science.gov (United States)

    Baimatova, Nassiba; Kenessov, Bulat; Koziel, Jacek A; Carlsen, Lars; Bektassov, Marat; Demyanenko, Olga P

    2016-07-01

    Benzene, toluene, ethylbenzene and xylenes (BTEX) comprise one of the most ubiquitous and hazardous groups of ambient air pollutants of concern. Application of standard analytical methods for quantification of BTEX is limited by the complexity of sampling and sample preparation equipment, and by budget requirements. Methods based on SPME represent a simpler alternative, but still require complex calibration procedures. The objective of this research was to develop a simpler, low-budget, and accurate method for quantification of BTEX in ambient air based on SPME and GC-MS. Standard 20-mL headspace vials were used for field air sampling and calibration. To avoid challenges with obtaining and working with 'zero' air, slope factors of external standard calibration were determined using standard addition and inherently polluted lab air. For the polydimethylsiloxane (PDMS) fiber, differences between the slope factors of calibration plots obtained using lab and outdoor air were below 14%. The PDMS fiber provided higher precision during calibration, while the use of the Carboxen/PDMS fiber resulted in lower detection limits for benzene and toluene. To provide sufficient accuracy, the use of 20-mL vials requires triplicate sampling and analysis. The method was successfully applied for analysis of 108 ambient air samples from Almaty, Kazakhstan. Average concentrations of benzene, toluene, ethylbenzene and o-xylene were 53, 57, 11 and 14 µg m⁻³, respectively. The developed method can be modified for further quantification of a wider range of volatile organic compounds in air. In addition, the new method is amenable to automation. Copyright © 2016 Elsevier B.V. All rights reserved.
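
    As a rough illustration of the calibration idea described above (determining a slope factor by standard addition in inherently polluted air), the sketch below fits spiked responses and back-calculates the unspiked concentration; the numbers and variable names are invented for illustration only, not taken from the paper.

        import numpy as np

        # Hypothetical standard-addition data for one analyte (e.g., benzene):
        # spiked amount in the vial headspace (µg/m³) vs. GC-MS peak area.
        added = np.array([0.0, 10.0, 20.0, 40.0])             # spiked amounts
        response = np.array([5200., 9100., 13150., 21000.])   # instrument response

        # Linear fit: response = slope * (c0 + added), so c0 = intercept / slope.
        slope, intercept = np.polyfit(added, response, 1)
        c0 = intercept / slope   # concentration already present in the sampled air

        print(f"slope factor: {slope:.1f} area units per µg/m³")
        print(f"estimated ambient concentration: {c0:.1f} µg/m³")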

  16. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity property of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Networks' infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
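
    A minimal sketch of the game-theoretic idea (not the authors' actual model): a zero-sum attacker-defender game over network assets, solved here by brute-force evaluation of pure-strategy choices. All payoffs and asset labels are invented.

        import numpy as np

        # Rows: defender hardens asset i; columns: attacker targets asset j.
        # Entries: expected loss to the defender (= attacker's gain), invented numbers.
        loss = np.array([
            [1.0, 8.0, 6.0],   # defender protects asset 0
            [7.0, 2.0, 6.0],   # defender protects asset 1
            [7.0, 8.0, 1.0],   # defender protects asset 2
        ])

        # Pure-strategy security levels.
        defender_best = loss.max(axis=1).argmin()   # minimise the worst-case loss
        attacker_best = loss.min(axis=0).argmax()   # maximise the guaranteed gain

        print("defender hardens asset", defender_best)
        print("attacker targets asset", attacker_best)
        print("worst-case loss with that defence:", loss[defender_best].max())

    A mixed-strategy (linear-programming) solution would refine this, but even the pure-strategy security levels illustrate how a requirement such as "worst-case loss below X" can be checked against an explicit attacker model.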

  17. A network model for characterizing brine channels in sea ice

    Science.gov (United States)

    Lieblappen, Ross M.; Kumar, Deip D.; Pauls, Scott D.; Obbard, Rachel W.

    2018-03-01

    The brine pore space in sea ice can form complex connected structures whose geometry is critical in the governance of important physical transport processes between the ocean, sea ice, and surface. Recent advances in three-dimensional imaging using X-ray micro-computed tomography have enabled the visualization and quantification of the brine network morphology and variability. Using imaging of first-year sea ice samples at in situ temperatures, we create a new mathematical network model to characterize the topology and connectivity of the brine channels. This model provides a statistical framework where we can characterize the pore networks via two parameters, depth and temperature, for use in dynamical sea ice models. Our approach advances the quantification of brine connectivity in sea ice, which can help investigations of bulk physical properties, such as fluid permeability, that are key in both global and regional sea ice models.
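
    As a hedged sketch of the general idea (a graph whose nodes are pore junctions and whose edges are brine channels, not the authors' exact construction), connectivity can be summarised with standard graph statistics; the coordinates and edges below are invented.

        import networkx as nx

        # Toy pore network: nodes are junctions (with depth in cm), edges are channels.
        G = nx.Graph()
        G.add_nodes_from([(0, {"depth": 2.0}), (1, {"depth": 4.5}),
                          (2, {"depth": 5.0}), (3, {"depth": 7.5}), (4, {"depth": 9.0})])
        G.add_edges_from([(0, 1), (1, 2), (1, 3), (3, 4)])

        # Simple topology/connectivity summaries of the kind used to compare networks.
        print("mean degree:", sum(d for _, d in G.degree()) / G.number_of_nodes())
        print("connected components:", nx.number_connected_components(G))
        print("vertical path exists (top to bottom):", nx.has_path(G, 0, 4))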

  18. Managerial Challenges Within Networks - Emphasizing the Paradox of Network Participation

    DEFF Research Database (Denmark)

    Jakobsen, Morten

    2003-01-01

    Flexibility and access to numerous resources are essential benefits associated with network participation. An important aspect of managing the network participation of a company is to maintain a dynamic portfolio of partners, and thereby keep up the strategic opportunities for development. However......, maintaining the dynamics within a network seems to be a complex challenge. There is a risk that the network ends up in The Paradox of Network Participation. The desired renewal and flexibility are not utilised because the involved parties preserve the existing networks structure consisting of the same...... and thereby sort out the paradox of network participation. Trust and information are mechanisms employed to absorb uncertainty. The relationship between trust and the requirement for information depends on the maturity of the relationship. When trust becomes too important as uncertainty absorption mechanism...

  19. Managerial challenges within networks: emphasizing the paradox of network participation

    DEFF Research Database (Denmark)

    Jakobsen, Morten

    Flexibility and access to numerous resources are essential benefits associated with network participation. An important aspect of managing the network participation of a company is to maintain a dynamic portfolio of partners, and thereby keep up the strategic opportunities for development. However......, maintaining the dynamics within a network seems to be a complex challenge. There is a risk that the network ends up in The Paradox of Network Participation. The desired renewal and flexibility are not utilised because the involved parties preserve the existing networks structure consisting of the same...... and thereby sort out the paradox of network participation. Trust and information are mechanisms employed to absorb uncertainty. The relationship between trust and the requirement for information depends on the maturity of the relationship. When trust becomes too important as uncertainty absorption mechanism...

  20. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimise the management of the Technical Network (TN), to facilitate understanding of the purpose of devices connected to the TN and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive e-mails from IT/CS asking them to add the corresponding information in the network database at "network.cern.ch". Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  1. Validation of methods for the detection and quantification of engineered nanoparticles in food

    DEFF Research Database (Denmark)

    Linsinger, T.P.J.; Chaudhry, Q.; Dehalu, V.

    2013-01-01

    the methods apply equally well to particles of different suppliers. In trueness testing, information whether the particle size distribution has changed during analysis is required. Results are largely expected to follow normal distributions due to the expected high number of particles. An approach...... approach for the validation of methods for detection and quantification of nanoparticles in food samples. It proposes validation of identity, selectivity, precision, working range, limit of detection and robustness, bearing in mind that each “result” must include information about the chemical identity...

  2. Standardization and quantification in FDG-PET/CT imaging for staging and restaging of malignant disease.

    Science.gov (United States)

    Gámez-Cenzano, Cristina; Pino-Sorroche, Francisco

    2014-04-01

    There is a growing interest in using quantification in FDG-PET/CT in oncology, especially for evaluating response to therapy. Complex full quantitative procedures with blood sampling and dynamic scanning have been clinically replaced by the use of standardized uptake value measurements that provide an index of regional tracer uptake normalized to the administered dose of FDG. Some approaches have been proposed for assessing quantitative metabolic response, such as EORTC and PERCIST criteria in solid tumors. When using standardized uptake value in clinical routine and multicenter trials, standardization of protocols and quality control procedures of instrumentation is required. Copyright © 2014 Elsevier Inc. All rights reserved.
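
    For reference, the body-weight-normalised standardized uptake value mentioned above is commonly written (a standard textbook form, not quoted from the article) as:

        \mathrm{SUV}_{\mathrm{bw}} = \frac{C_{\mathrm{tissue}}\ [\mathrm{kBq/mL}]}{\mathrm{injected\ dose}\ [\mathrm{kBq}]\,/\,\mathrm{body\ weight}\ [\mathrm{g}]},

    with the tissue activity concentration decay-corrected to the injection time.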

  3. Spatial networks

    Science.gov (United States)

    Barthélemy, Marc

    2011-02-01

    Complex systems are very often organized under the form of networks where nodes and edges are embedded in space. Transportation and mobility networks, Internet, mobile phone networks, power grids, social and contact networks, and neural networks, are all examples where space is relevant and where topology alone does not contain all the information. Characterizing and understanding the structure and the evolution of spatial networks is thus crucial for many different fields, ranging from urbanism to epidemiology. An important consequence of space on networks is that there is a cost associated with the length of edges which in turn has dramatic effects on the topological structure of these networks. We will thoroughly explain the current state of our understanding of how the spatial constraints affect the structure and properties of these networks. We will review the most recent empirical observations and the most important models of spatial networks. We will also discuss various processes which take place on these spatial networks, such as phase transitions, random walks, synchronization, navigation, resilience, and disease spread.

  4. Network science

    CERN Document Server

    Barabasi, Albert-Laszlo

    2016-01-01

    Networks are everywhere, from the Internet, to social networks, and the genetic networks that determine our biological existence. Illustrated throughout in full colour, this pioneering textbook, spanning a wide range of topics from physics to computer science, engineering, economics and the social sciences, introduces network science to an interdisciplinary audience. From the origins of the six degrees of separation to explaining why networks are robust to random failures, the author explores how viruses like Ebola and H1N1 spread, and why it is that our friends have more friends than we do. Using numerous real-world examples, this innovatively designed text includes clear delineation between undergraduate and graduate level material. The mathematical formulas and derivations are included within Advanced Topics sections, enabling use at a range of levels. Extensive online resources, including films and software for network analysis, make this a multifaceted companion for anyone with an interest in network sci...

  5. Vulnerability of network of networks

    Science.gov (United States)

    Havlin, S.; Kenett, D. Y.; Bashan, A.; Gao, J.; Stanley, H. E.

    2014-10-01

    Our dependence on networks - be they infrastructure, economic, social or others - leaves us prone to crises caused by the vulnerabilities of these networks. There is a great need to develop new methods to protect infrastructure networks and prevent cascades of failures (especially in cases of coupled networks). Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How, and at what cost, can one restructure the network such that it will become more robust against malicious attacks? The gradual increase in attacks on the networks society depends on - Internet, mobile phone, transportation, air travel, banking, etc. - emphasizes the need to develop new strategies to protect and defend these crucial communication and infrastructure networks. One example is the threat of liquid explosives a few years ago, which completely shut down air travel for days and created extreme changes in regulations. Such threats and dangers warrant the need for new tools and strategies to defend critical infrastructure. In this paper we review recent advances in the theoretical understanding of the vulnerabilities of interdependent networks with and without spatial embedding, attack strategies and their effect on such networks of networks, as well as recently developed strategies to optimize and repair failures caused by such attacks.

  6. Structure determination of electrodeposited zinc-nickel alloys: thermal stability and quantification using XRD and potentiodynamic dissolution

    International Nuclear Information System (INIS)

    Fedi, B.; Gigandet, M.P.; Hihn, J-Y; Mierzejewski, S.

    2016-01-01

    Highlights: • Quantification of zinc-nickel phases between 1.2% and 20%. • Coupling XRD to partial potentiodynamic dissolution. • Deconvolution of anodic stripping curves. • Phase quantification after annealing. - Abstract: Zinc-nickel coatings obtained by electrodeposition reveal the presence of metastable phases in various quantities, thus requiring their identification, a study of their thermal stability, and, finally, determination of their respective proportions. By combining XRD measurement with partial potentiodynamic dissolution, anodic peaks were indexed to allow their quantification. Quantification of electrodeposited zinc-nickel alloys approximately 10 μm thick was thus carried out for nickel contents between 1.2% and 20%, and exhibited good accuracy. This method was then extended to the same set of alloys after annealing (250 °C, 2 h), thus bringing the structural organization closer to its thermodynamic equilibrium. The result obtained ensures better understanding of the crystallization of metastable phases and of the evolution of phase proportions in a bi-phasic zinc-nickel coating. Finally, the presence of a single γ phase and its thermal stability in the 12% to 15% range provides important information for the coating's anti-corrosion behavior.
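
    A hedged sketch of the deconvolution step mentioned in the highlights: fitting overlapping anodic-stripping peaks as Gaussians and using peak areas as a proxy for phase proportions. Peak shapes, positions, and data are illustrative assumptions, not the authors' procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def two_gaussians(E, a1, mu1, s1, a2, mu2, s2):
            """Sum of two Gaussian peaks as a function of potential E (V)."""
            return (a1 * np.exp(-((E - mu1) / s1) ** 2 / 2)
                    + a2 * np.exp(-((E - mu2) / s2) ** 2 / 2))

        # Synthetic anodic stripping curve (current vs. potential), invented numbers.
        E = np.linspace(-1.1, -0.6, 200)
        i_meas = two_gaussians(E, 1.0, -0.95, 0.03, 0.6, -0.80, 0.04)
        i_meas += np.random.default_rng(0).normal(0, 0.01, E.size)

        popt, _ = curve_fit(two_gaussians, E, i_meas,
                            p0=[1, -0.95, 0.05, 0.5, -0.8, 0.05])
        area1 = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
        area2 = popt[3] * abs(popt[5]) * np.sqrt(2 * np.pi)
        print("phase 1 fraction of dissolved charge:", area1 / (area1 + area2))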

  7. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of Poisson statistics.

    Science.gov (United States)

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
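
    The Poisson step described above can be illustrated with a short sketch (the replicate counts are invented): with n replicates of which k are negative, the mean number of integrated copies per reaction follows from the zero term of the Poisson distribution.

        import math

        def poisson_copies(n_replicates: int, n_negative: int) -> float:
            """Estimate mean copies per reaction from the fraction of negative replicates.

            P(negative) = exp(-lambda)  =>  lambda = -ln(k_neg / n).
            """
            if n_negative == 0:
                raise ValueError("All replicates positive: estimate is off-scale high.")
            return -math.log(n_negative / n_replicates)

        # Example: a 42-replicate assay (as in the paper's design) with 17 negative wells
        # (the count of negatives here is invented).
        lam = poisson_copies(42, 17)
        print(f"estimated integrated HIV DNA: {lam:.2f} copies per reaction")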

  8. Pure hydroxyapatite phantoms for the calibration of in vivo X-ray fluorescence systems of bone lead and strontium quantification.

    Science.gov (United States)

    Da Silva, Eric; Kirkham, Brian; Heyd, Darrick V; Pejović-Milić, Ana

    2013-10-01

    Plaster of Paris [poP, CaSO4·½H2O] is the standard phantom material used for the calibration of in vivo X-ray fluorescence (IVXRF)-based systems of bone metal quantification (i.e. bone strontium and lead). Calibration of IVXRF systems of bone metal quantification employs a coherent normalization procedure, which requires the application of a coherent correction factor (CCF) to the data, calculated as the ratio of the relativistic form factors of the phantom material and bone mineral. Various issues have been raised as to the suitability of poP for the calibration of IVXRF systems of bone metal quantification, including its chemical purity and its chemical difference from bone mineral (a calcium phosphate). This work describes the preparation of a chemically pure hydroxyapatite phantom material, of known composition and stoichiometry, proposed for the purpose of calibrating IVXRF systems of bone strontium and lead quantification as a replacement for poP. The issue of contamination by the analyte was resolved by preparing pure Ca(OH)2 by hydroxide precipitation, which was found to bring strontium and lead levels to those of the bone mineral component of NIST SRM 1486 (bone meal), as determined by powder X-ray diffraction spectrometry.

  9. Atomic force microscopy applied to the quantification of nano-precipitates in thermo-mechanically treated microalloyed steels

    Energy Technology Data Exchange (ETDEWEB)

    Renteria-Borja, Luciano [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Hurtado-Delgado, Eduardo, E-mail: hurtado@itmorelia.edu.mx [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Garnica-Gonzalez, Pedro [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Dominguez-Lopez, Ivan; Garcia-Garcia, Adrian Luis [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada-IPN Unidad Queretaro, Cerro Blanco No. 141, Colinas del Cimatario, 76090 Queretaro (Mexico)

    2012-07-15

    Quantification of nanometer-size precipitates in microalloyed steels has been traditionally performed using transmission electron microscopy (TEM), in spite of its complicated sample preparation procedures, prone to preparation errors and sample perturbation. In contrast to TEM procedures, atomic force microscopy (AFM) is performed on the as-prepared specimen, with sample preparation requirements similar to those for optical microscopy (OM), rendering three-dimensional representations of the sample surface with vertical resolution of a fraction of a nanometer. In AFM, contrast mechanisms are directly related to surface properties such as topography, adhesion, and stiffness, among others. Chemical etching was performed using 0.5% nital, at time intervals between 4 and 20 s, in 4 s steps, until reaching the desired surface finish. For the present application, an average surface-roughness peak-height below 200 nm was sought. Quantification results of nanometric precipitates were obtained from the statistical analysis of AFM images of the microstructure developed by microalloyed Nb and V-Mo steels. Topography and phase contrast AFM images were used for quantification. The results obtained using AFM are consistent with similar TEM reports. - Highlights: • We quantified nanometric precipitates in Nb and V-Mo microalloyed steels using AFM. • Microstructures of the thermo-mechanically treated microalloyed steels were used. • Topography and phase contrast AFM images were used for quantification. • AFM results are comparable with traditionally obtained TEM measurements.
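
    A rough sketch of how such image-based counting is often done (thresholding a topography image and labelling connected regions); the threshold and synthetic image are placeholders, not the authors' analysis pipeline.

        import numpy as np
        from scipy import ndimage

        rng = np.random.default_rng(1)
        height = rng.normal(0, 1.0, (256, 256))      # synthetic AFM topography (nm)
        height[100:104, 50:54] += 8.0                 # two fake precipitates
        height[200:203, 180:183] += 9.0

        mask = height > 5.0                           # height threshold (assumed)
        labels, n_particles = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, range(1, n_particles + 1))

        print("precipitates detected:", n_particles)
        print("areas (pixels):", sizes)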

  10. Chip-Oriented Fluorimeter Design and Detection System Development for DNA Quantification in Nano-Liter Volumes

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2009-12-01

    The chip-based polymerase chain reaction (PCR) system has been developed in recent years to achieve DNA quantification. Using a microstructure and miniature chip, the volume consumption for a PCR can be reduced to a nano-liter. With high-speed cycling and a low reaction volume, the time consumption of one PCR cycle performed on a chip can be reduced. However, most of the presented prototypes employ commercial fluorimeters which are not optimized for fluorescence detection of such a small-quantity sample. This limits the performance of DNA quantification, in particular causing low experiment reproducibility. This study discusses the concept of a chip-oriented fluorimeter design. Using an analytical model, the current study analyzes the sensitivity and dynamic range of the fluorimeter to fit the requirements for detecting fluorescence in nano-liter volumes. Through the optimized processes, a real-time PCR-on-a-chip system using only a one nano-liter test sample is as sensitive as a commercial real-time PCR machine using a twenty micro-liter sample. The signal-to-noise (S/N) ratio of the chip system for DNA quantification with hepatitis B virus (HBV) plasmid samples is 3 dB higher. DNA quantification by the miniature chip shows higher reproducibility compared to the commercial machine with respect to samples of initial concentrations from 10³ to 10⁵ copies per reaction.

  11. A RP-HPLC method for quantification of diclofenac sodium released from biological macromolecules.

    Science.gov (United States)

    Bhattacharya, Shiv Sankar; Banerjee, Subham; Ghosh, Ashoke Kumar; Chattopadhyay, Pronobesh; Verma, Anurag; Ghosh, Amitava

    2013-07-01

    Interpenetrating network (IPN) microbeads of sodium carboxymethyl locust bean gum (SCMLBG) and sodium carboxymethyl cellulose (SCMC) containing diclofenac sodium (DS), a nonsteroidal anti-inflammatory drug, were prepared by a single water-in-water (w/w) emulsion gelation process using AlCl3 as cross-linking agent in a completely aqueous environment. A pharmacokinetic study of these IPN microbeads was then carried out using a simple and feasible high-performance liquid chromatographic method with UV detection, which was developed and validated for the quantification of diclofenac sodium in rabbit plasma. The chromatographic separation was carried out on a Hypersil BDS C18 column (250 mm × 4.6 mm; 5 µm). The mobile phase was a mixture of acetonitrile and methanol (70:30, v/v) at a flow rate of 1.0 ml/min. The UV detection was set at 276 nm. The extraction recovery of diclofenac sodium in plasma for the three quality control (QC) samples ranged from 81.52% to 95.29%. The calibration curve was linear in the concentration range of 20-1000 ng/ml with a correlation coefficient (r²) above 0.9951. The method was specific and sensitive, with a limit of quantification of 20 ng/ml. In stability tests, diclofenac sodium in rabbit plasma was stable during storage and the assay procedure. Copyright © 2013. Published by Elsevier B.V.
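
    The external calibration described above (20-1000 ng/ml, r² above 0.9951) amounts to a simple linear fit; the sketch below, with invented peak areas, shows the fit, the coefficient of determination, and back-calculation of an unknown plasma concentration.

        import numpy as np

        conc = np.array([20, 50, 100, 250, 500, 1000], dtype=float)          # ng/ml standards
        area = np.array([410, 1030, 2080, 5150, 10280, 20650], dtype=float)  # invented responses

        slope, intercept = np.polyfit(conc, area, 1)
        pred = slope * conc + intercept
        r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

        unknown_area = 7300.0
        unknown_conc = (unknown_area - intercept) / slope
        print(f"r^2 = {r2:.4f}; unknown sample ≈ {unknown_conc:.0f} ng/ml")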

  12. System resiliency quantification using non-state-space and state-space analytic models

    International Nuclear Information System (INIS)

    Ghosh, Rahul; Kim, DongSeong; Trivedi, Kishor S.

    2013-01-01

    Resiliency is becoming an important service attribute for large-scale distributed systems and networks. Key problems in resiliency quantification are the lack of consensus on the definition of resiliency and of a systematic approach to quantify system resiliency. In general, resiliency is defined as the ability of a system/person/organization to recover from, defy or resist any shock, insult, or disturbance [1]. Many researchers interpret resiliency as a synonym for fault-tolerance and reliability/availability. However, the effect of failure/repair on systems is already covered by reliability/availability measures, and that on individual jobs is well covered under the umbrella of performability [2] and task completion time analysis [3]. We use the definition of Laprie [4] and Simoncini [5], in which resiliency is the persistence of service delivery that can justifiably be trusted when facing changes. The changes we are referring to here are beyond the envelope of system configurations already considered during system design, that is, beyond fault tolerance. In this paper, we outline a general approach for system resiliency quantification. Using examples of non-state-space and state-space stochastic models, we analytically-numerically quantify the resiliency of system performance, reliability, availability and performability measures with respect to structural and parametric changes.
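
    To make the non-state-space vs. state-space distinction concrete, here is a toy availability example (not taken from the paper): a two-component parallel system evaluated first with a reliability-block-diagram formula and then with the steady-state solution of the equivalent Markov chain. Failure and repair rates are invented.

        import numpy as np

        lam, mu = 0.01, 0.5          # per-hour failure and repair rates (assumed)
        A1 = mu / (lam + mu)         # single-component steady-state availability

        # Non-state-space (RBD): parallel system is down only if both components are down.
        A_rbd = 1 - (1 - A1) ** 2

        # State-space (CTMC over number of failed components: 0, 1, 2), independent repair.
        Q = np.array([[-2 * lam, 2 * lam, 0.0],
                      [mu, -(mu + lam), lam],
                      [0.0, 2 * mu, -2 * mu]])
        # Solve pi Q = 0 with sum(pi) = 1.
        A_aug = np.vstack([Q.T, np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A_aug, b, rcond=None)
        A_ctmc = pi[0] + pi[1]       # system up if at least one component works

        print(f"RBD availability:  {A_rbd:.6f}")
        print(f"CTMC availability: {A_ctmc:.6f}")

    With independent repair the two answers coincide; with a single shared repair facility the transition rates change and the closed-form RBD answer no longer applies, which is where the state-space model becomes necessary.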

  13. Quantification of the optical surface reflection and surface roughness of articular cartilage using optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Saarakkala, Simo; Wang Shuzhe; Huang Yanping; Zheng Yongping [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong (China)], E-mail: simo.saarakkala@uku.fi, E-mail: ypzheng@ieee.org

    2009-11-21

    Optical coherence tomography (OCT) is a promising new technique for characterizing the structural changes of articular cartilage in osteoarthritis (OA). The calculation of quantitative parameters from the OCT signal is an important step in developing OCT as an effective diagnostic technique. In this study, two novel parameters for the quantification of optical surface reflection and surface roughness from OCT measurements are introduced: the optical surface reflection coefficient (ORC), describing the ratio of the optical reflection from the cartilage surface to that from a reference material, and the OCT roughness index (ORI), indicating the smoothness of the cartilage surface. The sensitivity of ORC and ORI to detect changes in bovine articular cartilage samples after enzymatic degradation of collagen and proteoglycans, using collagenase and trypsin enzymes respectively, was tested in vitro. A significant decrease (p < 0.001) in ORC as well as a significant increase (p < 0.001) in ORI was observed after collagenase digestion. After trypsin digestion, no significant changes in ORC or ORI were observed. To conclude, the new parameters introduced were demonstrated to be feasible and sensitive for detecting typical OA-like degenerative changes in the collagen network. From the clinical point of view, the quantification of OCT measurements is of great interest since OCT probes have already been miniaturized and applied in patient studies during arthroscopy or open knee surgery in vivo. Further studies are still necessary to demonstrate the clinical capability of the introduced parameters for naturally occurring early OA changes in cartilage.

  14. Unified broadcast in sensor networks

    DEFF Research Database (Denmark)

    Hansen, Morten Tranberg; Jurdak, Raja; Kusy, Branislav

    2011-01-01

    of the network stack. UB is implemented as a transparent layer between the link and network layers, where it delays, schedules, and combines broadcasts from upper layer protocols before transmission on the wireless channel. Our empirical results in simulation and on a testbed show that UB can decrease...... the overall packet transmissions in the network by more than 60%, corresponding to more than 40% energy savings, without requiring new interfaces or affecting the correctness of the upper layer protocols....

  15. Rigid 3D-3D registration of TOF MRA integrating vessel segmentation for quantification of recurrence volumes after coiling cerebral aneurysm

    International Nuclear Information System (INIS)

    Saering, Dennis; Forkert, Nils Daniel; Fiehler, Jens; Ries, Thorsten

    2012-01-01

    A fast and reproducible quantification of the recurrence volume of coiled aneurysms is required to enable a more timely evaluation of new coils. This paper presents two registration schemes for the semi-automatic quantification of aneurysm recurrence volumes based on baseline and follow-up 3D MRA TOF datasets. The quantification of shape changes requires a previous definition of corresponding structures in both datasets. For this, two different rigid registration methods have been developed and evaluated. Besides a state-of-the-art rigid registration method, a second approach integrating vessel segmentations is presented. After registration, the aneurysm recurrence volume can be calculated based on the difference image. The computed volumes were compared to manually extracted volumes. An evaluation based on 20 TOF MRA datasets (baseline and follow-up) of ten patients showed that both registration schemes are generally capable of providing sufficient registration results. Regarding the quantification of aneurysm recurrence volumes, the results suggest that the second segmentation-based registration method yields better results, while a reduction of the computation and interaction time is achieved at the same time. The proposed registration scheme incorporating vessel segmentation enables an improved quantification of recurrence volumes of coiled aneurysms with reduced computation and interaction time. (orig.)
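
    The final step mentioned above (computing a recurrence volume from the difference image) reduces, in the simplest reading, to counting supra-threshold voxels and multiplying by the voxel volume; the threshold, voxel size and array contents below are placeholders, not the paper's parameters.

        import numpy as np

        def recurrence_volume_ml(baseline, followup, threshold, voxel_mm3):
            """Volume (ml) of voxels whose intensity increased by more than `threshold`."""
            diff = followup.astype(float) - baseline.astype(float)
            n_voxels = int(np.count_nonzero(diff > threshold))
            return n_voxels * voxel_mm3 / 1000.0   # 1000 mm³ = 1 ml

        # Toy registered 3D TOF MRA volumes, invented data.
        rng = np.random.default_rng(0)
        baseline = rng.normal(100, 5, (64, 64, 32))
        followup = baseline.copy()
        followup[30:34, 30:34, 10:14] += 60        # simulated recurrent flow signal

        print(recurrence_volume_ml(baseline, followup, threshold=30, voxel_mm3=0.5), "ml")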

  16. Closure requirements

    International Nuclear Information System (INIS)

    Hutchinson, I.P.G.; Ellison, R.D.

    1992-01-01

    Closure of a waste management unit can be either permanent or temporary. Permanent closure may be due to: economic factors which make it uneconomical to mine the remaining minerals; depletion of mineral resources; physical site constraints that preclude further mining and beneficiation; environmental, regulatory or other requirements that make it uneconomical to continue to develop the resources. Temporary closure can occur for a period of several months to several years, and may be caused by factors such as: periods of high rainfall or snowfall which prevent mining and waste disposal; economic circumstances which temporarily make it uneconomical to mine the target mineral; labor problems requiring a cessation of operations for a period of time; construction activities that are required to upgrade project components such as the process facilities and waste management units; and mine or process plant failures that require extensive repairs. Permanent closure of a mine waste management unit involves the provision of durable surface containment features to protect the waters of the State in the long-term. Temporary closure may involve activities that range from ongoing maintenance of the existing facilities to the installation of several permanent closure features in order to reduce ongoing maintenance. This paper deals with the permanent closure features

  17. Developing A Generic Optical Avionic Network

    DEFF Research Database (Denmark)

    Zhang, Jiang; An, Yi; Berger, Michael Stübert

    2011-01-01

    We propose a generic optical network design for future avionic systems in order to reduce the weight and power consumption of current networks on board. A three-layered network structure over a ring optical network topology is suggested, as it can provide full reconfiguration flexibility...... and support a wide range of avionic applications. Segregation can be made on different hierarchies according to system criticality and security requirements. The structure of each layer is discussed in detail. Two network configurations are presented, focusing on how to support different network services...... by such a network. Finally, three redundancy scenarios are discussed and compared....

  18. Networked Microgrids Scoping Study

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dobriansky, Larisa [General MicroGrids, San Diego, CA (United States); Glover, Steve [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Liu, Chen-Ching [Washington State Univ., Pullman, WA (United States); Looney, Patrick [Brookhaven National Lab. (BNL), Upton, NY (United States); Mashayekh, Salman [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Pratt, Annabelle [National Renewable Energy Lab. (NREL), Golden, CO (United States); Schneider, Kevin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Stadler, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Starke, Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Jianhui [Argonne National Lab. (ANL), Argonne, IL (United States); Yue, Meng [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-12-05

    Much like individual microgrids, the range of opportunities and potential architectures of networked microgrids is very diverse. The goals of this scoping study are to provide an early assessment of research and development needs by examining the benefits of, risks created by, and risks to networked microgrids. At this time there are very few, if any, examples of deployed microgrid networks. In addition, there are very few tools to simulate or otherwise analyze the behavior of networked microgrids. In this setting, it is very difficult to evaluate networked microgrids systematically or quantitatively. At this early stage, the study relies on inputs, estimations, and literature reviews by subject matter experts who are engaged in individual microgrid research and development projects, i.e., the authors of this study. The initial step of the study gathered input about the potential opportunities provided by networked microgrids from these subject matter experts. These opportunities were divided among the subject matter experts for further review. Part 2 of this study comprises these reviews. Part 1 is a summary of the benefits and risks identified in the reviews in Part 2 and a synthesis of the research needs required to enable networked microgrids.

  19. Urban networks of tomorrow

    International Nuclear Information System (INIS)

    Bothe, D; Kaufmann, T.

    2016-01-01

    The requirements for urban utility grids are subject to considerable change. The diversification of the energy supply and the changing feed-in structure (from central to decentralized) also considerably influence the operation of the existing networks. Therefore, the focus of future studies will be on the flexibility of energy supply and on energy-carrier-wide network analysis and planning. These aspects are addressed, among other things, within the URBEM project, with a focus on a holistic, interdisciplinary approach. On the basis of separately performed thermal and electrical network calculations, an optimization task is defined (for example, minimization of operating resources or minimization of CO2 emissions) and solved under technical constraints. The scenarios for the periods 2030 and 2050 developed in the URBEM project serve as the basis for the optimization. The results of the calculations show current utilization and bottlenecks in the supply networks as well as optimal future supply structures for development areas in urban regions.

  20. Broadband network selection issues

    Science.gov (United States)

    Leimer, Michael E.

    1996-01-01

    Selecting the best network for a given cable or telephone company provider is not as obvious as it appears. The cost and performance trades between Hybrid Fiber Coax (HFC), Fiber to the Curb (FTTC) and Asymmetric Digital Subscriber Line (ADSL) networks lead to very different choices based on the existing plant and the expected interactive subscriber usage model. This paper presents some of the issues and trades that drive network selection. The majority of the Interactive Television trials currently underway or planned are based on HFC networks. As a throw-away market trial or a short-term strategic incursion into a cable market, HFC may make sense. In the long run, if interactive services see high demand, HFC costs per node and an ever-shrinking neighborhood node size needed to service large numbers of subscribers make FTTC appear attractive. For example, thirty-three 64-QAM modulators are required to fill the 550 MHz to 750 MHz spectrum with compressed video streams in 6 MHz channels. This large amount of hardware at each node drives not only initial build-out costs, but operations and maintenance costs as well. FTTC, with its potential for digitally switching large amounts of bandwidth to a given home, offers the potential to grow with the interactive subscriber base at lower downstream cost. Integrated telephony on these networks is an issue that appears to be an afterthought for most of the networks being selected at the present time. The major players seem to be videocentric and include telephony as a simple add-on later. This may be a reasonable viewpoint for the telephone companies that plan to leave their existing phone networks untouched. However, a phone company planning a network upgrade or a cable company jumping into the telephony business needs to carefully weigh the cost and performance issues of the various network choices. Each network type provides varying capability in both upstream and downstream bandwidth for voice channels. The noise characteristics
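
    The modulator count quoted in the abstract follows directly from the bandwidth arithmetic:

        \frac{750\ \mathrm{MHz} - 550\ \mathrm{MHz}}{6\ \mathrm{MHz\ per\ channel}} \approx 33.3 \ \Rightarrow\ 33\ \text{64-QAM modulators per node.}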