WorldWideScience

Sample records for networks requires quantification

  1. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report outlines techniques for extending benchmark generation products so that they support uncertainty quantification by the systems being benchmarked. We describe how uncertainty quantification requirements can be presented to candidate analytical tools that support SPARQL, describe benchmark data sets for evaluating uncertainty quantification, and outline an approach for using our benchmark generator to produce such data sets.
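
    As a rough illustration of how uncertainty annotations might be exposed to a SPARQL-capable analytical tool, the sketch below builds a tiny in-memory RDF graph whose statements carry confidence values and filters them with a SPARQL query; the vocabulary (the ex:confidence property) and the data are invented and are not taken from the benchmark generator described in the report.

```python
# Minimal sketch with an assumed vocabulary: querying confidence-annotated
# statements with SPARQL via rdflib. The ex:confidence property is hypothetical.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/> .
ex:stmt1 ex:subject ex:alice ; ex:claims ex:memberOfGroupA ; ex:confidence 0.92 .
ex:stmt2 ex:subject ex:bob   ; ex:claims ex:memberOfGroupA ; ex:confidence 0.41 .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# Keep only statements whose confidence exceeds a threshold -- the kind of
# filter an uncertainty-aware benchmark query might exercise.
QUERY = """
PREFIX ex: <http://example.org/>
SELECT ?s ?conf WHERE {
  ?s ex:confidence ?conf .
  FILTER (?conf > 0.5)
}
"""
for row in g.query(QUERY):
    print(row.s, float(row.conf))
```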

  2. Future Home Network Requirements

    DEFF Research Database (Denmark)

    Charbonnier, Benoit; Wessing, Henrik; Lannoo, Bart

    This paper presents the requirements for future Home Area Networks (HAN). Firstly, we discuss the applications and services as well as their requirements. Then, usage scenarios are devised to establish a first specification for the HAN. The main requirements are an increased bandwidth (towards 1...

  3. NP Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Rotman, Lauren [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States); Tierney, Brian [Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States)

    2011-08-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. To support SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2011, ESnet and the Office of Nuclear Physics (NP), of the DOE SC, organized a workshop to characterize the networking requirements of the programs funded by NP. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  4. BES Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Biocca, Alan; Carlson, Rich; Chen, Jackie; Cotter, Steve; Tierney, Brian; Dattoria, Vince; Davenport, Jim; Gaenko, Alexander; Kent, Paul; Lamm, Monica; Miller, Stephen; Mundy, Chris; Ndousse, Thomas; Pederson, Mark; Perazzo, Amedeo; Popescu, Razvan; Rouson, Damian; Sekine, Yukiko; Sumpter, Bobby; Dart, Eli; Wang, Cai-Zhuang -Z; Whitelam, Steve; Zurawski, Jason

    2011-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years.

  5. BER Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Alapaty, Kiran; Allen, Ben; Bell, Greg; Benton, David; Brettin, Tom; Canon, Shane; Dart, Eli; Cotter, Steve; Crivelli, Silvia; Carlson, Rich; Dattoria, Vince; Desai, Narayan; Egan, Richard; Tierney, Brian; Goodwin, Ken; Gregurick, Susan; Hicks, Susan; Johnston, Bill; de Jong, Bert; Kleese van Dam, Kerstin; Livny, Miron; Markowitz, Victor; McGraw, Jim; McCord, Raymond; Oehmen, Chris; Regimbal, Kevin; Shipman, Galen; Strand, Gary; Flick, Jeff; Turnbull, Susan; Williams, Dean; Zurawski, Jason

    2010-11-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2010 ESnet and the Office of Biological and Environmental Research, of the DOE Office of Science, organized a workshop to characterize the networking requirements of the science programs funded by BER. The requirements identified at the workshop are summarized and described in more detail in the case studies and the Findings section. A number of common themes emerged from the case studies and workshop discussions. One is that BER science, like many other disciplines, is becoming more and more distributed and collaborative in nature. Another common theme is that data set sizes are exploding. Climate Science in particular is on the verge of needing to manage exabytes of data, and Genomics is on the verge of a huge paradigm shift in the number of sites with sequencers and the amount of sequencer data being generated.

  6. ASCR Science Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Tierney, Brian

    2009-08-24

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In April 2009, ESnet and the Office of Advanced Scientific Computing Research (ASCR), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by ASCR. The ASCR facilities anticipate significant increases in wide area bandwidth utilization, driven largely by the increased capabilities of computational resources and the wide scope of collaboration that is a hallmark of modern science. Many scientists move data sets between facilities for analysis, and in some cases (for example the Earth System Grid and the Open Science Grid), data distribution is an essential component of the use of ASCR facilities by scientists. Due to the projected growth in wide area data transfer needs, the ASCR supercomputer centers all expect to deploy and use 100 Gigabit per second networking technology for wide area connectivity as soon as that deployment is financially feasible. In addition to the network connectivity that ESnet provides, the ESnet Collaboration Services (ECS) are critical to several science communities. ESnet identity and trust services, such as the DOEGrids certificate authority, are widely used both by the supercomputer centers and by collaborations such as Open Science Grid (OSG) and the Earth System Grid (ESG). Ease of use is a key determinant of the scientific utility of network-based services. Therefore, a key enabling aspect for scientists' beneficial use of high

  7. Fusion Energy Sciences Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli [ESNet, Berkeley, CA (United States); Tierney, Brian [ESNet, Berkeley, CA (United States)

    2012-09-26

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In December 2011, ESnet and the Office of Fusion Energy Sciences (FES), of the DOE Office of Science (SC), organized a workshop to characterize the networking requirements of the programs funded by FES. The requirements identified at the workshop are summarized in the Findings section, and are described in more detail in the body of the report.

  8. Pediatric Nutritional Requirements Determination with Neural Networks

    OpenAIRE

    Karlık, Bekir; Ece, Aydın

    1998-01-01

    To calculate the daily nutritional requirements of children, a computer program has been developed based upon a neural network. Three parameters, daily protein, energy and water requirements, were calculated through trained artificial neural networks using a database of 312 children. The results were compared with those calculated from the dietary requirement tables of the World Health Organisation. No significant difference was found between the two calculations. In conclusion, a simple neural network may ...
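
    The abstract does not specify the network architecture or inputs; purely as an illustration of the general approach (a small feed-forward regressor mapping child characteristics to daily protein, energy and water requirements), a minimal sketch with assumed inputs and synthetic data standing in for the 312-child database might look like this:

```python
# Minimal sketch, not the authors' model: a feed-forward network mapping
# child characteristics (age, weight, height -- assumed inputs) to daily
# protein, energy and water requirements. Synthetic data replaces the
# 312-child database used in the paper.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform([1, 8, 70], [15, 60, 170], size=(312, 3))   # age (y), weight (kg), height (cm)
# Crude stand-in "table" targets: protein (g), energy (kcal), water (mL)
y = np.column_stack([X[:, 1] * 1.1,
                     X[:, 1] * 70.0,
                     X[:, 1] * 90.0])

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)

child = scaler.transform([[6, 20, 115]])     # one hypothetical child
print(model.predict(child))                  # [protein, energy, water] estimate
```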

  9. Biological and Environmental Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Balaji, V. [Princeton Univ., NJ (United States). Earth Science Grid Federation (ESGF); Boden, Tom [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cowley, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dart, Eli [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Dattoria, Vince [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Desai, Narayan [Argonne National Lab. (ANL), Argonne, IL (United States); Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Foster, Ian [Argonne National Lab. (ANL), Argonne, IL (United States); Goldstone, Robin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gregurick, Susan [U.S. Dept. of Energy, Washington, DC (United States). Biological Systems Science Division; Houghton, John [U.S. Dept. of Energy, Washington, DC (United States). Biological and Environmental Research (BER) Program; Izaurralde, Cesar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnston, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Joseph, Renu [U.S. Dept. of Energy, Washington, DC (United States). Climate and Environmental Sciences Division; Kleese-van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lipton, Mary [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Monga, Inder [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Pritchard, Matt [British Atmospheric Data Centre (BADC), Oxon (United Kingdom); Rotman, Lauren [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Strand, Gary [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Stuart, Cory [Argonne National Lab. (ANL), Argonne, IL (United States); Tatusova, Tatiana [National Inst. of Health (NIH), Bethesda, MD (United States); Tierney, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Thomas, Brian [Univ. of California, Berkeley, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zurawski, Jason [Internet2, Washington, DC (United States)

    2013-09-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

  10. Requirements for a network storage service

    Science.gov (United States)

    Kelly, Suzanne M.; Haynes, Rena A.

    1992-01-01

    Sandia National Laboratories provides a high-performance classified computer network as a core capability in support of its mission of nuclear weapons design and engineering, physical sciences research, and energy research and development. The network, locally known as the Internal Secure Network (ISN), was designed in 1989 and comprises multiple distributed local area networks (LANs) residing in Albuquerque, New Mexico and Livermore, California. The TCP/IP protocol suite is used for inter-node communications. Scientific workstations and mid-range computers running UNIX-based operating systems make up most of the LANs. One LAN, operated by the Sandia Corporate Computing Directorate, is a general-purpose resource providing a supercomputer and a file server to the entire ISN. The current file server on the supercomputer LAN is an implementation of the Common File System (CFS) developed by Los Alamos National Laboratory. Subsequent to the design of the ISN, Sandia reviewed its mass storage requirements and chose to enter into a competitive procurement to replace the existing file server with one more adaptable to a UNIX/TCP/IP environment. The requirements study for the network was the starting point for the requirements study for the new file server. The file server is called the Network Storage Service (NSS), and its requirements are described in this paper. The next section gives an application or functional description of the NSS. The final section adds performance, capacity, and access constraints to the requirements.

  11. Quantification of De-anonymization Risks in Social Networks

    OpenAIRE

    Lee, Wei-Han; Liu, Changchang; Ji, Shouling; Mittal, Prateek; Lee, Ruby

    2017-01-01

    The risks of publishing privacy-sensitive data have received considerable attention recently. Several de-anonymization attacks have been proposed to re-identify individuals even if data anonymization techniques were applied. However, there is no theoretical quantification for relating the data utility that is preserved by the anonymization techniques and the data vulnerability against de-anonymization attacks. In this paper, we theoretically analyze the de-anonymization attacks and provide co...

  12. HEP Science Network Requirements--Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bakken, Jon; Barczyk, Artur; Blatecky, Alan; Boehnlein, Amber; Carlson, Rich; Chekanov, Sergei; Cotter, Steve; Cottrell, Les; Crawford, Glen; Crawford, Matt; Dart, Eli; Dattoria, Vince; Ernst, Michael; Fisk, Ian; Gardner, Rob; Johnston, Bill; Kent, Steve; Lammel, Stephan; Loken, Stewart; Metzger, Joe; Mount, Richard; Ndousse-Fetter, Thomas; Newman, Harvey; Schopf, Jennifer; Sekine, Yukiko; Stone, Alan; Tierney, Brian; Tull, Craig; Zurawski, Jason

    2010-04-27

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009, ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The international HEP community has been a leader in data-intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans

  13. Digital video technologies and their network requirements

    Energy Technology Data Exchange (ETDEWEB)

    R. P. Tsang; H. Y. Chen; J. M. Brandt; J. A. Hutchins

    1999-11-01

    Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit-rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications that use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering, and network bandwidth to the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.

  14. Soil Pore Network Visualisation and Quantification using ImageJ

    DEFF Research Database (Denmark)

    Garbout, Amin; Pajor, Radoslaw; Otten, Wilfred

    … strategies to preserve this limited resource. Many of those processes occur at micro scales. For long, our ability to study soils non-destructively at microscopic scales has been limited, but recent developments in the use of X-ray Computed Tomography have offered great opportunities to quantify the 3-D … Computed Tomography data. We used ImageJ to analyze images of pore geometries in soils generated by X-ray micro Computed Tomography. Soil samples were scanned at 30 μm resolution, and we produced replicated samples with different pore geometries by packing different-sized soil aggregates at pre… … in the input image. Several parameters (number of networks, junctions, branches, …) were used to describe the networks in the sample, and we discuss how these can be used to describe soil structure. Keywords: soil, networks, pore, X-ray micro Computed Tomography, Skeletonize3D

  15. A neural network approach for fast, automated quantification of DIR performance.

    Science.gov (United States)

    Neylon, John; Min, Yugang; Low, Daniel A; Santhanam, Anand

    2017-08-01

    … model able to predict the target registration error (TRE) for given ISM values. The cost functions for sub-volumes enclosing critical radiotherapy structures in the head-and-neck region were computed and compared with the ground-truth TRE values. When examining different combinations of registration parameters for a single DIR, the neural network was able to quantify DIR error to within a single voxel for 95% of the sub-volumes examined. In addition, correlations between the neural network predicted error and the ground-truth TRE for the Planning Target Volume and the parotid contours were consistently observed to be > 0.9. For variations in posture and tumor regression for 10 different patients, patient-specific neural networks predicted the TRE to within a single voxel > 90% of the time on average. The formulation presented in this paper demonstrates the ability for fast, accurate quantification of registration performance. The DNN provided the necessary level of abstraction to estimate a quantified TRE from the ISM expectations described above, when sufficiently trained on annotated data. In addition, biomechanical models provided the DNN with the required variations in patient posture and physiological regression. With further development and validation on clinical patient data, such networks have potential impact on patient- and site-specific optimization and on streamlining clinical registration validation. © 2017 American Association of Physicists in Medicine.

  16. Science-Driven Network Requirements for ESnet

    Energy Technology Data Exchange (ETDEWEB)

    Adams, Paul [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Canon, Shane [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States).; Carter, Steven [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States).; Dart, Eli [ESNet, Berkeley, CA (United States); Draney, Brent [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Greenwald, Martin [MIT (Massachusetts Inst. of Technology), Cambridge, MA (United States); Hodges, Jason [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States).; Lauret, Jerome [Brookhaven National Lab. (BNL), Upton, NY (United States); Michaels, George [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rahn, Larry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schissel, David [General Atomics, San Diego, CA (United States); Strand, Gary [National Center for Atmospheric Reserch, Boulder, CO (United States); Walter, Howard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wehner, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2006-02-21

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities and scientists that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2002, the DOE Office of Science organized a workshop to characterize the networking requirements for Office of Science programs. Networking and middleware requirements were solicited from a representative group of science programs. The workshop was summarized in two documents: the workshop final report and a set of appendixes. This document updates the networking requirements for ESnet as put forward by the science programs listed in the 2002 workshop report. In addition, three new programs have been added. The information was gathered through interviews with knowledgeable scientists in each particular program or field.

  17. Advanced Scientific Computing Research Network Requirements: ASCR Network Requirements Review Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Charles [Argonne National Lab. (ANL), Argonne, IL (United States); Bell, Greg [ESnet, Berkeley, CA (United States); Canon, Shane [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [ESnet, Berkeley, CA (United States); Dattoria, Vince [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Goodwin, Dave [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Lee, Jason [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hicks, Susan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Holohan, Ed [Argonne National Lab. (ANL), Argonne, IL (United States); Klasky, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Lauzon, Carolyn [Dept. of Energy (DOE), Washington DC (United States). Office of Science. Advanced Scientific Computing Research (ASCR); Rogers, Jim [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Skinner, David [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Tierney, Brian [ESnet, Berkeley, CA (United States)

    2013-03-08

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In October 2012, ESnet and the Office of Advanced Scientific Computing Research (ASCR) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the ASCR program office. The requirements identified at the review are summarized in the Findings section, and are described in more detail in the body of the report.

  18. Requirements for a Network Storage Service in a supercomputer environment

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, S.M.

    1991-09-26

    Sandia National Laboratories has completed a requirements study for a networked mass storage system. The areas of user functionality, network connectivity, and performance were analyzed to determine specifications for a Network Storage Service to operate in a supercomputer environment. 4 refs.

  19. VESGEN 2D: Automated, User-Interactive Software for Vascular Quantification and Mapping of Angiogenic and Lymphangiogenic Trees and Networks

    Science.gov (United States)

    Vickerman, Mary B.; Keith, Patricia A.; McKay, Terri L.; Gedeon, Dan J.; Watanabe, Michiko; Montano, Monica; Karunamuni, Ganga; Kaiser, Peter K.; Sears, Jonathan E.; Ebrahem, Quteba; Ribita, Daniela; Hylton, Alan G.; Parsons-Wingerter, Patricia

    2010-01-01

    Quantification of microvascular remodeling as a meaningful discovery tool requires mapping and measurement of site-specific changes within vascular trees and networks. Vessel density and other critical vascular parameters are often modulated by molecular regulators as determined by local vascular architecture. For example, enlargement of vessel diameter by vascular endothelial growth factor (VEGF) is restricted to specific generations of vessel branching (Microvascular Research 72(3):91, 2006). The averaging of vessel diameter over many successively smaller generations is therefore not particularly useful. The newly automated, user-interactive software VESGEN (VESsel GENeration Analysis) quantifies major vessel parameters within two-dimensional (2D) vascular trees, networks, and tree-network composites. This report reviews application of VESGEN 2D to angiogenic and lymphangiogenic tissues that includes the human and murine retina, embryonic coronary vessels, and avian chorioallantoic membrane (CAM). Software output includes colorized image maps with quantification of local vessel diameter, fractal dimension, tortuosity and avascular spacing. The density of parameters such as vessel area, length, number and branch point are quantified according to site-specific generational branching within vascular trees. The sole user input requirement is a binary (black/white) vascular image. Future applications of VESGEN will include analysis of 3D vascular architecture and bioinformatic dimensions such as blood flow and receptor localization. Branching analysis by VESGEN has demonstrated that numerous regulators including VEGF165, basic fibroblast growth factor (bFGF), transforming growth factor β-1 (TGFβ-1), angiostatin and the clinical steroid triamcinolone acetonide induce ‘fingerprint’ or ‘signature’ changes in vascular patterning that provide unique readouts of dominant molecular signaling. PMID:19248164

  20. Belle-II Experiment Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Asner, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Bell, Greg [ESnet; Carlson, Tim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Cowley, David [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Dart, Eli [ESnet; Erwin, Brock [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Godang, Romulus [Univ. of South Alabama, Mobile, AL (United States); Hara, Takanori [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Johnson, Jerry [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Johnson, Ron [Univ. of Washington, Seattle, WA (United States); Johnston, Bill [ESnet; Dam, Kerstin Kleese-van [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Kaneko, Toshiaki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Kubota, Yoshihiro [NII; Kuhr, Thomas [Karlsruhe Inst. of Technology (KIT) (Germany); McCoy, John [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Miyake, Hideki [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Monga, Inder [ESnet; Nakamura, Motonori [NII; Piilonen, Leo [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Pordes, Ruth [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ray, Douglas [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Russell, Richard [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schram, Malachi [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Schroeder, Jim [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Sevior, Martin [Univ. of Melbourne (Australia); Singh, Surya [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Suzuki, Soh [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Sasaki, Takashi [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan); Williams, Jim [Indiana Univ., Bloomington, IN (United States)

    2013-05-28

    The Belle experiment, part of a broad-based search for new physics, is a collaboration of ~400 physicists from 55 institutions across four continents. The Belle detector is located at the KEKB accelerator in Tsukuba, Japan. The Belle detector was operated at the asymmetric electron-positron collider KEKB from 1999-2010. The detector accumulated more than 1 ab⁻¹ of integrated luminosity, corresponding to more than 2 PB of data near 10 GeV center-of-mass energy. Recently, KEK has initiated a $400 million accelerator upgrade to be called SuperKEKB, designed to produce instantaneous and integrated luminosity two orders of magnitude greater than KEKB. The new international collaboration at SuperKEKB is called Belle II. The first data from Belle II/SuperKEKB are expected in 2015. In October 2012, senior members of the Belle-II collaboration gathered at PNNL to discuss the computing and networking requirements of the Belle-II experiment with ESnet staff and other computing and networking experts. The day-and-a-half-long workshop characterized the instruments and facilities used in the experiment, the process of science for Belle-II, and the computing and networking equipment and configuration requirements to realize the full scientific potential of the collaboration's work.

  1. Quantification of soil pore network complexity with X-ray computed tomography and gas transport measurements

    DEFF Research Database (Denmark)

    Katuwal, Sheela; Arthur, Emmanuel; Tuller, M.

    2015-01-01

    Flow and transport of gases through soils are largely controlled by pore structural attributes. The quantification of pore network characteristics is therefore essential for accurate prediction of air permeability and gas diffusivity. In this study, the pore network characteristics of seven different soils subjected to 22 mo of field regeneration were quantified with X-ray computed tomography (CT) and compared with functional pore characteristics estimated from measurements of air permeability and gas diffusivity. Furthermore, predictive models for air permeability and gas diffusivity were developed based on CT-derived structural parameters and compared with previously proposed predictive models. Strong correlations between functional and pore geometry parameters were observed. The consideration of CT-derived air-filled porosity, pore network tortuosity and connectivity, and minimum…

  2. What would dense atmospheric observation networks bring to the quantification of city CO2 emissions?

    Science.gov (United States)

    Wu, Lin; Broquet, Grégoire; Ciais, Philippe; Bellassen, Valentin; Vogel, Felix; Chevallier, Frédéric; Xueref-Remy, Irène; Wang, Yilong

    2016-06-01

    Cities, which currently cover only a very small portion of the global land surface, directly release to the atmosphere about 44 % of global energy-related CO2, but they are associated with 71-76 % of CO2 emissions from global final energy use. Although many cities have set voluntary climate plans, their CO2 emissions are not evaluated by the monitoring, reporting, and verification (MRV) procedures that play a key role for market- or policy-based mitigation actions. Here we analyze the potential of a monitoring tool that could support the development of such procedures at the city scale. It is based on an atmospheric inversion method that exploits inventory data and continuous atmospheric CO2 concentration measurements from a network of stations within and around cities to estimate city CO2 emissions. This monitoring tool is configured for the quantification of the total and sectoral CO2 emissions in the Paris metropolitan area (˜ 12 million inhabitants and 11.4 TgC emitted in 2010) during the month of January 2011. Its performance is evaluated in terms of uncertainty reduction based on observing system simulation experiments (OSSEs). It is analyzed as a function of the number of sampling sites (measuring at 25 m a.g.l.) and as a function of the network design. The instruments presently used to measure CO2 concentrations at research stations are expensive (typically ˜ EUR 50 k per sensor), which has limited the few current pilot city networks to around 10 sites. Larger theoretical networks are studied here to assess the potential benefit of hypothetical operational lower-cost sensors. The setup of our inversion system is based on a number of diagnostics and assumptions from previous city-scale inversion experiences with real data. We find that, given our assumptions underlying the configuration of the OSSEs, with 10 stations only, the uncertainty for the total city CO2 emission during 1 month is significantly reduced by the inversion, by ˜ 42 %. It can be further reduced by
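
    For readers unfamiliar with how an OSSE yields an "uncertainty reduction" figure, the toy sketch below computes the posterior uncertainty of aggregated emissions from an assumed prior covariance B, observation operator H and observation-error covariance R; the dimensions and error levels are invented and do not correspond to the Paris network configuration.

```python
# Toy sketch of the uncertainty-reduction metric used in OSSEs (not the
# actual Paris inversion set-up): posterior covariance of emissions is
# A = (B^-1 + H^T R^-1 H)^-1, and the reduction is 1 - sigma_post/sigma_prior.
import numpy as np

rng = np.random.default_rng(1)
n_emis, n_obs = 8, 40                 # assumed: 8 emission sectors, 40 CO2 observations

B = np.diag(np.full(n_emis, 0.3**2))          # prior emission uncertainty (30 %)
H = rng.uniform(0.0, 1.0, (n_obs, n_emis))    # stand-in transport/footprint operator
R = np.diag(np.full(n_obs, 0.5**2))           # observation + model error covariance

A = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

agg = np.ones(n_emis)                 # aggregate sectors to the total city emission
sigma_prior = np.sqrt(agg @ B @ agg)
sigma_post = np.sqrt(agg @ A @ agg)
print(f"uncertainty reduction: {1 - sigma_post / sigma_prior:.1%}")
```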

  3. Advanced communication and network requirements in Europe

    DEFF Research Database (Denmark)

    Falch, Morten; Enemark, Rasmus

    The report addresses the diffusion of new tele-applications, focusing on potential use and the potential tele-traffic generated as a consequence. The applications investigated are: teleworking, distance learning, research and university networks, applications aimed at SMEs, health networks, a trans-European public administration network, a city information highway, road-traffic management, air traffic control and electronic quotation.

  4. Network science quantification of resilience demonstrated on the Indian Railways Network

    CERN Document Server

    Bhatia, Udit; Kodra, Evan; Ganguly, Auroop R

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While the response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science-based quantitative methods framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. The methods are demonstrated through the resilience of the network to natural or human-induced hazards and electric grid failure, with simulations inspired by the 2004 Indian Ocean Tsunami, the 2012 North Indian blackout, and a cyber-physical attack scenario. Multiple metrics are used to generate various recovery strateg...

  5. Concept and QoS requirements in 5G networks

    OpenAIRE

    Tikhvinskiy, Valery; Bochechka, Grigory

    2014-01-01

    In this article, the requirements for a number of KPIs that determine the quality of service in 5G networks are formulated. The proposed QoS requirements are based on an analysis of the functional requirements for 5G networks and of traffic parameters for HD video and massive M2M services, which will be in high demand in 2020. One of the 5G development paradigms is the virtualization of network functions (VFN), including the cloud radio access network and the cloud core network. The authors have proposed th...

  6. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Science.gov (United States)

    Bhatia, Udit; Kumar, Devashish; Kodra, Evan; Ganguly, Auroop R

    2015-01-01

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.
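
    A toy illustration of comparing recovery strategies with network science is sketched below: after a simulated disruption, failed nodes are restored either in arbitrary order or in order of betweenness centrality, and the relative size of the largest connected component is tracked at each step. The graph is a synthetic small-world network, not the Indian Railways Network, and the hazard and metrics are placeholders.

```python
# Illustrative sketch only (toy graph, not the Indian Railways Network):
# compare two recovery orderings after a simulated disruption and track how
# quickly the largest connected component returns to full size.
import networkx as nx

G = nx.connected_watts_strogatz_graph(200, 4, 0.1, seed=42)
failed = list(G.nodes)[:40]                     # simulated hazard: 40 nodes go down

def recovery_curve(order):
    """Fraction of nodes in the giant component after each recovery step."""
    H = G.copy()
    H.remove_nodes_from(failed)
    curve = []
    for node in order:
        H.add_node(node)
        H.add_edges_from((node, nbr) for nbr in G.neighbors(node) if nbr in H)
        curve.append(len(max(nx.connected_components(H), key=len)) / G.number_of_nodes())
    return curve

centrality = nx.betweenness_centrality(G)
by_centrality = sorted(failed, key=centrality.get, reverse=True)

print("arbitrary order :", [round(x, 2) for x in recovery_curve(failed)[::10]])
print("by centrality   :", [round(x, 2) for x in recovery_curve(by_centrality)[::10]])
```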

  7. Network Science Based Quantification of Resilience Demonstrated on the Indian Railways Network.

    Directory of Open Access Journals (Sweden)

    Udit Bhatia

    The structure, interdependence, and fragility of systems ranging from power-grids and transportation to ecology, climate, biology and even human communities and the Internet have been examined through network science. While response to perturbations has been quantified, recovery strategies for perturbed networks have usually been either discussed conceptually or through anecdotal case studies. Here we develop a network science based quantitative framework for measuring, comparing and interpreting hazard responses as well as recovery strategies. The framework, motivated by the recently proposed temporal resilience paradigm, is demonstrated with the Indian Railways Network. Simulations inspired by the 2004 Indian Ocean Tsunami and the 2012 North Indian blackout as well as a cyber-physical attack scenario illustrate hazard responses and effectiveness of proposed recovery strategies. Multiple metrics are used to generate various recovery strategies, which are simply sequences in which system components should be recovered after a disruption. Quantitative evaluation of these strategies suggests that faster and more efficient recovery is possible through network centrality measures. Optimal recovery strategies may be different per hazard, per community within a network, and for different measures of partial recovery. In addition, topological characterization provides a means for interpreting the comparative performance of proposed recovery strategies. The methods can be directly extended to other Large-Scale Critical Lifeline Infrastructure Networks including transportation, water, energy and communications systems that are threatened by natural or human-induced hazards, including cascading failures. Furthermore, the quantitative framework developed here can generalize across natural, engineered and human systems, offering an actionable and generalizable approach for emergency management in particular as well as for network resilience in general.

  8. A microsensor array for quantification of lubricant contaminants using a back propagation artificial neural network

    Science.gov (United States)

    Zhu, Xiaoliang; Du, Li; Liu, Bendong; Zhe, Jiang

    2016-06-01

    We present a method based on an electrochemical sensor array and a back-propagation artificial neural network for detection and quantification of four properties of lubrication oil, namely water (0, 500 ppm, 1000 ppm), total acid number (TAN) (13.1, 13.7, 14.4, 15.6 mg KOH g⁻¹), soot (0, 1%, 2%, 3%) and sulfur content (1.3%, 1.37%, 1.44%, 1.51%). The sensor array, consisting of four micromachined electrochemical sensors, detects the four properties with overlapping sensitivities. A total set of 36 oil samples containing mixtures of water, soot, and sulfuric acid with different concentrations was prepared for testing. The sensor array's responses were then divided into three sets: a training set (80% of the data), a validation set (10%) and a testing set (10%). Several back-propagation artificial neural network architectures were trained with the training and validation sets; one architecture with four input neurons, 50 and 5 neurons in the first and second hidden layers, and four neurons in the output layer was selected. The selected neural network was then tested using the testing set (10%). Test results demonstrated that the developed artificial neural network is able to quantitatively determine the four lubrication properties (water, TAN, soot, and sulfur content) with maximum prediction errors of 18.8%, 6.0%, 6.7%, and 5.4%, respectively, indicating a good match between the target and predicted values. With the developed network, the sensor array could potentially be used for online lubricant oil condition monitoring.
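
    The abstract specifies the selected architecture (four inputs, hidden layers of 50 and 5 neurons, four outputs) and an 80/10/10 data split; a minimal sketch of that configuration, with synthetic sensor responses standing in for the real calibration data, is shown below.

```python
# Sketch of the reported 4-50-5-4 back-propagation network on synthetic data;
# the real sensor responses and calibration targets are not reproduced here.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(36, 4))                       # 36 oil samples, 4 sensor channels
W = rng.normal(size=(4, 4))
y = X @ W + 0.05 * rng.normal(size=(36, 4))        # water, TAN, soot, sulfur (stand-ins)

# 80 % training, 10 % validation, 10 % test, as described in the abstract
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.2, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(50, 5), max_iter=20000, random_state=0)
net.fit(X_tr, y_tr)
rel_err = np.abs(net.predict(X_te) - y_te) / (np.abs(y_te) + 1e-9)
print("max relative prediction error:", rel_err.max())
```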

  9. The effect of grid size on the quantification of erosion, deposition, and rill network

    Directory of Open Access Journals (Sweden)

    Xiaoyu Lu

    2017-09-01

    Hillslope rill/interrill erosion has been investigated from the perspective of runoff transport of sediment. Recent advances in terrestrial laser scanning can provide high-resolution elevation data down to centimeter levels, and temporal digital elevation models (DEMs) have enabled the detection and quantification of sediment redistribution. Erosion and deposition are spatially heterogeneous across hillslopes, and the choice of resolution is critical when using a DEM to study the spatial pattern of the processes. This study investigates the influence of grid size on the sediment change calculation and rill network delineation, based on two surveys using a terrestrial laser scanner on a hillslope with well-developed rills in 2014 and 2015. Temporal DEMs were used to quantify elevation changes and to delineate rill networks. We produced DEM pairs of incremental grid sizes (1-cm, 2-cm, 5-cm, 8-cm, 10-cm, 15-cm, 20-cm, and 30-cm) for DEM differencing and rill network delineation. We used the 1-cm DEM as the reference to compare the results produced from the other DEMs. Our results suggest that erosion mainly occurs on the rill sidewalls, and deposition on the rill floors, with patches of erosion/deposition within the interrill areas. Both the area and volume of detectable change decrease as the grid size increases, while the area and volume of erosion are less sensitive compared to those of deposition. The total length and number of rills decrease with increased grid size, whereas the average length of rills increases. The mean offset between the delineated rill network and the reference increases with larger grid sizes. In contrast to the erosion and deposition detected within rills, minor changes are detected in the interrill areas, indicating that either no topographic changes occurred or the changes were too small to be detected in the interrill areas by our finest 1-cm DEMs. We recommend using the finest possible grid size that can be achieved for future
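
    The core operation, differencing two DEMs at successively coarser grid sizes and summing detectable erosion and deposition, can be sketched compactly; in the example below the DEMs are synthetic, the base resolution is taken as 1 cm, and the detection threshold is an assumed value.

```python
# Sketch of DEM differencing at coarser grid sizes (synthetic surfaces,
# assumed 1 cm base resolution and an assumed detection threshold).
import numpy as np

def coarsen(dem, factor):
    """Block-average a DEM to a coarser grid (grid size = factor * base size)."""
    h, w = (dem.shape[0] // factor) * factor, (dem.shape[1] // factor) * factor
    return dem[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(3)
dem_2014 = rng.normal(0.0, 0.02, (600, 600))                 # elevations in metres
dem_2015 = dem_2014 + rng.normal(-0.002, 0.01, (600, 600))   # net erosion plus noise

cell0 = 0.01                 # 1 cm base grid
threshold = 0.01             # assumed minimum detectable elevation change (m)

for factor in (1, 2, 5, 10, 30):
    diff = coarsen(dem_2015, factor) - coarsen(dem_2014, factor)
    cell_area = (cell0 * factor) ** 2
    erosion = -diff[diff < -threshold].sum() * cell_area     # m^3
    deposition = diff[diff > threshold].sum() * cell_area    # m^3
    print(f"{factor} cm grid: erosion {erosion:.4f} m^3, deposition {deposition:.4f} m^3")
```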

  10. Transcriptional regulatory network refinement and quantification through kinetic modeling, gene expression microarray data and information theory

    Science.gov (United States)

    Sayyed-Ahmad, Abdallah; Tuncay, Kagan; Ortoleva, Peter J

    2007-01-01

    Background Gene expression microarray and other multiplex data hold promise for addressing the challenges of cellular complexity, refined diagnoses and the discovery of well-targeted treatments. A new approach to the construction and quantification of transcriptional regulatory networks (TRNs) is presented that integrates gene expression microarray data and cell modeling through information theory. Given a partial TRN and time series data, a probability density is constructed that is a functional of the time course of transcription factor (TF) thermodynamic activities at the site of gene control, and is a function of mRNA degradation and transcription rate coefficients, and equilibrium constants for TF/gene binding. Results Our approach yields more physicochemical information that complements the results of network structure delineation methods, and thereby can serve as an element of a comprehensive TRN discovery/quantification system. The most probable TF time courses and values of the aforementioned parameters are obtained by maximizing the probability obtained through entropy maximization. Observed time delays between mRNA expression and activity are accounted for implicitly since the time course of the activity of a TF is coupled by probability functional maximization, and is not assumed to be proportional to expression level of the mRNA type that translates into the TF. This allows one to investigate post-translational and TF activation mechanisms of gene regulation. Accuracy and robustness of the method are evaluated. A kinetic formulation is used to facilitate the analysis of phenomena with a strongly dynamical character while a physically-motivated regularization of the TF time course is found to overcome difficulties due to omnipresent noise and data sparsity that plague other methods of gene expression data analysis. An application to Escherichia coli is presented. Conclusion Multiplex time series data can be used for the construction of the network of

  11. Transcriptional regulatory network refinement and quantification through kinetic modeling, gene expression microarray data and information theory

    Directory of Open Access Journals (Sweden)

    Tuncay Kagan

    2007-01-01

    Background Gene expression microarray and other multiplex data hold promise for addressing the challenges of cellular complexity, refined diagnoses and the discovery of well-targeted treatments. A new approach to the construction and quantification of transcriptional regulatory networks (TRNs) is presented that integrates gene expression microarray data and cell modeling through information theory. Given a partial TRN and time series data, a probability density is constructed that is a functional of the time course of transcription factor (TF) thermodynamic activities at the site of gene control, and is a function of mRNA degradation and transcription rate coefficients, and equilibrium constants for TF/gene binding. Results Our approach yields more physicochemical information that complements the results of network structure delineation methods, and thereby can serve as an element of a comprehensive TRN discovery/quantification system. The most probable TF time courses and values of the aforementioned parameters are obtained by maximizing the probability obtained through entropy maximization. Observed time delays between mRNA expression and activity are accounted for implicitly since the time course of the activity of a TF is coupled by probability functional maximization, and is not assumed to be proportional to expression level of the mRNA type that translates into the TF. This allows one to investigate post-translational and TF activation mechanisms of gene regulation. Accuracy and robustness of the method are evaluated. A kinetic formulation is used to facilitate the analysis of phenomena with a strongly dynamical character while a physically-motivated regularization of the TF time course is found to overcome difficulties due to omnipresent noise and data sparsity that plague other methods of gene expression data analysis. An application to Escherichia coli is presented. Conclusion Multiplex time series data can be used for the

  12. Modeling Irrigation Networks for the Quantification of Potential Energy Recovering: A Case Study

    Directory of Open Access Journals (Sweden)

    Modesto Pérez-Sánchez

    2016-06-01

    Water irrigation systems are required to provide adequate pressure levels in any sort of network. Quite frequently, this requirement is achieved by using pressure reducing valves (PRVs). Nevertheless, the possibility of using hydraulic machines to recover energy instead of PRVs could reduce the energy footprint of the whole system. In this research, a new methodology is proposed to help water managers quantify the potential energy recovery of an irrigation water network with a suitable distribution of topographies. EPANET has been used to create a model based on probabilities of irrigation and flow distribution in real networks. Knowledge of the flows and pressures in the network is necessary to perform an analysis of economic viability. Using the proposed methodology, a case study has been analyzed in a typical Mediterranean region and the potential available energy has been estimated. The study quantifies the theoretical energy recoverable if hydraulic machines were installed in the network. In particular, the maximum energy potentially recovered in the system has been estimated at up to 188.23 MWh/year, with a potential saving of non-renewable energy resources (coal and gas) and an associated reduction of 137.4 t/year of CO2.
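
    The central quantity behind such estimates is the hydraulic power otherwise dissipated at pressure reducing valves, P = ρ·g·Q·ΔH; a back-of-the-envelope sketch of how annual recoverable energy could be tallied from flow and excess-head records is given below, with illustrative numbers that do not come from the case-study network.

```python
# Back-of-the-envelope sketch: theoretical energy recoverable at PRV sites,
# P = rho * g * Q * dH, summed over operating hours. Numbers are illustrative
# and do not come from the Mediterranean case study.
RHO, G = 1000.0, 9.81          # kg/m^3, m/s^2

# Hypothetical PRV operating points: (flow m^3/s, excess head m, hours/year)
prv_sites = [
    (0.050, 35.0, 1200),
    (0.020, 50.0, 1800),
    (0.080, 20.0, 900),
]

energy_kwh = 0.0
for q, dh, hours in prv_sites:
    power_w = RHO * G * q * dh          # hydraulic power dissipated at the valve
    energy_kwh += power_w * hours / 1000.0
print(f"theoretical recoverable energy: {energy_kwh / 1000:.1f} MWh/year")
```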

  13. Performance requirements for integrated voice/data networks

    Science.gov (United States)

    Gruber, J. G.; Le, N. H.

    1983-12-01

    This paper addresses top-down end-to-end user-oriented performance requirements pertaining primarily to voice and digital data services. The discussion of requirements for voice parameters accounts for the performance of existing analog and mixed analog/digital networks, as well as the likely effects on performance of short, medium, and long term evolution toward the ultimate all digital ISDN. The requirements for digital data parameters necessarily reflect an evolutionary process which is less consistent than for voice, and therefore these requirements are less definitive in nature. The discussions of voice and digital data performance apply largely to a wide variety of appropriate network designs, transmission schemes, and switching architectures. Both traditional parameters, as well as contemporary parameters associated with new and evolving systems, are considered. The emphasis is on the performance of nation-wide public and private networks, but the paper also considers the constraints of international connections.

  14. Quantification of whey in fluid milk using confocal Raman microscopy and artificial neural network.

    Science.gov (United States)

    Alves da Rocha, Roney; Paiva, Igor Moura; Anjos, Virgílio; Furtado, Marco Antônio Moreira; Bell, Maria José Valenzuela

    2015-06-01

    In this work, we assessed the use of confocal Raman microscopy and an artificial neural network as a practical method to assess and quantify adulteration of fluid milk by the addition of whey. Milk samples with added whey (from 0 to 100%) were prepared, simulating different levels of fraudulent adulteration. All analyses were carried out by direct inspection at the light microscope after depositing drops from each sample on a microscope slide and drying them at room temperature. No pre- or post-treatment (e.g., sample preparation or spectral correction) was required in the analyses. Quantitative determination of adulteration was performed with a feed-forward artificial neural network (ANN). Different ANN configurations were evaluated based on their coefficient of determination (R2) and root mean square error values, which were the criteria for selecting the best predictor model. In the selected model, we observed that data from both the training and validation subsets presented R2 > 99.99%, indicating that the combination of confocal Raman microscopy and ANN is a rapid, simple, and efficient method to quantify milk adulteration by whey. Because sample preparation and postprocessing of spectra are not required, the method has potential applications in health surveillance and food quality monitoring. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Requirements and Algorithms for Cooperation of Heterogeneous Radio Access Networks

    DEFF Research Database (Denmark)

    Mihovska, Albena D.; Tragos, Elias; Mino, Emilio

    2009-01-01

    This paper defines the requirements for cooperation of heterogeneous radio access networks (RANs) and proposes a novel radio resource management (RRM) framework for support of mobility and quality of service (QoS) in a heterogeneous communication environment comprising IMT-Advanced and legacy...

  16. Quantification of fructo-oligosaccharides based on the evaluation of oligomer ratios using an artificial neural network

    Energy Technology Data Exchange (ETDEWEB)

    Onofrejova, Lucia; Farkova, Marta [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic); Preisler, Jan, E-mail: preisler@chemi.muni.cz [Department of Chemistry, Faculty of Science, Masaryk University, Kotlarska 2, 611 37 Brno (Czech Republic)

    2009-04-13

    The application of an internal standard in quantitative analysis is desirable in order to correct for variations in sample preparation and instrumental response. In mass spectrometry of organic compounds, the internal standard is preferably labelled with a stable isotope, such as 18O, 15N or 13C. In this study, a method for the quantification of fructo-oligosaccharides using matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI TOF MS) was proposed and tested on raftilose, a partially hydrolysed inulin with a degree of polymerisation of 2-7. The tetraoligosaccharide nystose, which is chemically identical to the raftilose tetramer, was used as an internal standard rather than an isotope-labelled analyte. Two mathematical approaches used for data processing, conventional calculations and artificial neural networks (ANN), were compared. The conventional data processing relies on the assumption that a constant oligomer dispersion profile will change after the addition of the internal standard, and on some simple numerical calculations. On the other hand, ANN was found to compensate for a non-linear MALDI response and variations in the oligomer dispersion profile with raftilose concentration. As a result, the application of ANN led to lower quantification errors and excellent day-to-day repeatability compared to the conventional data analysis. The developed method is feasible for MS quantification of raftilose in the range of 10-750 pg with errors below 7%. The content of raftilose was determined in dietary cream; the application can be extended to other similar polymers. It should be stressed that no special optimisation of the MALDI process was carried out. A common MALDI matrix and sample preparation were used and only the basic parameters, such as sampling and laser energy, were optimised prior to quantification.
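    For the conventional calculation mentioned above, quantification reduces to reading an analyte-to-internal-standard peak-area ratio off a calibration line. The Python sketch below shows that arithmetic with invented peak areas and calibration points; it does not reproduce the ANN variant or the actual MALDI data.

        # Sketch of conventional internal-standard quantification: the analyte
        # signal is normalised to the nystose internal standard and converted to
        # an amount via a linear calibration. All numbers are hypothetical.
        import numpy as np

        cal_amount_pg = np.array([10, 50, 150, 400, 750])     # known raftilose amounts
        cal_ratio = np.array([0.04, 0.21, 0.63, 1.70, 3.15])  # analyte/IS area ratios
        slope, intercept = np.polyfit(cal_ratio, cal_amount_pg, 1)

        def quantify(analyte_area, internal_std_area):
            ratio = analyte_area / internal_std_area
            return slope * ratio + intercept                  # estimated amount, pg

        print(f"Estimated raftilose: {quantify(5.2e4, 4.1e4):.0f} pg")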

  17. The technical CCDs in ESPRESSO: usage, performances, and network requirements

    Science.gov (United States)

    Calderone, G.; Baldini, V.; Cirami, R.; Coretti, I.; Cristiani, S.; Di Marcantonio, P.; Landoni, M.; Mégevand, D.; Riva, M.; Santin, P.

    2016-07-01

    The Echelle Spectrograph for Rocky Exoplanets and Stable Spectral Observations (ESPRESSO) requires active-loop stabilization of the light path from the telescope to the spectrograph, in order to achieve its centimeter-per-second precision goal. This task is accomplished by moving the mirrors placed along the light path by means of piezoelectric actuators. Two cameras are used to acquire the field and pupil images, and the required corrections are dynamically calculated and applied to the piezos. In this paper we will discuss the camera usage, performance and network bandwidth requirements for the ESPRESSO scientific operations.

  18. Quantification of biophysical adaptation benefits from Climate-Smart Agriculture using a Bayesian Belief Network.

    Science.gov (United States)

    de Nijs, Patrick J; Berry, Nicholas J; Wells, Geoff J; Reay, Dave S

    2014-10-20

    The need for smallholder farmers to adapt their practices to a changing climate is well recognised, particularly in Africa. The cost of adapting to climate change in Africa is estimated to be $20 to $30 billion per year, but the total amount pledged to finance adaptation falls significantly short of this requirement. The difficulty of assessing and monitoring when adaptation is achieved is one of the key barriers to the disbursement of performance-based adaptation finance. To demonstrate the potential of Bayesian Belief Networks for describing the impacts of specific activities on climate change resilience, we developed a simple model that incorporates climate projections, local environmental data, information from peer-reviewed literature and expert opinion to account for the adaptation benefits derived from Climate-Smart Agriculture activities in Malawi. This novel approach allows assessment of vulnerability to climate change under different land use activities and can be used to identify appropriate adaptation strategies and to quantify biophysical adaptation benefits from activities that are implemented. We suggest that multiple-indicator Bayesian Belief Network approaches can provide insights into adaptation planning for a wide range of applications and, if further explored, could be part of a set of important catalysts for the expansion of adaptation finance.
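    The mechanics of such a Bayesian Belief Network can be illustrated with a toy example: a climate node, a land-use choice and a resilience indicator, with the indicator marginalised over the climate projection. The structure and all probabilities below are invented for illustration and are not the model, variables or data used in the Malawi study.

        # Hand-rolled toy Bayesian Belief Network: P(stable yield | practice),
        # marginalising over a hypothetical rainfall projection.
        p_rain = {"low": 0.4, "adequate": 0.6}               # invented climate projection

        # P(stable yield = yes | rainfall, practice), invented "expert" estimates
        p_stable = {
            ("low", "conventional"): 0.25,
            ("low", "climate_smart"): 0.55,
            ("adequate", "conventional"): 0.70,
            ("adequate", "climate_smart"): 0.85,
        }

        def prob_stable(practice):
            """Marginalise over rainfall to get P(stable yield | practice)."""
            return sum(p_rain[r] * p_stable[(r, practice)] for r in p_rain)

        for practice in ("conventional", "climate_smart"):
            print(practice, round(prob_stable(practice), 3))

        # The difference can be read as a (toy) quantified adaptation benefit.
        print("benefit:", round(prob_stable("climate_smart") - prob_stable("conventional"), 3))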

  19. FES Science Network Requirements - Report of the Fusion Energy Sciences Network Requirements Workshop Conducted March 13 and 14, 2008

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian; Dart, Eli; Tierney, Brian

    2008-07-10

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In March 2008, ESnet and the Fusion Energy Sciences (FES) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the FES Program Office. Most sites that conduct data-intensive activities (the Tokamaks at GA and MIT, the supercomputer centers at NERSC and ORNL) show a need for on the order of 10 Gbps of network bandwidth for FES-related work within 5 years. PPPL reported a need for 8 times that (80 Gbps) in that time frame. Estimates for the 5-10 year time period are up to 160 Mbps for large simulations. Bandwidth requirements for ITER range from 10 to 80 Gbps. In terms of science process and collaboration structure, it is clear that the proposed Fusion Simulation Project (FSP) has the potential to significantly impact the data movement patterns and therefore the network requirements for U.S. fusion science. As the FSP is defined over the next two years, these changes will become clearer. Also, there is a clear and present unmet need for better network connectivity between U.S. FES sites and two Asian fusion experiments--the EAST Tokamak in China and the KSTAR Tokamak in South Korea. In addition to achieving its goal of collecting and characterizing the network requirements of the science endeavors funded by the FES Program Office, the workshop emphasized that there is a need for research into better ways of conducting remote

  20. IX : An OS for datacenter applications with aggressive networking requirements

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The conventional wisdom is that aggressive networking requirements, such as high packet rates for small messages and microsecond-scale tail latency, are best addressed outside the kernel, in a user-level networking stack. We present IX, a dataplane operating system designed to support low-latency, high-throughput and high-connection-count applications. Like classic operating systems such as Linux, IX provides strong protection guarantees to the networking stack. However, and unlike classic operating systems, IX is designed from the ground up to support applications with aggressive networking requirements on dense multi-core platforms with 10GbE and 40GbE Ethernet NICs. IX outperforms Linux by an order of magnitude on microbenchmarks, and by up to 3.6x when running an unmodified memcached, a popular key-value store. The presentation is based on the joint work with Adam Belay, George Prekas, Ana Klimovic, Sam Grossman and Christos Kozyrakis, published at OSDI 2014; Best P...

  1. High Energy Physics and Nuclear Physics Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Dart, Eli; Bauerdick, Lothar; Bell, Greg; Ciuffo, Leandro; Dasu, Sridhara; Dattoria, Vince; De, Kaushik; Ernst, Michael; Finkelson, Dale; Gottleib, Steven; Gutsche, Oliver; Habib, Salman; Hoeche, Stefan; Hughes-Jones, Richard; Ibarra, Julio; Johnston, William; Kisner, Theodore; Kowalski, Andy; Lauret, Jerome; Luitz, Steffen; Mackenzie, Paul; Maguire, Chales; Metzger, Joe; Monga, Inder; Ng, Cho-Kuen; Nielsen, Jason; Price, Larry; Porter, Jeff; Purschke, Martin; Rai, Gulshan; Roser, Rob; Schram, Malachi; Tull, Craig; Watson, Chip; Zurawski, Jason

    2014-03-02

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements needed by instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 25 years. In August 2013, ESnet and the DOE SC Offices of High Energy Physics (HEP) and Nuclear Physics (NP) organized a review to characterize the networking requirements of the programs funded by the HEP and NP program offices. Several key findings resulted from the review. Among them: 1. The Large Hadron Collider's ATLAS (A Toroidal LHC Apparatus) and CMS (Compact Muon Solenoid) experiments are adopting remote input/output (I/O) as a core component of their data analysis infrastructure. This will significantly increase their demands on the network from both a reliability perspective and a performance perspective. 2. The Large Hadron Collider (LHC) experiments (particularly ATLAS and CMS) are working to integrate network awareness into the workflow systems that manage the large number of daily analysis jobs (1 million analysis jobs per day for ATLAS), which are an integral part of the experiments. Collaboration with networking organizations such as ESnet, and the consumption of performance data (e.g., from perfSONAR [PERformance Service Oriented Network monitoring Architecture]) are critical to the success of these efforts. 3. The international aspects of HEP and NP collaborations continue to expand. This includes the LHC experiments, the Relativistic Heavy Ion Collider (RHIC) experiments, the Belle II Collaboration, the Large Synoptic Survey Telescope (LSST), and others. The international nature of these collaborations makes them heavily

  2. Assessing Requirements Volatility and Risk Using Bayesian Networks

    Science.gov (United States)

    Russell, Michael S.

    2010-01-01

    There are many factors that affect the level of requirements volatility a system experiences over its lifecycle and the risk that volatility imparts. Improper requirements generation, undocumented user expectations, conflicting design decisions, and anticipated/unanticipated world states are representative of these volatility factors. Combined, these volatility factors can increase programmatic risk and adversely affect successful system development. This paper proposes that a Bayesian Network can be used to support reasonable judgments concerning the most likely sources and types of requirements volatility a developing system will experience prior to starting development, and that by doing so it is possible to predict the level of requirements volatility the system will experience over its lifecycle. This assessment offers valuable insight to the system's developers, particularly by providing a starting point for risk mitigation planning and execution.

  3. Real-time PCR assays for hepatitis B virus DNA quantification may require two different targets.

    Science.gov (United States)

    Liu, Chao; Chang, Le; Jia, Tingting; Guo, Fei; Zhang, Lu; Ji, Huimin; Zhao, Junpeng; Wang, Lunan

    2017-05-12

    Quantification of hepatitis B virus (HBV) DNA plays a critical role in the management of chronic HBV infections. However, HBV is a DNA virus with high levels of genetic variation, and drug-resistant mutations have emerged with the use of antiviral drugs. If a mutation caused a sequence mismatch in the primer or probe of a commercial DNA quantification kit, this would lead to an underestimation of the viral load of the sample. The aim of this study was to determine whether commercial kits, which use only one pair of primers and a single probe, accurately quantify HBV DNA levels, and to develop an improved duplex real-time PCR assay. We developed a new duplex real-time PCR assay that used two pairs of primers and two probes based on the conserved S and C regions of the HBV genome. We performed quantitative detection of HBV DNA in clinical samples and compared the results of our duplex real-time PCR assays with the COBAS TaqMan HBV Test version 2 and Daan real-time PCR assays. The target region of the discordant sample was amplified, sequenced, and validated using a plasmid. The results of the duplex real-time PCR were in good accordance with the commercial COBAS TaqMan HBV Test version 2 and Daan real-time PCR assays. We showed that two samples from Chinese HBV infections gave underestimated viral loads when quantified by the Roche kit because of a mismatch between the viral sequence and the reverse primer of the Roche kit. The HBV DNA levels of six samples were undervalued by the duplex real-time PCR assay of the C region because of mutations in the C-region primer. We developed a new duplex real-time PCR assay, and the results of this assay were similar to the results of commercial kits. The HBV DNA level could be undervalued when using the COBAS TaqMan HBV Test version 2 for Chinese HBV infections owing to a mismatch with the primer/probe. A duplex real-time PCR assay based on the S and C regions could solve this problem to some extent.
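    The quantification step behind such assays is a standard-curve calculation from the threshold cycle (Ct) of each target. The Python sketch below illustrates, with hypothetical calibration parameters and Ct values, why carrying two targets helps: a primer or probe mismatch delays one target's Ct and deflates its estimate, so the higher of the two estimates is the safer report. This is an illustration of the reasoning, not the assay's actual calibration.

        # Duplex-target viral load sketch: one standard curve per target (S and C
        # regions); report the estimate least affected by a possible mismatch.
        # Slopes, intercepts and Ct values are hypothetical.
        def viral_load(ct, slope, intercept):
            """log10(concentration) = (ct - intercept) / slope."""
            return 10 ** ((ct - intercept) / slope)

        S_CURVE = dict(slope=-3.3, intercept=40.0)
        C_CURVE = dict(slope=-3.4, intercept=41.0)

        def quantify_sample(ct_s, ct_c):
            load_s = viral_load(ct_s, **S_CURVE)
            load_c = viral_load(ct_c, **C_CURVE)
            return max(load_s, load_c)   # a mismatched target underestimates

        print(f"{quantify_sample(ct_s=27.1, ct_c=30.8):.2e} IU/mL")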

  4. BER Science Network Requirements Workshop -- July 26-27,2007

    Energy Technology Data Exchange (ETDEWEB)

    Tierney, Brian L.; Dart, Eli

    2008-02-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States of America. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In July 2007, ESnet and the Biological and Environmental Research (BER) Program Office of the DOE Office of Science organized a workshop to characterize the networking requirements of the science programs funded by the BER Program Office. These included several large programs and facilities: the Atmospheric Radiation Measurement (ARM) Program and the ARM Climate Research Facility (ACRF), the Bioinformatics and Life Sciences Programs, the Climate Sciences Programs, the Environmental Molecular Sciences Laboratory at PNNL, and the Joint Genome Institute (JGI). The National Center for Atmospheric Research (NCAR) also participated in the workshop and contributed a section to this report because a large distributed data repository for climate data will be established at NERSC, ORNL and NCAR, and this will have an effect on ESnet. Workshop participants were asked to codify their requirements in a 'case study' format, which summarizes the instruments and facilities necessary for the science and the process by which the science is done, with emphasis on the network services needed and the way in which the network is used. Participants were asked to consider three time scales in their case studies--the near term (immediately and up to 12 months in the future), the medium term (3-5 years in the future), and the long term (greater than 5 years in the future). In addition to achieving its goal of collecting and

  5. GSM Network Traffic Analysis | Ani | Nigerian Journal of Technology

    African Journals Online (AJOL)

    GSM networks are traffic intensive, especially in terms of signaling traffic. Developing an effective and efficient performance management strategy requires accurate quantification of network signaling traffic volume alongside the user traffic volume. Inaccurate quantification may lead to serious network traffic congestion and ...

  6. Photoactivatable green fluorescent protein-based visualization and quantification of mitochondrial fusion and mitochondrial network complexity in living cells.

    Science.gov (United States)

    Karbowski, Mariusz; Cleland, Megan M; Roelofs, Brian A

    2014-01-01

    Technological improvements in microscopy and the development of mitochondria-specific imaging molecular tools have illuminated the dynamic rearrangements of these essential organelles. These rearrangements are mainly the result of two opposing processes: mitochondrial fusion and mitochondrial fission. Consistent with this, in addition to mitochondrial motility, these two processes are major factors determining the overall degree of continuity of the mitochondrial network, as well as the average size of mitochondria within the cell. In this chapter, we detail the use of advanced confocal microscopy and mitochondrial matrix-targeted photoactivatable green fluorescent protein (mito-PAGFP) for the investigation of mitochondrial dynamics. We focus on direct visualization and quantification of mitochondrial fusion and mitochondrial network complexity in living mammalian cells. These assays were instrumental in important recent discoveries within the field of mitochondrial biology, including the role of mitochondrial fusion in the activation of mitochondrial steps in apoptosis, participation of Bcl-2 family proteins in mitochondrial morphogenesis, and stress-induced mitochondrial hyperfusion. We present some basic directions that should be helpful in designing mito-PAGFP-based experiments. Furthermore, since analyses of mitochondrial fusion using mito-PAGFP-based assays rely on time-lapse imaging, critical parameters of time-lapse microscopy and cell preparation are also discussed.

  7. Validation and quantification of uncertainty in coupled climate models using network analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bracco, Annalisa [Georgia Inst. of Technology, Atlanta, GA (United States)

    2015-08-10

    We developed a fast, robust and scalable methodology to examine, quantify, and visualize climate patterns and their relationships. It is based on a set of notions, algorithms and metrics used in the study of graphs, referred to as complex network analysis. This approach can be applied to explain known climate phenomena in terms of an underlying network structure and to uncover regional and global linkages in the climate system, while comparing general circulation model outputs with observations. The proposed method is based on a two-layer network representation, and is substantially new within the available network methodologies developed for climate studies. At the first layer, gridded climate data are used to identify ‘‘areas’’, i.e., geographical regions that are highly homogeneous in terms of the given climate variable. At the second layer, the identified areas are interconnected with links of varying strength, forming a global climate network. The robustness of the method (i.e., the ability to distinguish topologically distinct fields while correctly identifying similarities) has been extensively tested. It has been shown to provide a reliable, fast framework for comparing and ranking the ability of climate models to reproduce observed climate patterns and their connectivity. We further developed the methodology to account for lags in the connectivity between climate patterns and refined our area identification algorithm to account for autocorrelation in the data. The new methodology based on complex network analysis has been applied to state-of-the-art climate model simulations that participated in the last IPCC (Intergovernmental Panel on Climate Change) assessment to verify their performance, quantify uncertainties, and uncover changes in global linkages between past and future projections. Network properties of modeled sea surface temperature and rainfall over 1956–2005 have been constrained towards observations or reanalysis data sets
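    The general idea of turning gridded climate fields into a graph can be sketched in a few lines: nodes for regions, weighted links for strong statistical associations, then standard graph metrics. The Python example below uses random series and a simple correlation threshold with networkx; it does not reproduce the two-layer area-identification step or the lag handling of the actual methodology.

        # Toy climate network: nodes are grid cells/areas, links are correlations
        # above a threshold. Data are random stand-ins, not model output.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        n_nodes, n_months = 30, 600
        series = rng.normal(size=(n_nodes, n_months))   # stand-in for SST anomalies

        corr = np.corrcoef(series)
        G = nx.Graph()
        G.add_nodes_from(range(n_nodes))
        threshold = 0.1                                  # keep only stronger links
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                if abs(corr[i, j]) >= threshold:
                    G.add_edge(i, j, weight=abs(corr[i, j]))

        print(G.number_of_edges(), "links above threshold")
        print("mean degree:", 2 * G.number_of_edges() / n_nodes)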

  8. Energy efficiency in future wireless networks: cognitive radio standardization requirements

    CSIR Research Space (South Africa)

    Masonta, M

    2012-09-01

    Full Text Available Energy consumption of mobile and wireless networks and devices is significant, indirectly increasing greenhouse gas emissions and energy costs for operators. Cognitive radio (CR) solutions can save energy for such networks and devices; moreover...

  9. Simultaneous UV-Vis spectrophotometric quantification of ternary basic dye mixtures by partial least squares and artificial neural networks.

    Science.gov (United States)

    Hassaninejad-Darzi, Seyed Karim; Torkamanzadeh, Mohammad

    2016-11-01

    One of the main difficulties in quantification of dyes in industrial wastewaters is the fact that dyes are usually in complex mixtures rather than being pure. Here we report the development of two rapid and powerful methods, partial least squares (PLS-1) and artificial neural network (ANN), for spectral resolution of a highly overlapping ternary dye system in the presence of interferences. To this end, Crystal Violet (CV), Malachite Green (MG) and Methylene Blue (MB) were selected as three model dyes whose UV-Vis absorption spectra highly overlap each other. After calibration, both prediction models were validated through testing with an independent spectra-concentration dataset, in which high correlation coefficients (R2) of 0.998, 0.999 and 0.999 were obtained by PLS-1 and 0.997, 0.999 and 0.999 were obtained by ANN for CV, MG and MB, respectively. Having shown a relative error of prediction of less than 3% for all the dyes tested, both PLS-1 and ANN models were found to be highly accurate in simultaneous determination of dyes in pure aqueous samples. Using the net-analyte signal concept, the quantitative determination of dyes spiked in seawater samples was carried out successfully by PLS-1 with satisfactory recoveries (90-101%).
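    As a sketch of the PLS-1 half of the approach, the Python example below builds synthetic three-component absorption spectra (Gaussian bands standing in for CV, MG and MB), fits a PLS regression with scikit-learn and scores its predictions. Wavelengths, band positions and concentration ranges are invented; the ANN variant and the real calibration design are not reproduced.

        # Illustrative PLS resolution of a synthetic ternary dye mixture.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        wavelengths = np.linspace(400, 750, 200)

        def band(center, width):
            return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

        pure = np.vstack([band(590, 40), band(617, 35), band(664, 30)])  # CV, MG, MB-like
        C_train = rng.uniform(0, 10, size=(40, 3))                       # mg/L, hypothetical
        A_train = C_train @ pure + rng.normal(0, 0.005, (40, wavelengths.size))

        pls = PLSRegression(n_components=3)
        pls.fit(A_train, C_train)

        C_test = rng.uniform(0, 10, size=(10, 3))
        A_test = C_test @ pure + rng.normal(0, 0.005, (10, wavelengths.size))
        print("R^2 of prediction:", pls.score(A_test, C_test))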

  10. Quantification of Graph Complexity Based on the Edge Weight Distribution Balance: Application to Brain Networks.

    Science.gov (United States)

    Gomez-Pilar, Javier; Poza, Jesús; Bachiller, Alejandro; Gómez, Carlos; Núñez, Pablo; Lubeiro, Alba; Molina, Vicente; Hornero, Roberto

    2017-05-23

    The aim of this study was to introduce a novel global measure of graph complexity: Shannon graph complexity (SGC). This measure was specifically developed for weighted graphs, but it can also be applied to binary graphs. The proposed complexity measure was designed to capture the interplay between two properties of a system: the 'information' (calculated by means of Shannon entropy) and the 'order' of the system (estimated by means of a disequilibrium measure). SGC is based on the concept that complex graphs should maintain an equilibrium between the aforementioned two properties, which can be measured by means of the edge weight distribution. In this study, SGC was assessed using four synthetic graph datasets and a real dataset, formed by electroencephalographic (EEG) recordings from controls and schizophrenia patients. SGC was compared with graph density (GD), a classical measure used to evaluate graph complexity. Our results showed that SGC is invariant with respect to GD and independent of node degree distribution. Furthermore, its variation with graph size [Formula: see text] is close to zero for [Formula: see text]. Results from the real dataset showed an increment in the weight distribution balance during the cognitive processing for both controls and schizophrenia patients, although these changes are more relevant for controls. Our findings revealed that SGC does not need a comparison with null-hypothesis networks constructed by a surrogate process. In addition, SGC results on the real dataset suggest that schizophrenia is associated with a deficit in the brain dynamic reorganization related to secondary pathways of the brain network.
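    The core idea (entropy of the edge-weight distribution multiplied by a disequilibrium term) can be written down in a few lines. The Python sketch below is a toy rendition of that idea only; the exact definition and normalisation used in the paper may differ.

        # Toy "entropy x disequilibrium" complexity of an edge-weight distribution.
        import numpy as np

        def shannon_graph_complexity(weights):
            p = np.asarray(weights, dtype=float)
            p = p / p.sum()                                   # normalised edge weights
            m = p.size
            entropy = -np.sum(p * np.log(p)) / np.log(m)      # normalised to [0, 1]
            disequilibrium = np.sum((p - 1.0 / m) ** 2)       # distance from uniformity
            return entropy * disequilibrium

        uniform_graph = np.ones(50)                       # perfectly balanced weights
        skewed_graph = np.r_[np.ones(49) * 0.01, 10.0]    # one dominant connection
        print(shannon_graph_complexity(uniform_graph))    # 0.0: no disequilibrium
        print(shannon_graph_complexity(skewed_graph))     # small: low entropy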

  11. Verifying cell loss requirements in high-speed communication networks

    Directory of Open Access Journals (Sweden)

    Kerry W. Fendick

    1998-01-01

    Full Text Available In high-speed communication networks it is common to have requirements of very small cell loss probabilities due to buffer overflow. Losses are measured to verify that the cell loss requirements are being met, but it is not clear how to interpret such measurements. We propose methods for determining whether or not cell loss requirements are being met. A key idea is to look at the stream of losses as successive clusters of losses. Often clusters of losses, rather than individual losses, should be regarded as the important “loss events”. Thus we propose modeling the cell loss process by a batch Poisson stochastic process. Successive clusters of losses are assumed to arrive according to a Poisson process. Within each cluster, cell losses do not occur at a single time, but the distance between losses within a cluster should be negligible compared to the distance between clusters. Thus, for the purpose of estimating the cell loss probability, we ignore the spaces between successive cell losses in a cluster of losses. Asymptotic theory suggests that the counting process of losses initiating clusters often should be approximately a Poisson process even though the cell arrival process is not nearly Poisson. The batch Poisson model is relatively easy to test statistically and fit; e.g., the batch-size distribution and the batch arrival rate can readily be estimated from cell loss data. Since batch (cluster) sizes may be highly variable, it may be useful to focus on the number of batches instead of the number of cells in a measurement interval. We also propose a method for approximately determining the parameters of a special batch Poisson cell loss process with a geometric batch-size distribution from a queueing model of the buffer content. For this step, we use a reflected Brownian motion (RBM) approximation of a G/D/1/C queueing model. We also use the RBM model to estimate the input burstiness given the cell loss rate. In addition, we use the RBM model to
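    A minimal simulation of the proposed loss model makes the structure concrete: cluster arrivals follow a Poisson process and each cluster contains a geometrically distributed number of lost cells. The Python sketch below uses arbitrary parameters and does not include the RBM-based parameter fitting described in the abstract.

        # Batch Poisson loss process: Poisson cluster arrivals, geometric batch sizes.
        import numpy as np

        rng = np.random.default_rng(3)

        def simulate_losses(horizon_s, cluster_rate_per_s, mean_batch_size):
            """Total lost cells and number of loss clusters over the horizon."""
            n_clusters = rng.poisson(cluster_rate_per_s * horizon_s)
            p = 1.0 / mean_batch_size                  # geometric(p) has mean 1/p
            batch_sizes = rng.geometric(p, size=n_clusters)
            return int(batch_sizes.sum()), n_clusters

        cells_offered = 4e9                            # hypothetical cells in the interval
        lost, clusters = simulate_losses(3600, cluster_rate_per_s=0.002, mean_batch_size=20)
        print(f"{clusters} clusters, {lost} cells lost, "
              f"loss probability about {lost / cells_offered:.2e}")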

  12. A continous Bayesian network for earth dams' risk assessment: methodology and quantification

    NARCIS (Netherlands)

    Morales-Napoles, O.; Delgado-Hernadez-D.J.; De-Leon-Escobedo, D.; Arteaga-Arcos, J.C.

    2013-01-01

    Dams’ safety is highly important for authorities around the world. The impacts of a dam failure can be enormous. Models for investigating dam safety are required for helping decision-makers to mitigate the possible adverse consequences of flooding. A model for earth dam safety must specify clearly

  13. User requirements and future expectations for geosensor networks – an assessment

    NARCIS (Netherlands)

    Kooistra, L.; Thessler, S.; Bregt, A.K.

    2009-01-01

    Considerable progress has been made on the technical development of sensor networks. However, increasing attention is now also required for the broad diversity of end-user requirements for the deployment of sensor networks. An expert survey on the user requirements and future expectations for sensor

  14. New generation cellular networks requirements investigation and their deployment in Ukraine possibilities

    OpenAIRE

    Одарченко, Р. С.; Національний авіаційний університет; Абакумова, А. О.; Національний авіаційний університет; Дика, Н. В.; Національний авіаційний університет

    2016-01-01

    This paper analyzes the main requirements for next-generation mobile networks; in particular, a fundamentally new concept for building cellular networks - 5G - and the principles of their design were considered. A detailed analysis of this generation of cellular networks was conducted. The paper also analyzes the development of cellular networks and the prospects for the introduction of new technologies in Ukraine. Forward-looking assumptions are made about the stages of standardization and deployment of cel...

  15. Meeting explosive growth requirements in the metro optical network

    Science.gov (United States)

    Gibbemeyer, Alan; Finkenzeller, Michael

    2009-01-01

    The metro optical network growth continues so far unabated by the slowing economy. The main drivers for this are enterprise connectivity, triple play and bandwidth-hungry Internet applications. Every day more of the population is connected, with a projection of five (5) billion people connected by 2010 and an overall traffic increase of one hundredfold by 2015. While key applications drive these deployments, it is the decrease in network cost that is the bandwidth enabler. Stagnant average revenue per user (ARPU) makes further reduction in the total cost of ownership key. As costs decrease due to volume and technology maturity, prices drop and a stronger demand for bandwidth is generated in the market. Today 10G Ethernet LAN PHY services drive this growth, and the cost for 10G hardware continues to improve, further enabling profitable growth. While 10G is the key transport technology today, there is a push to bring higher line rates into metro deployments. 40G is currently undergoing mass adoption in long-haul core networks. The volumes in long-haul network deployments are driving down the costs, making it a viable evolution path for the metro networks over time.

  16. Requirements for data integration platforms in biomedical research networks: a reference model.

    Science.gov (United States)

    Ganzinger, Matthias; Knaup, Petra

    2015-01-01

    Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.

  17. Establishing a reliable multiple reaction monitoring-based method for the quantification of obesity-associated comorbidities in serum and adipose tissue requires intensive clinical validation.

    Science.gov (United States)

    Oberbach, Andreas; Schlichting, Nadine; Neuhaus, Jochen; Kullnick, Yvonne; Lehmann, Stefanie; Heinrich, Marco; Dietrich, Arne; Mohr, Friedrich Wilhelm; von Bergen, Martin; Baumann, Sven

    2014-12-05

    Multiple reaction monitoring (MRM)-based mass spectrometric quantification of peptides and their corresponding proteins has been successfully applied for biomarker validation in serum. The option of multiplexing offers the chance to analyze various proteins in parallel, which is especially important in obesity research. Here, biomarkers that reflect multiple comorbidities and allow monitoring of therapy outcomes are required. Besides the suitability of established MRM assays for serum protein quantification, the approach is also feasible for analysis of the tissues secreting the markers of interest. Surprisingly, studies comparing MRM data sets with established methods are rare, and therefore the biological and clinical value of most analytes remains questionable. An MRM method using nano-UPLC-MS/MS for the quantification of obesity-related surrogate markers for several comorbidities in serum, plasma, and visceral and subcutaneous adipose tissue was established. Proteotypic peptides for complement C3, adiponectin, angiotensinogen, and plasma retinol binding protein (RBP4) were quantified using isotopic dilution analysis and compared to the standard ELISA method. MRM method variabilities were mainly below 10%. The comparison with other MS-based approaches showed a good correlation. However, large differences in absolute quantification for complement C3 and adiponectin were obtained compared to ELISA, while less marked differences were observed for angiotensinogen and RBP4. The verification of MRM in obesity was performed first to discriminate the lean and obese phenotypes and second to monitor excessive weight loss after gastric bypass surgery in a seven-month follow-up. The presented MRM assay was able to discriminate the obese phenotype from the lean and to monitor weight-loss-related changes of surrogate markers. However, inclusion of additional biomarkers was necessary to interpret the MRM data on obesity phenotype properly. In summary, the development of disease-related MRMs should include a
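    The isotopic dilution arithmetic behind such assays is simple: the analyte amount follows from the light-to-heavy peak-area ratio and the known amount of labelled standard spiked into the sample. The Python sketch below illustrates this with invented peak areas, spike level and sample volume; they are not values from the study.

        # Isotope-dilution quantification sketch for an MRM transition pair.
        def protein_conc_ng_per_ml(light_area, heavy_area, heavy_spike_ng, sample_ml):
            """Light/heavy area ratio times the known spike gives the analyte amount."""
            analyte_ng = (light_area / heavy_area) * heavy_spike_ng
            return analyte_ng / sample_ml

        # Hypothetical peptide measurement in 0.05 mL of serum
        conc = protein_conc_ng_per_ml(light_area=8.4e5, heavy_area=2.1e5,
                                      heavy_spike_ng=5.0, sample_ml=0.05)
        print(f"{conc:.1f} ng/mL")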

  18. A Wireless Sensor Network for Hospital Security: From User Requirements to Pilot Deployment

    Directory of Open Access Journals (Sweden)

    Kaseva Ville

    2011-01-01

    Full Text Available An increasing number of Wireless Sensor Network (WSN) applications require low network delays. However, current research on WSNs has mainly concentrated on optimizing energy efficiency, paying little attention to low network delays. This paper presents a novel WSN design targeted at applications requiring low data transfer delays and high reliability. We present the whole design flow from user requirements to an actual pilot deployment in a real hospital unit. The WSN includes multihop low-delay data transfer and energy-efficient mobile nodes reaching lifetimes of years with small batteries. The nodes communicate using a low-cost low-power 2.4 GHz radio. The network is used in a security application with which personnel can send alarms in threatening situations. Also, a multitude of sensor measurements and actuator control is possible with the WSN. A full-scale pilot deployment is extensively evaluated to obtain performance results. Currently, the pilot network is in use at the hospital.

  19. Towards requirements elicitation in service-oriented business networks using value and goal modelling

    NARCIS (Netherlands)

    Mantovaneli Pessoa, Rodrigo; van Sinderen, Marten J.; Quartel, Dick; Shishkov, Boris; Cordeiro, J.; Ranchordas, A.

    2009-01-01

    Due to the contemporary trends towards increased focus on core competences and outsourcing of non-core activities, enterprises are forming strategic alliances and building business networks. This often requires cross-enterprise interoperability and integration of their information systems, leading

  20. Water Volume Required to Carve the Martian Valley Networks: Updated Sediment Volume

    Science.gov (United States)

    Rosenberg, E. N.; Head, J. W.; Cassanelli, J.; Palumbo, A.; Weiss, D.

    2017-10-01

    In order to gain insights into the climate of early Mars, we estimate the volume of water that was required to erode the valley networks (VNs). We update previous results with a new VN cavity volume measurement.

  1. Quantification of Endogenous Retinoids

    Science.gov (United States)

    Kane, Maureen A.; Napoli, Joseph L.

    2014-01-01

    Numerous physiological processes require retinoids, including development, nervous system function, immune responsiveness, proliferation, differentiation, and all aspects of reproduction. Reliable retinoid quantification requires suitable handling and, in some cases, resolution of geometric isomers that have different biological activities. Here we describe procedures for reliable and accurate quantification of retinoids, including detailed descriptions for handling retinoids, preparing standard solutions, collecting samples and harvesting tissues, extracting samples, resolving isomers, and detecting with high sensitivity. Sample-specific strategies are provided for optimizing quantification. Approaches to evaluate assay performance also are provided. Retinoid assays described here for mice also are applicable to other organisms including zebrafish, rat, rabbit, and human and for cells in culture. Retinoid quantification, especially that of retinoic acid, should provide insight into many diseases, including Alzheimer’s disease, type 2 diabetes, obesity, and cancer. PMID:20552420

  2. Requirements and System Architecture for a Healthcare Wireless Body Area Network

    DEFF Research Database (Denmark)

    Hansen, Finn Overgaard; Toftegaard, Thomas Skjødeberg

    Wireless body area networks enable new opportunities for personal healthcare monitoring and personal healthcare applications. This paper presents a comprehensive set of requirements and challenges for building a wireless body area network to support diverse user groups and a corresponding set of healthcare applications. Based on the identified requirements, the paper presents an architecture for a wireless body area network and describes how this architecture is connected to an existing IT infrastructure supporting healthcare at home. Finally, the paper presents our on-going research with development...

  3. Motivation and requirements for determining a Network Warfare Capability

    CSIR Research Space (South Africa)

    Veerasamy, N

    2010-06-01

    Full Text Available technology for their day-to-day operation (Panda, Giordano 1999). A report by the Defense Science Board in the United States of America (USA) explains that challenges in the present age include information assurance, and that this requires new... References: International Journal of Information Security, vol. 1, no. 1, pp. 3-13; Munro, N. 1996, "Sketching a national Information Warfare defense plan", Communications of the ACM, vol. 39, no. 11, pp. 15-17; Panda, B. & Giordano, J. 1999, "Defensive Information...

  4. A spatially distributed isotope sampling network in a snow-dominated catchment for the quantification of snow meltwater

    Science.gov (United States)

    Rücker, Andrea; Boss, Stefan; Von Freyberg, Jana; Zappa, Massimiliano; Kirchner, James

    2017-04-01

    In mountainous catchments with seasonal snowpacks, river discharge in downstream valleys is largely sustained by snowmelt in spring and summer. Future climate warming will likely reduce snow volumes and lead to earlier and faster snowmelt in such catchments. This, in turn, may increase the risk of summer low flows and hydrological droughts. Improved runoff predictions are thus required in order to adapt water management to future climatic conditions and to assure the availability of fresh water throughout the year. However, a detailed understanding of the hydrological processes is crucial to obtain robust predictions of river streamflow. This in turn requires fingerprinting source areas of streamflow, tracing water flow pathways, and measuring timescales of catchment storage, using tracers such as stable water isotopes (18O, 2H). For this reason, we have established an isotope sampling network in the Alptal, a snowmelt-dominated catchment (46.4 km2) in central Switzerland, as part of the SREP-Drought project (Snow Resources and the Early Prediction of hydrological DROUGHT in mountainous streams). Precipitation and snow cores are analyzed for their isotopic signature at daily or weekly intervals. Three-week bulk samples of precipitation are also collected on a transect along the Alptal valley bottom, and along an elevational transect perpendicular to the Alptal valley axis. Streamwater samples are taken at the catchment outlet as well as in two small nested sub-catchments (< 2 km2). In order to capture the isotopic signature of naturally occurring snowmelt, a fully automatic snow lysimeter system was developed, which also facilitates real-time monitoring of snowmelt events, system status and environmental conditions (air and soil temperature). Three lysimeter systems were installed within the catchment, at one forested site and two open-field sites at different elevations, and have been operational since November 2016. We will present the isotope time series from our

  5. Quantification of free plus conjugated indoleacetic acid in arabidopsis requires correction for the nonenzymatic conversion of indolic nitriles.

    Science.gov (United States)

    Llić, N; Normanly, J; Cohen, J D

    1996-07-01

    The genetic advantages to the use of Arabidopsis thaliana mutants for the study of auxin metabolism have previously been partially offset by the complexity of indolic metabolism in this plant and by the lack of proper methods. To address some of these problems, we developed isotopic labeling methods to determine amounts and examine the metabolism of indolic compounds in Arabidopsis. Isolation and identification of endogenous indole-3-acetonitrile (IAN; a possible precursor of the auxin indole-3-acetic acid [IAA]) was carried out under mild conditions, thus proving its natural occurrence. We describe here the synthesis of 13C1-labeled IAN and its utility in the gas chromatography-mass spectrometry quantification of endogenous IAN levels. We also quantified the nonenzymatic conversion of IAN to IAA under conditions used to hydrolyze IAA conjugates. 13C1-labeled IAN was used to assess the contribution of IAN to measured IAA following hydrolysis of IAA conjugates. We studied the stability and breakdown of the indolic glucosinolate glucobrassicin, which is known to be present in Arabidopsis. This is potentially an important concern when using Arabidopsis for studies of indolic biochemistry, since the levels of indolic auxins and auxin precursors are well below the levels of the indolic glucosinolates. We found that under conditions of extraction and base hydrolysis, formation of IAA from glucobrassicin was negligible.
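    The correction logic described above amounts to simple arithmetic: the labelled IAN spike reveals what fraction of IAN is nonenzymatically converted to IAA during hydrolysis, and that contribution of endogenous IAN is subtracted from the measured IAA. The Python sketch below uses invented amounts purely to illustrate the bookkeeping.

        # Correcting measured IAA for nonenzymatic conversion of IAN, tracked
        # via a 13C1-IAN spike. All amounts are hypothetical.
        def corrected_iaa(measured_iaa_ng, endogenous_ian_ng,
                          labeled_iaa_from_spike_ng, labeled_ian_spike_ng):
            # Conversion fraction observed for the labelled spike under hydrolysis
            conversion = labeled_iaa_from_spike_ng / labeled_ian_spike_ng
            # Remove the IAA that actually originated from endogenous IAN
            return measured_iaa_ng - conversion * endogenous_ian_ng

        print(corrected_iaa(measured_iaa_ng=12.0, endogenous_ian_ng=40.0,
                            labeled_iaa_from_spike_ng=0.5, labeled_ian_spike_ng=10.0))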

  6. [Research Networks in Public Health: Requirements for Sustainability and Effectiveness - a Sociological Perspective].

    Science.gov (United States)

    Pfaff, Holger; Ohlmeier, Silke

    2017-11-01

    The Public Health White Paper draws up a vision of public health as a living, decentralized network that can help improve the health of the population in a sustained fashion. However, the central question remains open as to which prerequisites public health networks should fulfill in order to be effective in the long term. The aim of this paper is to provide a sociological view of the issue and offer some discussion ideas. Parsons' structural functionalism leads to the thesis that science networks in public health require structures that ensure that the 4 basic functions of viable social networks - (1) adaptation, (2) goal attainment, (3) integration and (4) latent pattern maintenance - are fulfilled. On this theoretical basis, suggestions are made to establish functional formal structures in public health networks. © Georg Thieme Verlag KG Stuttgart · New York.

  7. Molecular Quantification of the Florida Red Tide Dinoflagellate and the Development of Low Cost, Volunteer-attended Handheld Sensor Networks

    Science.gov (United States)

    Nieuwkerk, D.; Ulrich, R. M.; Paul, J. H.; Hubbard, K.; Kirkpatrick, B. A.; Fanara, T. A.; Bruzek, S.; Hoeglund, A.

    2016-02-01

    Harmful algal blooms of the dinoflagellate Karenia brevis can cause massive fish-kills and marine mammal mortalities, as well as impact human health via the consumption of brevetoxin-contaminated shellfish and the inhalation of aerosolized toxins. There is a strong effort to predict human health impacts by monitoring the bloom stages of K. brevis, and to prevent health impacts by closing shellfish beds when K. brevis cell concentrations reach toxic levels. The current standard method for quantifying K. brevis is by microscopic enumeration, which requires taxonomic expertise to discern K. brevis cells from other Karenia species as well as a long turnover time to generate data, which limits the number of water samples that can be processed. This EPA-funded study compared a variety of technologies against the current standard (microscopic counts) to quantify the number of K. brevis cells per liter in the water column. Results of this study showed a strong correlation between Real Time Nucleic Acid Sequence-Based Amplification (RT-NASBA) and enumeration by microscopy performed by members of the Florida Fish and Wildlife Research Institute, who are responsible for such monitoring. We are adapting the bench-top RT-NASBA assay to the AmpliFire platform (a handheld sensor that can be used in the field), for point of need K. brevis detection. These handheld sensors will be used by a trained volunteer network and government agencies (FWC, NOAA, and Mote Marine Lab.) to quantify K. brevis cells in the water column of core Gulf of Mexico sites; the results from these sensors will be reported back to the GCOOS observation systems to provide real-time monitoring of K. brevis counts. The real-time information will allow agencies to better monitor fishery closures and predict human health impacts of harmful algal blooms, because a larger number of samples can be processed each week, as the NASBA process removes the rate-limiting step of microscope time.

  8. Quantification of changes in language-related brain areas in autism spectrum disorders using large-scale network analysis.

    Science.gov (United States)

    Goch, Caspar J; Stieltjes, Bram; Henze, Romy; Hering, Jan; Poustka, Luise; Meinzer, Hans-Peter; Maier-Hein, Klaus H

    2014-05-01

    Diagnosis of autism spectrum disorders (ASD) is difficult, as symptoms vary greatly and are difficult to quantify objectively. Recent work has focused on the assessment of non-invasive diffusion tensor imaging-based biomarkers that reflect the microstructural characteristics of neuronal pathways in the brain. While tractography-based approaches typically analyze specific structures of interest, a graph-based large-scale network analysis of the connectome can yield comprehensive measures of larger-scale architectural patterns in the brain. Commonly applied global network indices, however, do not provide any specificity with respect to functional areas or anatomical structures. The aim of this work was to assess the concept of network centrality as a tool to perform locally specific analysis without disregarding the global network architecture, and to compare it to other popular network indices. We create connectome networks from fiber tractographies and parcellations of the human brain and compute global network indices as well as local indices for Wernicke's Area, Broca's Area and the Motor Cortex. Our approach was evaluated on 18 children suffering from ASD and 18 typically developed controls using magnetic resonance imaging-based cortical parcellations in combination with diffusion tensor imaging tractography. We show that the network centrality of Wernicke's area is significantly reduced in children suffering from ASD, while the other investigated measures show no comparable alterations. This could reflect the reduced capacity for comprehension of language in ASD. The betweenness centrality could potentially be an important metric in the development of future diagnostic tools in the clinical context of ASD diagnosis. Our results further demonstrate the applicability of large-scale network analysis tools in the domain of region-specific analysis with a potential application in many different psychological disorders.
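    The node-level metric highlighted above is straightforward to compute once a connectome graph exists. The Python sketch below builds a toy weighted graph with networkx and reports the betweenness centrality of a single node; the regions and weights are invented stand-ins for a parcellation-based connectome, not the study's data.

        # Betweenness centrality of one region in a toy weighted connectome.
        import networkx as nx

        G = nx.Graph()
        edges = [("Wernicke", "Broca", 0.9), ("Wernicke", "MotorCortex", 0.4),
                 ("Broca", "MotorCortex", 0.7), ("Wernicke", "ParietalA", 0.6),
                 ("ParietalA", "OccipitalA", 0.8), ("OccipitalA", "MotorCortex", 0.3)]
        for u, v, w in edges:
            # For shortest-path metrics a strong connection should mean a short
            # path, so edge "length" is taken as the inverse of the weight.
            G.add_edge(u, v, weight=w, length=1.0 / w)

        bc = nx.betweenness_centrality(G, weight="length", normalized=True)
        print("betweenness of the Wernicke node:", round(bc["Wernicke"], 3))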

  9. Key Technologies in the Context of Future Networks: Operational and Management Requirements

    Directory of Open Access Journals (Sweden)

    Lorena Isabel Barona López

    2016-12-01

    Full Text Available The concept of Future Networks is based on the premise that current infrastructures require enhanced control, service customization, self-organization and self-management capabilities to meet the new needs in a connected society, especially of mobile users. In order to provide a high-performance mobile system, three main fields must be improved: radio, network, and operation and management. In particular, operation and management capabilities are intended to enable business agility and operational sustainability, where the addition of new services does not imply an excessive increase in capital or operational expenditures. In this context, a set of key enabling technologies has emerged in order to aid in this field. Concepts such as Software Defined Networking (SDN), Network Function Virtualization (NFV) and Self-Organized Networks (SON) are pushing traditional systems towards the next 5G network generation. This paper presents an overview of the current status of these promising technologies and ongoing works to fulfill the operational and management requirements of mobile infrastructures. This work also details the use cases and the challenges, taking into account not only SDN, NFV, cloud computing and SON but also other paradigms.

  10. Specification of requirements for health social-network as Personal Health Record (PHR system

    Directory of Open Access Journals (Sweden)

    Mozhgan Tanhapour

    2015-09-01

    Conclusion: The proposed set of requirements is qualitatively compared with those of other similar systems. Using the proposed health social network that provides PHR capabilities for its users will have an undeniable impact on the quality and efficiency of patient-centered care, and will play an important role in improving the health of society.

  11. Quantification of Environmental Flow Requirements to Support Ecosystem Services of Oasis Areas: A Case Study in Tarim Basin, Northwest China

    Directory of Open Access Journals (Sweden)

    Jie Xue

    2015-10-01

    Full Text Available Recently, a wide range of quantitative research on the identification of environmental flow requirements (EFRs) has been conducted. However, little focus is given to EFRs to maintain multiple ecosystem services in oasis areas. The present study quantifies the EFRs in oasis areas of Tarim Basin, Xinjiang, Northwest China on the basis of three ecosystem services: (1) maintenance of riverine ecosystem health, (2) assurance of the stability of oasis–desert ecotone and riparian (Tugai) forests, and (3) restoration of oasis–desert ecotone groundwater. The identified consumptive and non-consumptive water requirements are used to quantify and determine the EFRs in Qira oasis by employing the summation and compatibility rules (maximum principle). Results indicate that the annual maximum, medium, and minimum EFRs are 0.752 × 10^8, 0.619 × 10^8, and 0.516 × 10^8 m3, respectively, which account for 58.75%, 48.36%, and 40.29% of the natural river runoff. The months between April and October are identified as the most important periods to maintain the EFRs. Moreover, the water requirement for groundwater restoration of the oasis–desert ecotone accounts for a large proportion, representing 48.27%, 42.32%, and 37.03% of the total EFRs at maximum, medium, and minimum levels, respectively. Therefore, to allocate the integrated EFRs, focus should be placed on the water demand of the desert vegetation's groundwater restoration, which is crucial for maintaining desert vegetation to prevent sandstorms and soil erosion. This work provides a reference to quantify the EFRs of oasis areas in arid regions.
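    The combination rules named above (summation for consumptive demands, the compatibility/maximum principle for in-channel demands that can share the same water) reduce to a short calculation. The Python sketch below shows the bookkeeping with invented monthly figures; the categories and numbers are placeholders, not the study's data.

        # Combining consumptive (additive) and non-consumptive (maximum) monthly
        # requirements into an integrated EFR. Figures are hypothetical, 10^8 m3.
        consumptive = {
            "ecotone_groundwater": [0.02, 0.03, 0.05, 0.06, 0.05, 0.03],
            "riparian_forest":     [0.01, 0.02, 0.03, 0.04, 0.03, 0.02],
        }
        non_consumptive = {
            "river_health":        [0.03, 0.03, 0.04, 0.05, 0.04, 0.03],
            "sediment_flushing":   [0.02, 0.02, 0.05, 0.04, 0.03, 0.02],
        }

        months = range(6)
        efr = [sum(v[m] for v in consumptive.values()) +
               max(v[m] for v in non_consumptive.values())
               for m in months]
        print("monthly EFR (10^8 m3):", [round(x, 3) for x in efr])
        print("total over these months:", round(sum(efr), 3))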

  12. Minimum requirements for predictive pore-network modeling of solute transport in micromodels

    Science.gov (United States)

    Mehmani, Yashar; Tchelepi, Hamdi A.

    2017-10-01

    Pore-scale models are now an integral part of analyzing fluid dynamics in porous materials (e.g., rocks, soils, fuel cells). Pore network models (PNM) are particularly attractive due to their computational efficiency. However, quantitative predictions with PNM have not always been successful. We focus on single-phase transport of a passive tracer under advection-dominated regimes and compare PNM with high-fidelity direct numerical simulations (DNS) for a range of micromodel heterogeneities. We identify the minimum requirements for predictive PNM of transport. They are: (a) flow-based network extraction, i.e., discretizing the pore space based on the underlying velocity field, (b) a Lagrangian (particle tracking) simulation framework, and (c) accurate transfer of particles from one pore throat to the next. We develop novel network extraction and particle tracking PNM methods that meet these requirements. Moreover, we show that certain established PNM practices in the literature can result in first-order errors in modeling advection-dominated transport. They include: all Eulerian PNMs, networks extracted based on geometric metrics only, and flux-based nodal transfer probabilities. Preliminary results for a 3D sphere pack are also presented. The simulation inputs for this work are made public to serve as a benchmark for the research community.

  13. Effects of Energy Storage Systems Grid Code Requirements on Interface Protection Performances in Low Voltage Networks

    Directory of Open Access Journals (Sweden)

    Fabio Bignucolo

    2017-03-01

    Full Text Available The ever-growing penetration of local generation in distribution networks and the large diffusion of energy storage systems (ESSs) foreseen in the near future are bound to affect the effectiveness of interface protection systems (IPSs), with a negative impact on the safety of medium voltage (MV) and low voltage (LV) systems. With the aim of preserving main network stability, international and national grid connection codes have been updated recently. Consequently, distributed generators (DGs) and storage units are increasingly called upon to provide stabilizing functions according to local voltage and frequency. This can be achieved by suitably controlling the electronic power converters interfacing small-scale generators and storage units to the network. The paper focuses on the regulating functions required of storage units by grid codes currently in force in the European area. Indeed, even if such regulating actions would enable local units to participate in network stability under normal steady-state operating conditions, it is shown through dynamic simulations that they may increase the risk of unintentional islanding occurrence. This means that dangerous operating conditions may arise in LV networks when dispersed generators and storage systems are present, even if all the end-users are compliant with currently applied connection standards.
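    The stabilizing functions referred to above are typically piecewise-linear droop curves: active power curtailed as frequency rises, reactive power injected or absorbed as voltage leaves a dead band. The Python sketch below illustrates the shape of such curves with placeholder thresholds and slopes; they are not the values of any specific national grid code.

        # Generic P(f) and Q(V) droop curves for a converter-interfaced unit.
        def active_power_setpoint(freq_hz, p_nominal_kw, f_start=50.2, f_max=51.5):
            """Reduce active power linearly above f_start, reaching zero at f_max."""
            if freq_hz <= f_start:
                return p_nominal_kw
            if freq_hz >= f_max:
                return 0.0
            return p_nominal_kw * (f_max - freq_hz) / (f_max - f_start)

        def reactive_power_setpoint(v_pu, q_max_kvar, deadband=0.03, droop=0.07):
            """Absorb/inject reactive power proportionally outside a voltage dead band."""
            if abs(v_pu - 1.0) <= deadband:
                return 0.0
            error = (v_pu - 1.0) - deadband * (1 if v_pu > 1.0 else -1)
            q = -error / droop * q_max_kvar            # hypothetical droop slope
            return max(-q_max_kvar, min(q_max_kvar, q))

        print(active_power_setpoint(50.8, 10.0))       # partially curtailed output
        print(reactive_power_setpoint(1.06, 5.0))      # absorbing reactive power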

  14. Ordinance on technical requirements and conditions of use of optical distribution networks of the Croatian regulatory agency - Analysis and outlook

    OpenAIRE

    Brusić, Igor; Kittl, Jörg; Ruhle, Ernst-Olav; Žuti, Vladimir

    2011-01-01

    In September 2010 the Croatian regulatory agency (HAKOM) put in force the ordinance on technical requirements and conditions of use of optical distribution networks. With this ordinance the Croatian regulatory agency is looking beyond current practice by proposing a rather technical approach to the rollout of optical access networks, which will have a significant influence on the deployment of next generation access networks (NGAN) in Croatia. The ordinance stipulates the requirements that have to be fulf...

  15. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders.

    Science.gov (United States)

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    There is currently limited information on best practices for the development of governance requirements for distributed research networks (DRNs), an emerging model that promotes clinical data reuse and improves timeliness of comparative effectiveness research. Much of the existing information is based on a single type of stakeholder such as researchers or administrators. This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups. This approach is illustrated with an example from the Scalable National Network for Effectiveness Research, which resulted in 91 requirements. These requirements were analyzed against the Fair Information Practice Principles (FIPPs) and Health Insurance Portability and Accountability Act (HIPAA) protected versus non-protected health information. The requirements addressed all FIPPs, showing how a DRN's technical infrastructure is able to fulfill HIPAA regulations, protect privacy, and provide a trustworthy platform for research. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Impact of Distributed Generation Grid Code Requirements on Islanding Detection in LV Networks

    Directory of Open Access Journals (Sweden)

    Fabio Bignucolo

    2017-01-01

    The recent growing diffusion of dispersed generation in low voltage (LV) distribution networks is entailing new rules to make local generators participate in network stability. Consequently, national and international grid codes, which define the connection rules for stability and safety of electrical power systems, have been updated, requiring distributed generators and electrical storage systems to supply stabilizing contributions. In this scenario, specific attention has to be paid to the uncontrolled islanding issue, since the currently required anti-islanding protection systems, based on relays locally measuring voltage and frequency, could no longer be suitable. In this paper, the effects of different LV generators' stabilizing functions on interface protection performance are analysed. The study takes into account existing requirements, such as the generators' active power regulation (according to the measured frequency) and reactive power regulation (depending on the locally measured voltage). In addition, the paper focuses on other stabilizing features under discussion, derived from the medium voltage (MV) distribution network grid codes or proposed in the literature, such as fast voltage support (FVS) and inertia emulation. The stabilizing functions have been reproduced in the DIgSILENT PowerFactory 2016 software environment, making use of its native programming language. They are then tested both alone and together, aiming at a comprehensive analysis of their impact on anti-islanding protection effectiveness. Through dynamic simulations in several network scenarios, the paper demonstrates the detrimental impact that such stabilizing regulations may have on loss-of-main protection effectiveness, leading to an increased risk of unintentional islanding.
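    The active power versus frequency and reactive power versus voltage regulations referred to above can be pictured as simple droop characteristics. The sketch below is purely illustrative: the piecewise-linear shapes and every threshold value are assumptions for demonstration, not the settings prescribed by the grid codes or used in the paper's PowerFactory models.

```python
# Illustrative droop-style stabilizing functions for an LV generator/storage
# converter. All thresholds are assumed example values, not grid-code figures.

def active_power_vs_frequency(f_hz, p_nominal, f_start=50.2, f_max=51.5):
    """Curtail active power linearly once frequency exceeds f_start (Hz)."""
    if f_hz <= f_start:
        return p_nominal
    if f_hz >= f_max:
        return 0.0
    return p_nominal * (f_max - f_hz) / (f_max - f_start)

def reactive_power_vs_voltage(v_pu, q_max, v_low=0.92, v_high=1.08):
    """Inject reactive power at low voltage, absorb it at high voltage."""
    if v_pu <= v_low:
        return q_max                       # full capacitive support
    if v_pu >= v_high:
        return -q_max                      # full inductive absorption
    # linear droop between the two thresholds (zero at the midpoint)
    return -q_max * (2.0 * (v_pu - v_low) / (v_high - v_low) - 1.0)

if __name__ == "__main__":
    print(active_power_vs_frequency(50.8, p_nominal=10.0))   # partially curtailed
    print(reactive_power_vs_voltage(1.05, q_max=4.0))        # absorbing vars
```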

  17. The intermediate filament network protein, vimentin, is required for parvoviral infection

    Energy Technology Data Exchange (ETDEWEB)

    Fay, Nikta; Panté, Nelly, E-mail: pante@zoology.ubc.ca

    2013-09-15

    Intermediate filaments (IFs) have recently been shown to serve novel roles during infection by many viruses. Here we have begun to study the role of IFs during the early steps of infection by the parvovirus minute virus of mice (MVM). We found that during early infection with MVM, after endosomal escape, the vimentin IF network was considerably altered, yielding collapsed immunofluorescence staining near the nuclear periphery. Furthermore, we found that vimentin plays an important role in the life cycle of MVM. The number of cells that successfully replicated MVM was reduced in infected cells in which the vimentin network was genetically or pharmacologically modified; viral endocytosis, however, remained unaltered. Perinuclear accumulation of MVM-containing vesicles was reduced in cells lacking vimentin. Our data suggest that vimentin is required for the MVM life cycle, possibly playing a dual role: (1) following MVM escape from endosomes and (2) during endosomal trafficking of MVM. - Highlights: • MVM infection changes the distribution of the vimentin network to perinuclear regions. • Disrupting the vimentin network with acrylamide decreases MVM replication. • MVM replication is significantly reduced in vimentin-null cells. • Distribution of MVM-containing vesicles is affected in MVM infected vimentin-null cells.

  18. An Analysis of Database Replication Technologies with Regard to Deep Space Network Application Requirements

    Science.gov (United States)

    Connell, Andrea M.

    2011-01-01

    The Deep Space Network (DSN) has three communication facilities which handle telemetry, commands, and other data relating to spacecraft missions. The network requires these three sites to share data with each other and with the Jet Propulsion Laboratory for processing and distribution. Many database management systems have replication capabilities built in, which means that data updates made at one location will be automatically propagated to other locations. This project examines multiple replication solutions, looking for stability, automation, flexibility, performance, and cost. After comparing these features, Oracle Streams is chosen for closer analysis. Two Streams environments are configured - one with a Master/Slave architecture, in which a single server is the source for all data updates, and the second with a Multi-Master architecture, in which updates originating from any of the servers will be propagated to all of the others. These environments are tested for data type support, conflict resolution, performance, changes to the data structure, and behavior during and after network or server outages. Through this experimentation, it is determined which requirements of the DSN can be met by Oracle Streams and which cannot.

  19. Improving mine-mill water network design by reducing water and energy requirements

    Energy Technology Data Exchange (ETDEWEB)

    Gunson, A.J.; Klein, B.; Veiga, M. [British Columbia Univ., Vancouver, BC (Canada). Norman B. Keevil Inst. of Mining Engineering

    2010-07-01

    Mining is an energy-intensive industry, and most processing mills use wet processes to separate minerals from ore. This paper discussed water reduction, reuse and recycling options for a mining and mill operation network. A mine water network design was then proposed in order to identify and reduce water and system energy requirements. This included (1) a description of site water balance, (2) a description of potential water sources, (3) a description of water consumers, (4) the construction of energy requirement matrices, and (5) the use of linear programming to reduce energy requirements. The design was used to determine a site water balance as well as to specify major water consumers during mining and mill processes. Potential water supply combinations, water metering technologies, and recycling options were evaluated in order to identify the most efficient energy and water use combinations. The method was used to highlight potential energy savings from the integration of heating and cooling systems with plant water systems. 43 refs., 4 tabs., 3 figs.
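    The linear programming step (5) can be illustrated with a small transportation-style model: flows from each water source to each consumer are chosen so that supply limits and demands are respected while total pumping and treatment energy is minimised. The sources, consumers, and energy coefficients below are invented example data, not figures from the study.

```python
# Minimal sketch of energy-minimising water allocation via linear programming.
import numpy as np
from scipy.optimize import linprog

# energy (kWh per m3) to deliver water from each source to each consumer
energy = np.array([[0.8, 1.4, 2.0],    # pit dewatering
                   [1.1, 0.6, 1.7],    # tailings reclaim
                   [2.5, 2.2, 1.2]])   # fresh water intake
supply = np.array([500.0, 800.0, 1000.0])   # m3/h available from each source
demand = np.array([600.0, 400.0, 300.0])    # m3/h required by each consumer

n_s, n_c = energy.shape
c = energy.flatten()                         # objective: total energy per hour

# each source may not deliver more than its supply
A_ub = np.zeros((n_s, n_s * n_c))
for i in range(n_s):
    A_ub[i, i * n_c:(i + 1) * n_c] = 1.0

# each consumer must receive exactly its demand
A_eq = np.zeros((n_c, n_s * n_c))
for j in range(n_c):
    A_eq[j, j::n_c] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=supply, A_eq=A_eq, b_eq=demand, bounds=(0, None))
print(res.x.reshape(n_s, n_c))   # m3/h on each source-consumer link
print(res.fun)                   # minimised energy requirement (kWh/h)
```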

  20. Building a transnational biosurveillance network using semantic web technologies: requirements, design, and preliminary evaluation.

    Science.gov (United States)

    Teodoro, Douglas; Pasche, Emilie; Gobeill, Julien; Emonet, Stéphane; Ruch, Patrick; Lovis, Christian

    2012-05-29

    Antimicrobial resistance has reached globally alarming levels and is becoming a major public health threat. Lack of efficacious antimicrobial resistance surveillance systems was identified as one of the causes of increasing resistance, due to the lag time between new resistances and alerts to care providers. Several initiatives to track drug resistance evolution have been developed. However, no effective real-time and source-independent antimicrobial resistance monitoring system is available publicly. To design and implement an architecture that can provide real-time and source-independent antimicrobial resistance monitoring to support transnational resistance surveillance. In particular, we investigated the use of a Semantic Web-based model to foster integration and interoperability of interinstitutional and cross-border microbiology laboratory databases. Following the agile software development methodology, we derived the main requirements needed for effective antimicrobial resistance monitoring, from which we proposed a decentralized monitoring architecture based on the Semantic Web stack. The architecture uses an ontology-driven approach to promote the integration of a network of sentinel hospitals or laboratories. Local databases are wrapped into semantic data repositories that automatically expose local computing-formalized laboratory information in the Web. A central source mediator, based on local reasoning, coordinates the access to the semantic end points. On the user side, a user-friendly Web interface provides access and graphical visualization to the integrated views. We designed and implemented the online Antimicrobial Resistance Trend Monitoring System (ARTEMIS) in a pilot network of seven European health care institutions sharing 70+ million triples of information about drug resistance and consumption. Evaluation of the computing performance of the mediator demonstrated that, on average, query response time was a few seconds (mean 4.3, SD 0.1 × 10
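    As a rough illustration of how a client or the central mediator might query one of the semantic end points described above, the sketch below issues a SPARQL aggregation query with the SPARQLWrapper library. The endpoint URL, prefix, and property names are placeholders, not the actual ARTEMIS ontology.

```python
# Hedged sketch: counting resistant isolates per organism/antibiotic pair
# from a hypothetical sentinel-site SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://sentinel.example.org/sparql")   # placeholder endpoint
sparql.setQuery("""
    PREFIX ex: <http://example.org/amr#>
    SELECT ?organism ?antibiotic (COUNT(?isolate) AS ?resistant)
    WHERE {
        ?isolate ex:organism   ?organism ;
                 ex:testedWith ?antibiotic ;
                 ex:phenotype  ex:Resistant .
    }
    GROUP BY ?organism ?antibiotic
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["organism"]["value"], row["antibiotic"]["value"], row["resistant"]["value"])
```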

  1. A Deployment Strategy for Multiple Types of Requirements in Wireless Sensor Networks.

    Science.gov (United States)

    Liu, Xuxun

    2015-10-01

    Node deployment is one of the most crucial issues in wireless sensor networks, and it is of practical importance to complete the deployment task under multiple types of application requirements. In this paper, we propose a deployment strategy for multiple types of requirements to solve the problem of deterministic and grid-based deployment. This deployment strategy consists of three deployment algorithms, each targeting a different deployment objective. First, instead of a general random search, we put forward a deterministic search mechanism and the related cost-based deployment algorithm, in which nodes are assigned to different groups that are connected by near-shortest paths, achieving a significant reduction in path length and deployment cost. Second, rather than ordinary non-directional deployment, we present a notion of counterflow and the related delay-based deployment algorithm, in which the profit in deployment cost and the loss in transmission delay are evaluated, achieving a substantial reduction in transmission path length and transmission delay. Third, instead of conventional uneven deployment based on the distances to the sink, we propose a concept of node load level and the related lifetime-based deployment algorithm, in which node distribution is determined by the actual load levels and extra nodes are deployed only where they are really needed, contributing to a marked improvement in network lifetime. Finally, extensive simulations are used to test and verify the effectiveness and advantages of the proposed algorithms.
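    The cost-based idea of connecting node groups to the sink over near-shortest grid paths can be sketched in a few lines. The grid size, sink position, and group positions below are invented, and the simple Manhattan routing stands in for the paper's deterministic search mechanism.

```python
# Toy sketch: deploy relay nodes along near-shortest grid paths from each
# group to the sink and count how many grid positions end up occupied.
GRID = 10                             # 10 x 10 candidate positions
SINK = (0, 0)
GROUPS = [(9, 2), (4, 8), (7, 7)]     # representative node of each group

def near_shortest_path(src, dst):
    """Manhattan path: walk along x first, then along y."""
    path, (x, y) = [], src
    while x != dst[0]:
        x += 1 if dst[0] > x else -1
        path.append((x, y))
    while y != dst[1]:
        y += 1 if dst[1] > y else -1
        path.append((x, y))
    return path

deployed = set(GROUPS) | {SINK}
for group in GROUPS:
    deployed.update(near_shortest_path(group, SINK))

print(f"nodes deployed: {len(deployed)} of {GRID * GRID} grid positions")
```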

  2. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although ever since the inception of quantum mechanics a century ago, superposition has occupied a central and pivotal place, rigorous and systematic studies of the quantification issue have attracted significant interests only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182
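    The abstract does not reproduce the paper's figure of merit. As a point of reference only, a widely used basis-dependent quantifier of superposition (coherence) relative to a fixed basis is the l1-norm of the off-diagonal part of the density matrix; the measure introduced in the paper may differ from this.

```latex
% Reference quantifier (not necessarily the paper's own measure): for a state
% \rho and a fixed orthonormal basis \{|i\rangle\},
\[
  C_{\ell_1}(\rho) \;=\; \sum_{i \neq j} \bigl|\rho_{ij}\bigr| ,
  \qquad \rho_{ij} = \langle i \,|\, \rho \,|\, j \rangle ,
\]
% which vanishes exactly when \rho is diagonal in the chosen basis, i.e. when
% the state contains no superposition of the basis states.
```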

  3. Use of genetic algorithms and neural networks to optimize well locations and reduce well requirements

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, V.M.; Rogers, L.L.

    1994-09-01

    A goal common to both the environmental and petroleum industries is the reduction of costs and/or enhancement of profits by the optimal placement of extraction/production and injection wells. Formal optimization techniques facilitate this goal by searching among the potentially infinite number of possible well patterns for ones that best meet engineering and economic objectives. However, if a flow and transport model or reservoir simulator is being used to evaluate the effectiveness of each network of wells, the computational resources required to apply most optimization techniques to real field problems become prohibitively expensive. This paper describes a new approach to field-scale, nonlinear optimization of well patterns that is intended to make such searches tractable on conventional computer equipment. Artificial neural networks (ANNs) are trained to predict selected information that would normally be calculated by the simulator. The ANNs are then embedded in a variant of the genetic algorithm (GA), which drives the search for increasingly effective well patterns and uses the ANNs, rather than the original simulator, to evaluate the effectiveness of each pattern. Once the search is complete, the ANNs are reused in sensitivity studies to give additional information on the performance of individual or clusters of wells.
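    The coupling between the genetic algorithm and the surrogate networks can be sketched as follows. The expensive simulator is replaced here by a cheap analytic stand-in, the well pattern is encoded as continuous coordinates, and all dimensions and hyper-parameters are illustrative assumptions rather than details taken from the paper.

```python
# Schematic GA with an ANN surrogate standing in for the flow simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_WELLS = 3                                   # a pattern = (x, y) per well

def simulator(pattern):
    """Stand-in for the flow/transport simulator (expensive in reality)."""
    wells = pattern.reshape(N_WELLS, 2)
    spread = np.mean([np.linalg.norm(a - b) for a in wells for b in wells])
    return spread - 0.5 * np.mean(np.linalg.norm(wells - 0.5, axis=1))

# 1. train the ANN surrogate on a modest number of "simulator" runs
X_train = rng.random((200, 2 * N_WELLS))
y_train = np.array([simulator(x) for x in X_train])
ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# 2. genetic algorithm that scores candidate patterns with the surrogate
pop = rng.random((50, 2 * N_WELLS))
for generation in range(40):
    fitness = ann.predict(pop)
    parents = pop[np.argsort(fitness)[-10:]]             # keep the best 10
    children = parents[rng.integers(0, 10, 50)].copy()   # resample parents
    children += rng.normal(0.0, 0.05, children.shape)    # mutate
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmax(ann.predict(pop))]
print("surrogate-optimal pattern:", best.reshape(N_WELLS, 2))
print("checked against the stand-in simulator:", simulator(best))
```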

  4. A Survey of System Architecture Requirements for Health Care-Based Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Abraham O. Fapojuwo

    2011-05-01

    Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of an HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.

  5. A survey of system architecture requirements for health care-based wireless sensor networks.

    Science.gov (United States)

    Egbogah, Emeka E; Fapojuwo, Abraham O

    2011-01-01

    Wireless Sensor Networks (WSNs) have emerged as a viable technology for a vast number of applications, including health care applications. To best support these health care applications, WSN technology can be adopted for the design of practical Health Care WSNs (HCWSNs) that support the key system architecture requirements of reliable communication, node mobility support, multicast technology, energy efficiency, and the timely delivery of data. Work in the literature mostly focuses on the physical design of the HCWSNs (e.g., wearable sensors, in vivo embedded sensors, et cetera). However, work towards enhancing the communication layers (i.e., routing, medium access control, et cetera) to improve HCWSN performance is largely lacking. In this paper, the information gleaned from an extensive literature survey is shared in an effort to fortify the knowledge base for the communication aspect of HCWSNs. We highlight the major currently existing prototype HCWSNs and also provide the details of their routing protocol characteristics. We also explore the current state of the art in medium access control (MAC) protocols for WSNs, for the purpose of seeking an energy efficient solution that is robust to mobility and delivers data in a timely fashion. Furthermore, we review a number of reliable transport layer protocols, including a network coding based protocol from the literature, that are potentially suitable for delivering end-to-end reliability of data transmitted in HCWSNs. We identify the advantages and disadvantages of the reviewed MAC, routing, and transport layer protocols as they pertain to the design and implementation of an HCWSN. The findings from this literature survey will serve as a useful foundation for designing a reliable HCWSN and also contribute to the development and evaluation of protocols for improving the performance of future HCWSNs. Open issues that require further investigation are highlighted.

  6. Group Centric Networking: Addressing Information Sharing Requirements at the Tactical Edge

    Science.gov (United States)

    2016-04-10

    Only fragments of the report text were captured in this record. The recoverable points are: traditional IP routing functions well with fixed infrastructure and stable links, where routes are maintained to enable all-to-all unicast connections, whereas cellular networks...; each collaborative group can tune the network for its tolerances; under the heading "Dynamic Network Adaptation", the fragment contrasts routing parameters in traditional IP networks with GCN, which enables dynamic network adaptation on a per-group basis, giving greater fidelity in tactical mission planning; a final fragment, under the heading "Group ID Mapping", breaks off at "In GCN, interest".

  7. What is 5G? Emerging 5G Mobile Services and Network Requirements

    Directory of Open Access Journals (Sweden)

    Heejung Yu

    2017-10-01

    In this paper, emerging 5G mobile services are investigated and categorized from the perspective of end-users rather than service providers. The development of 5G mobile services is based on an intensive analysis of the global trends in mobile services. Additionally, several indispensable service requirements, essential for realizing the service scenarios presented, are described. To illustrate the changes in societies and in daily life in the 5G era, five megatrends are examined: the explosion of mobile data traffic, the rapid increase in connected devices, everything on the cloud, hyper-realistic media for convergence services, and knowledge as a service enabled by big-data analysis. Based on such trends, we classify the new 5G services into five categories in terms of the end-users' experience: immersive 5G services, intelligent 5G services, omnipresent 5G services, autonomous 5G services and public 5G services. Moreover, several 5G service scenarios in each service category are presented, and essential technical requirements for realizing the aforementioned 5G services are suggested, along with a competitiveness analysis of the 5G service, device, and network industries and the current state of 5G technologies.

  8. Functional dissection of a neuronal network required for cuticle tanning and wing expansion in Drosophila.

    Science.gov (United States)

    Luan, Haojiang; Lemon, William C; Peabody, Nathan C; Pohl, Jascha B; Zelensky, Paul K; Wang, Ding; Nitabach, Michael N; Holmes, Todd C; White, Benjamin H

    2006-01-11

    A subset of Drosophila neurons that expresses crustacean cardioactive peptide (CCAP) has been shown previously to make the hormone bursicon, which is required for cuticle tanning and wing expansion after eclosion. Here we present evidence that CCAP-expressing neurons (NCCAP) consist of two functionally distinct groups, one of which releases bursicon into the hemolymph and the other of which regulates its release. The first group, which we call NCCAP-c929, includes 14 bursicon-expressing neurons of the abdominal ganglion that lie within the expression pattern of the enhancer-trap line c929-Gal4. We show that suppression of activity within this group blocks bursicon release into the hemolymph together with tanning and wing expansion. The second group, which we call NCCAP-R, consists of NCCAP neurons outside the c929-Gal4 pattern. Because suppression of synaptic transmission and protein kinase A (PKA) activity throughout NCCAP, but not in NCCAP-c929, also blocks tanning and wing expansion, we conclude that neurotransmission and PKA are required in NCCAP-R to regulate bursicon secretion from NCCAP-c929. Enhancement of electrical activity in NCCAP-R by expression of the bacterial sodium channel NaChBac also blocks tanning and wing expansion and leads to depletion of bursicon from central processes. NaChBac expression in NCCAP-c929 is without effect, suggesting that the abdominal bursicon-secreting neurons are likely to be silent until stimulated to release the hormone. Our results suggest that NCCAP form an interacting neuronal network responsible for the regulation and release of bursicon and suggest a model in which PKA-mediated stimulation of inputs to normally quiescent bursicon-expressing neurons activates release of the hormone.

  9. A network of cis and trans interactions is required for ParB spreading

    Science.gov (United States)

    Song, Dan; Rodrigues, Kristen; Graham, Thomas G.W.

    2017-01-01

    Abstract Most bacteria utilize the highly conserved parABS partitioning system in plasmid and chromosome segregation. This system depends on a DNA-binding protein ParB, which binds specifically to the centromere DNA sequence parS and to adjacent non-specific DNA over multiple kilobases in a phenomenon called spreading. Previous single-molecule experiments in combination with genetic, biochemical and computational studies have argued that ParB spreading requires cooperative interactions between ParB dimers including DNA bridging and possible nearest-neighbor interactions. A recent structure of a ParB homolog co-crystallized with parS revealed that ParB dimers tetramerize to form a higher order nucleoprotein complex. Using this structure as a guide, we systematically ablated a series of proposed intermolecular interactions in the Bacillus subtilis ParB (BsSpo0J) and characterized their effect on spreading using both in vivo and in vitro assays. In particular, we measured DNA compaction mediated by BsSpo0J using a recently developed single-molecule method to simultaneously visualize protein binding on single DNA molecules and changes in DNA conformation without protein labeling. Our results indicate that residues acting as hubs for multiple interactions frequently led to the most severe spreading defects when mutated, and that a network of both cis and trans interactions between ParB dimers is necessary for spreading. PMID:28407103

  10. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    Science.gov (United States)

    Kevin McCluskey; Katharine B. Barker; Hazel A. Barton; Kyria Boundy-Mills; Daniel R. Brown; Jonathan A. Coddington; Kevin Cook; Philippe Desmeth; David Geiser; Jessie A. Glaeser; Stephanie Greene; Seogchan Kang; Michael W. Lomas; Ulrich Melcher; Scott E. Miller; David R. Nobles; Kristina J. Owens; Jerome H. Reichman; Manuela da Silva; John Wertz; Cale Whitworth; David Smith; Steven E. Lindow

    2017-01-01

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives...

  11. The US Culture Collection Network responding to the requirements of the Nagoya Protocol on Access and Benefit Sharing

    Science.gov (United States)

    The US Culture Collection Network held a meeting to share information about how collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Bio...

  12. Models and Tabu Search Metaheuristics for Service Network Design with Asset-Balance Requirements

    DEFF Research Database (Denmark)

    Pedersen, Michael Berliner; Crainic, T.G.; Madsen, Oli B.G.

    2009-01-01

    design model, a generalization of the capacitated multicommodity network design model generally used in service network design applications. Both arc- and cycle-based formulations for the new model are presented. The paper also proposes a tabu search metaheuristic framework for the arc-based formulation...

  13. Dendritic nonlinearities reduce network size requirements and mediate ON and OFF states of persistent activity in a PFC microcircuit model.

    Directory of Open Access Journals (Sweden)

    Athanasia Papoutsi

    2014-07-01

    Technological advances have unraveled the existence of small clusters of co-active neurons in the neocortex. The functional implications of these microcircuits are in large part unexplored. Using a heavily constrained biophysical model of an L5 PFC microcircuit, we recently showed that these structures act as tunable modules of persistent activity, the cellular correlate of working memory. Here, we investigate the mechanisms that underlie persistent activity emergence (ON) and termination (OFF) and search for the minimum network size required for expressing these states within physiological regimes. We show that (a) NMDA-mediated dendritic spikes gate the induction of persistent firing in the microcircuit. (b) The minimum network size required for persistent activity induction is inversely proportional to the synaptic drive of each excitatory neuron. (c) Relaxation of connectivity and synaptic delay constraints eliminates the gating effect of NMDA spikes, albeit at a cost of much larger networks. (d) Persistent activity termination by increased inhibition depends on the strength of the synaptic input and is negatively modulated by dADP. (e) Slow synaptic mechanisms and network activity contain predictive information regarding the ability of a given stimulus to turn ON and/or OFF persistent firing in the microcircuit model. Overall, this study zooms out from dendrites to cell assemblies and suggests a tight interaction between dendritic non-linearities and network properties (size/connectivity) that may facilitate the short-term memory function of the PFC.

  14. Effects of Energy Storage Systems Grid Code Requirements on Interface Protection Performances in Low Voltage Networks

    National Research Council Canada - National Science Library

    Fabio Bignucolo; Alberto Cerretti; Massimiliano Coppo; Andrea Savio; Roberto Turri

    2017-01-01

    ...), with negative impact on the safety of medium voltage (MV) and low voltage (LV) systems. With the scope of preserving the main network stability, international and national grid connection codes have been updated recently...

  15. Grand Challenges: Science, Engineering, and Societal Advances, Requiring Networking and Information Technology Research and Development

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — ...the U.S. Government makes critical decisions about appropriate investments in IT R and D to help society forward both socially and economically. To inform that...

  16. WIRELESS SENSOR NETWORKS – ARCHITECTURE, SECURITY REQUIREMENTS, SECURITY THREATS AND ITS COUNTERMEASURES

    OpenAIRE

    Ranjit Panigrahi; Kalpana Sharma; Ghose, M. K.

    2013-01-01

    Wireless Sensor Networks (WSNs) have a huge range of applications, such as battlefield, surveillance, emergency rescue operation and smart home technology. Apart from inherent constraints such as limited memory and energy resources, sensor nodes deployed in hostile environmental conditions are vulnerable to physical capture and other security threats. These constraints make security a major challenge for researchers in the field of computer networking. T...

  17. Actin-myosin network is required for proper assembly of influenza virus particles

    Energy Technology Data Exchange (ETDEWEB)

    Kumakura, Michiko; Kawaguchi, Atsushi, E-mail: ats-kawaguchi@md.tsukuba.ac.jp; Nagata, Kyosuke, E-mail: knagata@md.tsukuba.ac.jp

    2015-02-15

    Actin filaments are known to play a central role in cellular dynamics. After polymerization of actin, various actin-crosslinking proteins, including non-muscle myosin II, facilitate the formation of spatially organized actin filament networks. The actin-myosin network is highly expanded beneath the plasma membrane. The genome of influenza virus (vRNA) replicates in the cell nucleus. Newly synthesized vRNAs are then exported from the nucleus to the cytoplasm as ribonucleoprotein complexes (vRNPs) and transported to the region beneath the plasma membrane, where virus particles assemble. Here, we found that inhibiting actin-myosin network formation tends to reduce the virus titer and causes the HA viral spike protein to aggregate on the plasma membrane. These results indicate that the actin-myosin network plays an important role in virus formation. - Highlights: • The actin-myosin network is important for influenza virus production. • HA forms aggregations at the plasma membrane in the presence of blebbistatin. • M1 is recruited to the budding site through the actin-myosin network.

  18. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement.

    Science.gov (United States)

    Ferri, Giovane Lopes; Chaves, Gisele de Lorena Diniz; Ribeiro, Glaydston Mattos

    2015-06-01

    This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers to the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW simultaneously involving legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries that have realities similar to those in Brazil of accelerated urbanisation without adequate planning for solid waste management, added to the strong presence of waste pickers that, through the characteristic of social vulnerability, must be included in the system. In addition to the theoretical contribution to the reverse logistics network problem, this study aids in decision-making for public managers who have limited technical and administrative capacities for the management of solid wastes. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Difference in the distribution pattern of substrate enzymes in the metabolic network of Escherichia coli, according to chaperonin requirement.

    Science.gov (United States)

    Takemoto, Kazuhiro; Niwa, Tatsuya; Taguchi, Hideki

    2011-06-24

    Chaperonins are important in living systems because they play a role in the folding of proteins. Earlier comprehensive analyses identified substrate proteins for which folding requires the chaperonin GroEL/GroES (GroE) in Escherichia coli, and they revealed that many chaperonin substrates are metabolic enzymes. This result implies the importance of chaperonins in metabolism. However, the relationship between chaperonins and metabolism is still unclear. We investigated the distribution of chaperonin substrate enzymes in the metabolic network using network analysis techniques as a first step towards revealing this relationship, and found that as chaperonin requirement increases, substrate enzymes are more laterally distributed in the metabolic network. In addition, comparative genome analysis showed that the chaperonin-dependent substrates were less conserved, suggesting that these substrates were acquired later on in evolutionary history. This result implies the expansion of metabolic networks due to this chaperonin, and it supports the existing hypothesis of acceleration of evolution by chaperonins. The distribution of chaperonin substrate enzymes in the metabolic network is inexplicable because it does not seem to be associated with individual protein features such as protein abundance, which has been observed characteristically in chaperonin substrates in previous works. However, it becomes clear by considering this expansion process due to chaperonin. This finding provides new insights into metabolic evolution and the roles of chaperonins in living systems.

  20. Difference in the distribution pattern of substrate enzymes in the metabolic network of Escherichia coli, according to chaperonin requirement

    Directory of Open Access Journals (Sweden)

    Niwa Tatsuya

    2011-06-01

    Abstract Background Chaperonins are important in living systems because they play a role in the folding of proteins. Earlier comprehensive analyses identified substrate proteins for which folding requires the chaperonin GroEL/GroES (GroE) in Escherichia coli, and they revealed that many chaperonin substrates are metabolic enzymes. This result implies the importance of chaperonins in metabolism. However, the relationship between chaperonins and metabolism is still unclear. Results We investigated the distribution of chaperonin substrate enzymes in the metabolic network using network analysis techniques as a first step towards revealing this relationship, and found that as chaperonin requirement increases, substrate enzymes are more laterally distributed in the metabolic network. In addition, comparative genome analysis showed that the chaperonin-dependent substrates were less conserved, suggesting that these substrates were acquired later on in evolutionary history. Conclusions This result implies the expansion of metabolic networks due to this chaperonin, and it supports the existing hypothesis of acceleration of evolution by chaperonins. The distribution of chaperonin substrate enzymes in the metabolic network is inexplicable because it does not seem to be associated with individual protein features such as protein abundance, which has been observed characteristically in chaperonin substrates in previous works. However, it becomes clear by considering this expansion process due to chaperonin. This finding provides new insights into metabolic evolution and the roles of chaperonins in living systems.

  1. Estimation of the Required Modeling Depth for the Simulation of Cable Switching in a Cable-based Network

    DEFF Research Database (Denmark)

    Silva, Filipe Faria Da; Bak, Claus Leth; Balle Holst, Per

    2012-01-01

    The simulation of an electromagnetic transient is only as good as the model's data and the level of detail put into the modeling. One parameter with influence on the results is the size of the modeled area around the switched-on line. If the area is too small, the results are inaccurate; if the area is too large, the simulation requires a long period of time and numerical problems are more likely to exist. This paper proposes a method that can be used to estimate the depth of the modeling area using the grid layout, which can be obtained directly from a PSS/E file or equivalent. The simulation of electromagnetic transients in cable-based networks requires larger computational effort than in an equivalent overhead-line (OHL)-based network; therefore, the method is demonstrated for the former, with the cases of OHL-based networks and hybrid cable-OHL networks addressed in a future paper.

  2. Increased signaling entropy in cancer requires the scale-free property of protein interaction networks

    Science.gov (United States)

    Teschendorff, Andrew E.; Banerji, Christopher R. S.; Severini, Simone; Kuehn, Reimer; Sollich, Peter

    2015-01-01

    One of the key characteristics of cancer cells is an increased phenotypic plasticity, driven by underlying genetic and epigenetic perturbations. However, at a systems-level it is unclear how these perturbations give rise to the observed increased plasticity. Elucidating such systems-level principles is key for an improved understanding of cancer. Recently, it has been shown that signaling entropy, an overall measure of signaling pathway promiscuity, and computable from integrating a sample's gene expression profile with a protein interaction network, correlates with phenotypic plasticity and is increased in cancer compared to normal tissue. Here we develop a computational framework for studying the effects of network perturbations on signaling entropy. We demonstrate that the increased signaling entropy of cancer is driven by two factors: (i) the scale-free (or near scale-free) topology of the interaction network, and (ii) a subtle positive correlation between differential gene expression and node connectivity. Indeed, we show that if protein interaction networks were random graphs, described by Poisson degree distributions, that cancer would generally not exhibit an increased signaling entropy. In summary, this work exposes a deep connection between cancer, signaling entropy and interaction network topology. PMID:25919796
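    The sketch below illustrates, in highly simplified form, the kind of computation involved: the sample's expression profile weights the transition probabilities of a random walk on the interaction network, and an averaged local Shannon entropy serves as the promiscuity score. The toy network and expression values are invented, and the published signalling-entropy measure differs in detail (for example, in how local entropies are weighted).

```python
# Simplified signalling-entropy-style score on an expression-weighted network.
import numpy as np
import networkx as nx

# toy protein interaction network with a hub (loosely "scale-free-like")
G = nx.Graph([("HUB", x) for x in ("A", "B", "C", "D")] + [("A", "B"), ("C", "D")])
expression = {"HUB": 5.0, "A": 1.0, "B": 2.0, "C": 0.5, "D": 1.5}

def signalling_entropy(graph, expr):
    entropies = []
    for node in graph:
        nbrs = list(graph.neighbors(node))
        weights = np.array([expr[n] for n in nbrs], dtype=float)
        p = weights / weights.sum()               # random-walk probabilities
        if len(p) > 1:
            entropies.append(-(p * np.log(p)).sum() / np.log(len(p)))
        else:
            entropies.append(0.0)                 # single-neighbour nodes carry no choice
    return float(np.mean(entropies))              # simple unweighted average

print(round(signalling_entropy(G, expression), 3))
```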

  3. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very...... useful in quantifying disease severity, they require an extensive clinical experience and carry a risk of subjectivity. We explore the opportunity to use in vivo near-infrared (NIR) spectra as an objective and noninvasive method for local disease severity assessment in 31 psoriasis patients in whom...... selected plaques were scored clinically. A partial least squares (PLS) regression model was used to analyze and predict the severity scores on the NIR spectra of psoriatic and uninvolved skin. The correlation between predicted and clinically assigned scores was R=0.94 (RMSE=0.96), suggesting that in vivo...
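    The PLS analysis described above can be sketched with scikit-learn as below. The spectra are synthetic stand-ins for NIR measurements and the number of PLS components is an arbitrary choice, so the output only demonstrates the workflow rather than reproducing the reported correlation.

```python
# Cross-validated PLS regression of severity scores on synthetic "NIR spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_patients, n_wavelengths = 31, 200
severity = rng.uniform(0, 12, n_patients)                  # stand-in clinical score
loadings = rng.normal(size=n_wavelengths)
spectra = np.outer(severity, loadings) + rng.normal(0, 0.5, (n_patients, n_wavelengths))

pls = PLSRegression(n_components=4)
predicted = cross_val_predict(pls, spectra, severity, cv=5).ravel()

r = np.corrcoef(severity, predicted)[0, 1]
rmse = np.sqrt(np.mean((severity - predicted) ** 2))
print(f"R = {r:.2f}, RMSE = {rmse:.2f}")
```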

  4. The Wisdom of Networks: A General Adaptation and Learning Mechanism of Complex Systems: The Network Core Triggers Fast Responses to Known Stimuli; Innovations Require the Slow Network Periphery and Are Encoded by Core-Remodeling.

    Science.gov (United States)

    Csermely, Peter

    2018-01-01

    I hypothesize that re-occurring prior experience of complex systems mobilizes a fast response, whose attractor is encoded by their strongly connected network core. In contrast, responses to novel stimuli are often slow and require the weakly connected network periphery. Upon repeated stimulus, peripheral network nodes remodel the network core that encodes the attractor of the new response. This "core-periphery learning" theory reviews and generalizes the heretofore fragmented knowledge on attractor formation by neural networks, periphery-driven innovation, and a number of recent reports on the adaptation of protein, neuronal, and social networks. The core-periphery learning theory may increase our understanding of signaling, memory formation, information encoding and decision-making processes. Moreover, the power of network periphery-related "wisdom of crowds" inventing creative, novel responses indicates that deliberative democracy is a slow yet efficient learning strategy developed as the success of a billion-year evolution. Also see the video abstract here: https://youtu.be/IIjP7zWGjVE. © 2017 WILEY Periodicals, Inc.

  5. Analysis of internal network requirements for the distributed Nordic Tier-1

    DEFF Research Database (Denmark)

    Behrmann, G.; Fischer, L.; Gamst, Mette

    2010-01-01

    build from resources under the control of a number of different national organisations. Being physically distributed makes the design and implementation of the networking infrastructure a challenge. NDGF has its own internal OPN connecting the sites participating in the distributed Tier-1. To assess...... the suitability of the network design and the capacity of the links, we present a model of the internal bandwidth needs for the NDGF Tier-1 and its associated Tier-2 sites. The model takes the different type of workloads into account and can handle different kinds of data management strategies. It has already...... been used to dimension the internal network structure of NDGF. We also compare the model with real life data measurements....

  6. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    (Excerpt from the article full text; no conventional abstract was captured.) ...protocols (Agricultural Research Service 2011) which aim to improve consistency of field measurement and data collection for soil carbon sequestration and soil nitrous oxide fluxes. Often these national-level activity data and emissions factors are the basis for regional and smaller-scale applications. Such data are used for model-based estimates of changes in GHGs at a project or regional level (Olander et al 2011). To complement national data for regional-, landscape-, or field-level applications, new data are often collected through farmer knowledge or records and field sampling. Ideally such data could be collected in a standardized manner, perhaps through some type of crowd-sourcing model, to improve regional- and national-level data, as well as to improve the consistency of locally collected data. Data can also be collected by companies working with agricultural suppliers and in-country networks, within efforts aimed at understanding firm and product (supply-chain) sustainability and risks (FAO 2009). Such data may feed into various certification processes or reporting requirements from buyers; unfortunately, these data are likely proprietary, so a new process is needed to aggregate and share private data in a way that would not raise competitive concerns, so that they could complement or supplement national data and add value. A number of papers in this focus issue discuss issues surrounding quantification methods and systems at large scales (global and national levels), while others explore landscape- and field-scale approaches; a few explore the intersection of top-down and bottom-up data measurement and modeling approaches. On the agricultural greenhouse gas quantification project and the ERL focus issue: important land management decisions are often made with poor or few data, especially in developing countries. Current systems for quantifying GHG emissions are inadequate in most low-income countries, due to a lack of funding, human resources, and infrastructure. Most non-Annex 1 countries...

  7. Wireless Sensor Network for Helicopter Rotor Blade Vibration Monitoring: Requirements Definition and Technological Aspects

    NARCIS (Netherlands)

    Sanchez Ramirez, Andrea; Das, Kallol; Loendersloot, Richard; Tinga, Tiedo; Havinga, Paul J.M.; Basu, Biswajit

    The main rotor accounts for the largest vibration source for a helicopter fuselage and its components. However, accurate blade monitoring has been limited due to the practical restrictions on instrumenting rotating blades. The use of Wireless Sensor Networks (WSNs) for real time vibration monitoring

  8. Precision requirements for single-layer feed-forward neural networks

    NARCIS (Netherlands)

    Annema, Anne J.; Hoen, K.; Hoen, Klaas; Wallinga, Hans

    1994-01-01

    This paper presents a mathematical analysis of the effect of limited precision analog hardware for weight adaptation to be used in on-chip learning feedforward neural networks. Easy-to-read equations and simple worst-case estimations for the maximum tolerable imprecision are presented. As an

  9. Parvalbumin-expressing interneurons coordinate hippocampal network dynamics required for memory consolidation

    Science.gov (United States)

    Ognjanovski, Nicolette; Schaeffer, Samantha; Wu, Jiaxing; Mofakham, Sima; Maruyama, Daniel; Zochowski, Michal; Aton, Sara J.

    2017-04-01

    Activity in hippocampal area CA1 is essential for consolidating episodic memories, but it is unclear how CA1 activity patterns drive memory formation. We find that in the hours following single-trial contextual fear conditioning (CFC), fast-spiking interneurons (which typically express parvalbumin (PV)) show greater firing coherence with CA1 network oscillations. Post-CFC inhibition of PV+ interneurons blocks fear memory consolidation. This effect is associated with loss of two network changes associated with normal consolidation: (1) augmented sleep-associated delta (0.5-4 Hz), theta (4-12 Hz) and ripple (150-250 Hz) oscillations; and (2) stabilization of CA1 neurons' functional connectivity patterns. Rhythmic activation of PV+ interneurons increases CA1 network coherence and leads to a sustained increase in the strength and stability of functional connections between neurons. Our results suggest that immediately following learning, PV+ interneurons drive CA1 oscillations and reactivation of CA1 ensembles, which directly promotes network plasticity and long-term memory formation.

  10. Identification and quantification of industrial grade glycerol adulteration in red wine with fourier transform infrared spectroscopy using chemometrics and artificial neural networks.

    Science.gov (United States)

    Dixit, Vivechana; Tewari, Jagdish C; Cho, Byoung-Kwan; Irudayaraj, Joseph M K

    2005-12-01

    Fourier transform infrared (FT-IR) single bounce micro-attenuated total reflectance (mATR) spectroscopy, combined with multivariate and artificial neural network (ANN) data analysis, was used to determine the adulteration of industrial grade glycerol in selected red wines. Red wine samples were artificially adulterated with industrial grade glycerol over the concentration range from 0.1 to 15% and calibration models were developed and validated. Single bounce infrared spectra of glycerol adulterated wine samples were recorded in the fingerprint mid-infrared region, 900-1500 cm(-1). Partial least squares (PLS) and PLS first derivatives were used for quantitative analysis (r2 = 0.945 to 0.998), while linear discriminant analysis (LDA) and canonical variate analysis (CVA) were used for classification and discrimination. The standard error of prediction (SEP) in the validation set was between 1.44 and 2.25%. Classification of glycerol adulterants in the different brands of red wine using CVA resulted in a classification accuracy in the range between 94 and 98%. Artificial neural network analysis based on the quick back propagation network (BPN) and the radial basis function network (RBFN) algorithms had classification success rates of 93% using BPN and 100% using RBFN. The genetic algorithm network was able to predict the concentrations of glycerol in wine up to an accuracy of r2 = 0.998.
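    A rough sketch of the discrimination step is given below: a small feed-forward network, in the spirit of the paper's back-propagation network, is cross-validated on spectra labelled as pure or glycerol-adulterated. The synthetic spectra, the added absorption band, and the network layout are illustrative assumptions rather than the published setup.

```python
# Cross-validated classification of synthetic mid-IR spectra as pure vs adulterated.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_samples, n_wavenumbers = 120, 150              # 900-1500 cm-1 region, binned
labels = rng.integers(0, 2, n_samples)           # 0 = pure wine, 1 = adulterated
band = np.exp(-0.5 * ((np.arange(n_wavenumbers) - 60) / 5.0) ** 2)   # fake glycerol band
spectra = rng.normal(0, 0.3, (n_samples, n_wavenumbers)) + np.outer(labels, band)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
accuracy = cross_val_score(clf, spectra, labels, cv=5).mean()
print(f"cross-validated classification accuracy: {accuracy:.2%}")
```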

  11. High-dose aspirin is required to influence plasma fibrin network structure in patients with type 1 diabetes.

    Science.gov (United States)

    Tehrani, Sara; Antovic, Aleksandra; Mobarrez, Fariborz; Mageed, Koteiba; Lins, Per-Eric; Adamson, Ulf; Wallén, Håkan N; Jörneskog, Gun

    2012-02-01

    Patients with type 1 diabetes form a less permeable fibrin network, which could contribute to their increased risk of cardiovascular disease (CVD). Low-dose aspirin treatment is the standard in the management of CVD; however, the effect seems reduced in patients with diabetes. We investigated the effects of low- and high-dose aspirin treatment on fibrin network formation in patients with type 1 diabetes (primary aim) and the possible interaction between the treatment effects of aspirin on fibrin network permeability and glycemic control in these patients (secondary aim). Forty-eight patients (24 subjects with good [HbA(1c) 8.4%] glycemic control) were randomly assigned to treatment with 75 or 320 mg/day aspirin during 4 weeks in a crossover fashion. A 4-week washout period separated the treatment periods. The plasma fibrin network was assessed by determination of the permeability coefficient (K(s)). Treatment with 75 mg aspirin did not influence fibrin network permeability (K(s)). However, K(s) increased significantly during treatment with 320 mg aspirin (P = 0.004), and a significant treatment effect was seen compared with treatment with 75 mg aspirin (P = 0.009). The increase in K(s) during high-dose aspirin treatment was significant in patients with poor glycemic control (P = 0.02), whereas K(s) only tended to increase in patients with good glycemic control (P = 0.06). A high dose of aspirin is required to influence fibrin network permeability in patients with type 1 diabetes. The observed lack of effect with low-dose aspirin may contribute to aspirin treatment failure in diabetes.

  12. Reverse logistics network for municipal solid waste management: The inclusion of waste pickers as a Brazilian legal requirement

    Energy Technology Data Exchange (ETDEWEB)

    Ferri, Giovane Lopes, E-mail: giovane.ferri@aluno.ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Diniz Chaves, Gisele de Lorena, E-mail: gisele.chaves@ufes.br [Department of Engineering and Technology, Federal University of Espírito Santo – UFES, Rodovia BR 101 Norte, Km 60, Bairro Litorâneo, São Mateus, ES, 29.932-540 (Brazil); Ribeiro, Glaydston Mattos, E-mail: glaydston@pet.coppe.ufrj.br [Transportation Engineering Programme, Federal University of Rio de Janeiro – UFRJ, Centro de Tecnologia, Bloco H, Sala 106, Cidade Universitária, Rio de Janeiro, 21949-900 (Brazil)

    2015-06-15

    Highlights: • We propose a reverse logistics network for MSW involving waste pickers. • A generic facility location mathematical model was validated in a Brazilian city. • The results enable to predict the capacity for screening and storage centres (SSC). • We minimise the costs for transporting MSW with screening and storage centres. • The use of SSC can be a potential source of revenue and a better use of MSW. - Abstract: This study proposes a reverse logistics network involved in the management of municipal solid waste (MSW) to solve the challenge of economically managing these wastes considering the recent legal requirements of the Brazilian Waste Management Policy. The feasibility of the allocation of MSW material recovery facilities (MRF) as intermediate points between the generators of these wastes and the options for reuse and disposal was evaluated, as well as the participation of associations and cooperatives of waste pickers. This network was mathematically modelled and validated through a scenario analysis of the municipality of São Mateus, which makes the location model more complete and applicable in practice. The mathematical model allows the determination of the number of facilities required for the reverse logistics network, their location, capacities, and product flows between these facilities. The fixed costs of installation and operation of the proposed MRF were balanced with the reduction of transport costs, allowing the inclusion of waste pickers to the reverse logistics network. The main contribution of this study lies in the proposition of a reverse logistics network for MSW simultaneously involving legal, environmental, economic and social criteria, which is a very complex goal. This study can guide practices in other countries that have realities similar to those in Brazil of accelerated urbanisation without adequate planning for solid waste management, added to the strong presence of waste pickers that, through the
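    A facility-location model of the kind described above can be sketched with the PuLP modelling library: binary variables select which candidate recovery facilities to open, continuous variables carry the waste flows, and the objective balances fixed installation and operation costs against transport costs. All districts, candidate sites, capacities, and costs below are invented example data, not values from the São Mateus case study.

```python
# Toy fixed-charge facility-location model for MSW recovery facilities.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

districts = {"D1": 40, "D2": 25, "D3": 35}        # tonnes/day generated
fixed_cost = {"S1": 200, "S2": 260}               # cost/day of keeping a site open
capacity = {"S1": 70, "S2": 60}                   # tonnes/day each site can process
transport = {("D1", "S1"): 3, ("D1", "S2"): 6,    # cost per tonne moved
             ("D2", "S1"): 5, ("D2", "S2"): 2,
             ("D3", "S1"): 4, ("D3", "S2"): 4}

open_site = LpVariable.dicts("open", fixed_cost, cat=LpBinary)
flow = LpVariable.dicts("flow", transport, lowBound=0)

model = LpProblem("mrf_location", LpMinimize)
model += lpSum(fixed_cost[s] * open_site[s] for s in fixed_cost) + \
         lpSum(transport[a] * flow[a] for a in transport)
for d, generated in districts.items():             # all generated waste is collected
    model += lpSum(flow[(d, s)] for s in fixed_cost) == generated
for s in fixed_cost:                               # capacity applies only if the site is open
    model += lpSum(flow[(d, s)] for d in districts) <= capacity[s] * open_site[s]

model.solve()
print({s: int(open_site[s].value()) for s in fixed_cost})
print({a: flow[a].value() for a in transport if flow[a].value()})
```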

  13. Future Defense Communications Agency Network Requirements in Support of Department of Defense Office Automation.

    Science.gov (United States)

    1981-02-01

    Only fragments of the report text were captured in this record. The recoverable points are: the report draws on The Wired Society [1] and The Network Nation [2]; it attempts to foresee trends in present and developing technologies which may impact on...; projections of technology are probably valid for only about five years, based upon what exists now at the forefront [1. Martin, J., The Wired Society, Prentice...]; and a passage on electronic funds transfer notes that EFT refers to the concept of a checkless, cashless transaction mechanism by which funds are transferred.

  14. Physiological and public health basis for assessing micronutrient requirements in children and adolescents. The EURRECA network

    NARCIS (Netherlands)

    Iglesia, I.; Doets, E.L.; Bel-Serrat, S.; Roman, B.; Hermoso, M.; Quintana, X.; Rosario Garcia-Luzardo, Del M.; Santana-Salguero, B.; Garcia-Santos, Y.; Vucic, V.; Frost Andersen, L.; Perez-Rodrigo, C.; Aranceta, J.; Cavelaars, A.J.E.M.; Decsi, T.; Serra-Majem, L.; Gurinovic, M.; Cetin, I.; Koletzko, B.; Moreno, L.A.

    2010-01-01

    This paper provides an overview of the current knowledge relating to the nutritional requirements and corresponding recommended nutrient intake values of children and adolescents for micronutrients and specificities related to these requirements in the course of childhood and adolescence in Europe.

  15. The nutritional requirements of infants. Towards EU alignment of reference values: the EURRECA network

    NARCIS (Netherlands)

    Hermoso, M.; Tabacchi, G.; Iglesia-Altaba, I.; Bel-Serrat, S.; Moreno-Aznar, L.A.; Garcia-Santos, Y.; Rosario Garcia-Luzardo, Del M.; Santana-Salguero, B.; Pena-Quintana, L.; Serra-Majem, L.; Hall Moran, V.; Dykes, F.; Decsi, T.; Benetou, V.; Plada, M.; Trichopoulou, A.; Raats, M.M.; Doets, E.L.; Berti, C.; Cetin, I.; Koletzko, B.

    2010-01-01

    This paper presents a review of the current knowledge regarding the macro- and micronutrient requirements of infants and discusses issues related to these requirements during the first year of life. The paper also reviews the current reference values used in European countries and the methodological

  16. Requirements of the integration of renewable energy into network charge regulation. Proposals for the further development of the network charge system. Final report; Anforderungen der Integration der erneuerbaren Energien an die Netzentgeltregulierung. Vorschlaege zur Weiterentwicklung des Netzentgeltsystems. Endbericht

    Energy Technology Data Exchange (ETDEWEB)

    Friedrichsen, Nele; Klobasa, Marian; Marwitz, Simon [Fraunhofer-Institut fuer System- und Innovationsforschung (ISI), Karlsruhe (Germany); Hilpert, Johannes; Sailer, Frank [Stiftung Umweltenergierecht, Wuerzburg (Germany)

    2016-11-15

    In this project we analyzed options for advancing the network tariff system to support the German energy transition. A power system with high shares of renewables requires more flexibility of supply and demand than the traditional system based on centralized fossil power plants. Further, the power networks need to be adjusted and expanded. The transformation should aim at system efficiency, i.e. consider both generation and network development. Network tariffs allocate network costs to network users. They should also provide incentives, e.g. to reduce peak load in periods of network congestion. Inappropriate network tariffs can hinder the provision of flexibility and thereby become a barrier to the system integration of renewables. Against this background, this report presents a systematic review of the German network tariff system and a discussion of several options for adapting it to support the energy transition. The following aspects are analyzed: an adjustment of the privileges for industrial users, to increase potential network benefits and reduce barriers to more market-oriented behaviour; the payments for avoided network charges to distributed generation, which no longer reflect the cost structure of distribution networks; uniform transmission network tariffs as an option for a more appropriate allocation of costs associated with the energy transition; increased standing fees in low voltage networks as an option to increase the contribution of users with self-generation to network financing; and generator tariffs, to allocate a share of network costs to generators and provide incentives for network-oriented siting and/or feed-in.

  17. Data governance requirements for distributed clinical research networks: triangulating perspectives of diverse stakeholders

    National Research Council Canada - National Science Library

    Kim, Katherine K; Browe, Dennis K; Logan, Holly C; Holm, Roberta; Hack, Lori; Ohno-Machado, Lucila

    2014-01-01

    .... This paper reports on a triangulated approach to developing DRN data governance requirements based on a combination of policy analysis with experts, interviews with institutional leaders, and patient focus groups...

  18. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...
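
    As a minimal illustration of the forward propagation of input uncertainty discussed in the book (not an example taken from it), the following Python sketch pushes two uncertain inputs through a simple response function by Monte Carlo sampling and summarizes the induced output uncertainty; the function and the input distributions are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        def response(k, c):
            # hypothetical simulation model: a static response depending on two parameters
            return 1.0 / np.sqrt(k**2 + c**2)

        # uncertain inputs (assumed distributions, purely illustrative)
        k = rng.normal(loc=2.0, scale=0.2, size=100_000)   # stiffness-like parameter
        c = rng.uniform(low=0.4, high=0.6, size=100_000)   # damping-like parameter

        y = response(k, c)                                  # forward propagation by sampling
        lo, hi = np.percentile(y, [2.5, 97.5])
        print(f"mean={y.mean():.4f}  std={y.std(ddof=1):.4f}  95% interval=[{lo:.4f}, {hi:.4f}]")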

  19. An Improved Technique Using Dental Prostheses for Field Quantification of the Force Required by Primates for the Dental Penetration of Fruit.

    Science.gov (United States)

    Barnett, Adrian A; Santos, Paulo J P; Boyle, Sarah A; Bezerra, Bruna M

    2015-01-01

    Tooth morphology is an important determinant of primate diet, setting potential limits on processable item size and material properties. Plunger-based commercial fruit firmness testers (penetrometers) have been used to estimate primate diet item hardness and, by proxy, bite force required for penetration. However, geometric forms and surface areas of penetrometer plungers and primate teeth differ considerably. Accurate bite force estimation is especially important with pitheciine primates as these penetrate fruit pericarps with their canines. To achieve more realistic bite force measures, we replaced a fruit penetrometer's standard plunger with a Cacajao calvus canine prosthesis. We compared indentation and penetration values for Hevea spruceana (Euphorbiaceae; hard-pericarp) and Mauritia flexuosa (Arecaceae; soft-pericarp) fruits (both natural Cacajao foods), and standard penetrometer head and canine prosthesis values for penetrating H. spruceana sulci. Compared to the canine prosthesis, a standard head overestimated the force needed to indent and penetrate H. spruceana fruit by more than twofold and, due to greater width, could not effectively penetrate a sulcus: sulcal penetrability data were easily retrieved with the canine prosthesis. We believe this new approach using dental prostheses has potential in the analysis of primate foraging mechanisms, especially for pitheciines for which canines are of paramount importance in accessing food. © 2015 S. Karger AG, Basel.

  20. Generalizing cell segmentation and quantification.

    Science.gov (United States)

    Wang, Zhenzhou; Li, Haixing

    2017-03-23

    In recent years, microscopy technology for imaging cells has developed greatly and rapidly, and the accompanying demand for automatic segmentation and quantification of the imaged cells keeps growing. After being studied widely in both scientific research and industrial applications for many decades, cell segmentation has achieved great progress, especially in segmenting some specific types of cells, e.g. muscle cells. However, a framework that addresses cell segmentation problems in general is still lacking; instead, different segmentation methods have been proposed for different types of cells, which makes the research effort divergent. In addition, most of the popular segmentation and quantification tools still require a considerable amount of manual work. To make cell segmentation more convergent, we propose in this paper a framework that is able to segment different kinds of cells automatically and robustly. This framework evolves a previously proposed method for segmenting muscle cells and generalizes it to segment and quantify a variety of cell images by adding more union cases. Compared to the previous method, the segmentation and quantification accuracy of the proposed framework is also improved by three novel procedures: (1) a simplified calibration method is added for the threshold selection process; (2) a noise blob filter is proposed to remove noise blobs; and (3) a boundary smoothing filter is proposed to reduce the false seeds produced by iterative erosion. As a result, the quantification accuracy of the proposed framework increases from 93.4 to 96.8% compared with the previous method. The proposed framework also quantifies muscle cells more accurately than two available state-of-the-art methods, and it is able to automatically segment and quantify more types of cells than those methods.
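
    A bare-bones version of the threshold-select / denoise / label pipeline that such a framework automates can be assembled from standard scikit-image primitives, shown below on a synthetic image of bright blobs; the framework's own calibration, noise blob filter and boundary smoothing steps are not reproduced here.

        import numpy as np
        from skimage import filters, measure, morphology

        rng = np.random.default_rng(5)
        # Synthetic "cells": bright disks on a noisy background
        image = rng.normal(loc=0.1, scale=0.05, size=(256, 256))
        yy, xx = np.mgrid[:256, :256]
        for cy, cx in [(60, 60), (150, 90), (200, 200), (80, 190)]:
            image[(yy - cy) ** 2 + (xx - cx) ** 2 < 15 ** 2] += 0.8

        threshold = filters.threshold_otsu(image)                   # global threshold selection
        mask = image > threshold
        mask = morphology.remove_small_objects(mask, min_size=50)   # drop small noise blobs

        labels = measure.label(mask)                                # connected-component labelling
        areas = sorted(region.area for region in measure.regionprops(labels))
        print(f"cells detected: {labels.max()}, areas: {areas}")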

  1. IT Knowledge Requirements Identification in Organizational Networks: Cooperation Between Industrial Organizations and Universities

    Science.gov (United States)

    Rudzajs, Peteris; Kirikova, Marite

    ICT professionals face rapid technology development and changes in design paradigms, methodologies, approaches, and cooperation patterns. These changes affect the relationships between universities that teach ICT disciplines and industrial organizations that develop and use ICT-based products. The required knowledge and skills of university graduates depend mainly on the current industrial situation; graduates therefore have to meet the industry requirements that hold at the time of their graduation, not those stated at the start of their studies. Continuous cooperation between universities and industrial organizations is needed to identify a time- and situation-dependent set of knowledge requirements, which leads to situation-aware, industry-acknowledged, balanced and productive ICT study programs. This chapter proposes information systems solutions supporting cooperation between the university and industrial organizations with respect to curriculum development in the ICT area.

  2. Game Theory Meets Wireless Sensor Networks Security Requirements and Threats Mitigation: A Survey

    Directory of Open Access Journals (Sweden)

    Mohamed S. Abdalzaher

    2016-06-01

    We present a study of using game theory for protecting wireless sensor networks (WSNs) from selfish behavior or malicious nodes. Due to scalability, low complexity and disseminated nature of WSNs, malicious attacks can be modeled effectively using game theory. In this study, we survey the different game-theoretic defense strategies for WSNs. We present a taxonomy of the game theory approaches based on the nature of the attack, whether it is caused by an external attacker or it is the result of an internal node acting selfishly or maliciously. We also present a general trust model using game theory for decision making. We, finally, identify the significant role of evolutionary games for WSNs security against intelligent attacks; then, we list several prospect applications of game theory to enhance the data trustworthiness and node cooperation in different WSNs.
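
    The survey above is conceptual; as a toy illustration of the kind of game-theoretic model it discusses (not the authors' formulation), the sketch below enumerates the pure-strategy Nash equilibria of a small packet-forwarding game between a sensor node and a potentially selfish neighbour, with hypothetical payoffs.

        import numpy as np

        # Payoff matrices (hypothetical): rows = strategy of node A, columns = strategy of node B.
        # Strategy 0 = forward packets (cooperate), strategy 1 = drop packets (act selfishly).
        A = np.array([[3.0, 0.0],
                      [4.0, 1.0]])   # payoffs to node A
        B = np.array([[3.0, 4.0],
                      [0.0, 1.0]])   # payoffs to node B

        def pure_nash(A, B):
            """Return all pure-strategy Nash equilibria as (row, column) strategy pairs."""
            equilibria = []
            for i in range(A.shape[0]):
                for j in range(A.shape[1]):
                    a_best = A[i, j] >= A[:, j].max()   # A cannot gain by deviating
                    b_best = B[i, j] >= B[i, :].max()   # B cannot gain by deviating
                    if a_best and b_best:
                        equilibria.append((i, j))
            return equilibria

        print(pure_nash(A, B))   # [(1, 1)]: mutual dropping, a prisoner's-dilemma-like outcome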

  3. Game Theory Meets Wireless Sensor Networks Security Requirements and Threats Mitigation: A Survey.

    Science.gov (United States)

    Abdalzaher, Mohamed S; Seddik, Karim; Elsabrouty, Maha; Muta, Osamu; Furukawa, Hiroshi; Abdel-Rahman, Adel

    2016-06-29

    We present a study of using game theory for protecting wireless sensor networks (WSNs) from selfish behavior or malicious nodes. Due to scalability, low complexity and disseminated nature of WSNs, malicious attacks can be modeled effectively using game theory. In this study, we survey the different game-theoretic defense strategies for WSNs. We present a taxonomy of the game theory approaches based on the nature of the attack, whether it is caused by an external attacker or it is the result of an internal node acting selfishly or maliciously. We also present a general trust model using game theory for decision making. We, finally, identify the significant role of evolutionary games for WSNs security against intelligent attacks; then, we list several prospect applications of game theory to enhance the data trustworthiness and node cooperation in different WSNs.

  4. Game Theory Meets Wireless Sensor Networks Security Requirements and Threats Mitigation: A Survey

    Science.gov (United States)

    Abdalzaher, Mohamed S.; Seddik, Karim; Elsabrouty, Maha; Muta, Osamu; Furukawa, Hiroshi; Abdel-Rahman, Adel

    2016-01-01

    We present a study of using game theory for protecting wireless sensor networks (WSNs) from selfish behavior or malicious nodes. Due to scalability, low complexity and disseminated nature of WSNs, malicious attacks can be modeled effectively using game theory. In this study, we survey the different game-theoretic defense strategies for WSNs. We present a taxonomy of the game theory approaches based on the nature of the attack, whether it is caused by an external attacker or it is the result of an internal node acting selfishly or maliciously. We also present a general trust model using game theory for decision making. We, finally, identify the significant role of evolutionary games for WSNs security against intelligent attacks; then, we list several prospect applications of game theory to enhance the data trustworthiness and node cooperation in different WSNs. PMID:27367700

  5. Using a Bayesian network to clarify areas requiring research in a host-pathogen system.

    Science.gov (United States)

    Bower, D S; Mengersen, K; Alford, R A; Schwarzkopf, L

    2017-12-01

    Bayesian network analyses can be used to interactively change the strength of effect of variables in a model to explore complex relationships in new ways. In doing so, they allow one to identify influential nodes that are not well studied empirically so that future research can be prioritized. We identified relationships in host and pathogen biology to examine disease-driven declines of amphibians associated with amphibian chytrid fungus (Batrachochytrium dendrobatidis). We constructed a Bayesian network consisting of behavioral, genetic, physiological, and environmental variables that influence disease and used them to predict host population trends. We varied the impacts of specific variables in the model to reveal factors with the most influence on host population trend. The behavior of the nodes (the way in which the variables probabilistically responded to changes in states of the parents, which are the nodes or variables that directly influenced them in the graphical model) was consistent with published results. The frog population had a 49% probability of decline when all states were set at their original values, and this probability increased when body temperatures were cold, the immune system was not suppressing infection, and the ambient environment was conducive to growth of B. dendrobatidis. These findings suggest the construction of our model reflected the complex relationships characteristic of host-pathogen interactions. Changes to climatic variables alone did not strongly influence the probability of population decline, which suggests that climate interacts with other factors such as the capacity of the frog immune system to suppress disease. Changes to the adaptive immune system and disease reservoirs had a large effect on the population trend, but there was little empirical information available for model construction. Our model inputs can be used as a base to examine other systems, and our results show that such analyses are useful tools for
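
    The query pattern described above (set parent states, read off the probability of decline) can be sketched directly from a conditional probability table; the two-parent fragment and all probabilities below are made up for illustration and are not the authors' values.

        # Hypothetical fragment: P(decline | body temperature, immune suppression of infection)
        cpt_decline = {
            ("cold", "not_suppressing"): 0.85,
            ("cold", "suppressing"):     0.40,
            ("warm", "not_suppressing"): 0.55,
            ("warm", "suppressing"):     0.15,
        }

        def p_decline(p_temp, p_imm):
            """Marginal probability of decline, summing over the parent states."""
            return sum(p_temp[t] * p_imm[m] * cpt_decline[(t, m)]
                       for t in p_temp for m in p_imm)

        baseline = p_decline({"cold": 0.5, "warm": 0.5},
                             {"suppressing": 0.6, "not_suppressing": 0.4})
        worst = p_decline({"cold": 1.0, "warm": 0.0},            # force cold body temperatures
                          {"suppressing": 0.0, "not_suppressing": 1.0})
        print(f"P(decline) baseline: {baseline:.3f}, cold and unsuppressed: {worst:.3f}")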

  6. ASSESSMENT OF REQUIREMENT OF THE POPULATION IN THE ORGAN TRANSPLANTATION, THE DONOR RESOURCE AND PLANNING OF THE EFFECTIVE NETWORK OF THE MEDICAL ORGANIZATIONS (THE CENTERS OF TRANSPLANTATION)

    National Research Council Canada - National Science Library

    S. V. Gautier; S. M. Khomyakov

    2013-01-01

    Aim. To estimate the requirement of the population of the Russian Federation for organ transplantation and the donor resource, and to propose an approach to planning an effective network of medical organizations...

  7. Laser linewidth requirements and improvements for coherent optical beam forming networks in satellites

    DEFF Research Database (Denmark)

    Gliese, Ulrik Bo; Christensen, Erik Lintz; Stubkjær, Kristian

    1991-01-01

    of the lasers in a satellite transmitter and the phase error at the detector of a microwave differential quaternary phase-shift keying earth station receiver is analyzed. The demands placed on the linewidths from the point of view of phase stability requirements are calculated using quaternary phase...

  8. What is 5G? Emerging 5G Mobile Services and Network Requirements

    OpenAIRE

    Heejung Yu; Howon Lee; Hongbeom Jeon

    2017-01-01

    In this paper, emerging 5G mobile services are investigated and categorized from the perspective of end-users rather than service providers. The development of 5G mobile services is based on an intensive analysis of the global trends in mobile services. Additionally, several indispensable service requirements, essential for realizing the service scenarios presented, are described. To illustrate the changes in societies and in daily life in the 5G era, five megatrends, including the explosion of mobi...

  9. Drosophila spastin regulates synaptic microtubule networks and is required for normal motor function.

    Directory of Open Access Journals (Sweden)

    Nina Tang Sherwood

    2004-12-01

    The most common form of human autosomal dominant hereditary spastic paraplegia (AD-HSP) is caused by mutations in the SPG4 (spastin) gene, which encodes an AAA ATPase closely related in sequence to the microtubule-severing protein Katanin. Patients with AD-HSP exhibit degeneration of the distal regions of the longest axons in the spinal cord. Loss-of-function mutations in the Drosophila spastin gene produce larval neuromuscular junction (NMJ) phenotypes. NMJ synaptic boutons in spastin mutants are more numerous and more clustered than in wild-type, and transmitter release is impaired. spastin-null adult flies have severe movement defects. They do not fly or jump, they climb poorly, and they have short lifespans. spastin hypomorphs have weaker behavioral phenotypes. Overexpression of Spastin erases the muscle microtubule network. This gain-of-function phenotype is consistent with the hypothesis that Spastin has microtubule-severing activity, and implies that spastin loss-of-function mutants should have an increased number of microtubules. Surprisingly, however, we observed the opposite phenotype: in spastin-null mutants, there are fewer microtubule bundles within the NMJ, especially in its distal boutons. The Drosophila NMJ is a glutamatergic synapse that resembles excitatory synapses in the mammalian spinal cord, so the reduction of organized presynaptic microtubules that we observe in spastin mutants may be relevant to an understanding of human Spastin's role in maintenance of axon terminals in the spinal cord.

  10. The ChIP-seq-defined networks of Bcl-3 gene binding support its required role in skeletal muscle atrophy.

    Directory of Open Access Journals (Sweden)

    Robert W Jackman

    NF-κB transcriptional activation is required for skeletal muscle disuse atrophy. We are continuing to study how the activation of NF-κB regulates the genes that encode the protein products that cause atrophy. Using ChIP-sequencing we found that Bcl-3, an NF-κB transcriptional activator required for atrophy, binds to the promoters of a number of genes whose collective function describes two major aspects of muscle wasting. By means of bioinformatics analysis of ChIP-sequencing data we found Bcl-3 to be directing transcription networks of proteolysis and energy metabolism. The proteolytic arm of the Bcl-3 networks includes many E3 ligases associated with proteasomal protein degradation, including that of the N-end rule pathway. The metabolic arm appears to be involved in organizing the change from oxidative phosphorylation to glycolysis in atrophying muscle. For one gene, MuRF1, ChIP-sequencing data identified the location of Bcl-3 and p50 binding in the promoter region, which directed the creation of deletant and base-substitution mutations of MuRF1 promoter constructs to determine the effect on gene transcription. The results provide the first direct confirmation that the NF-κB binding site is involved in the muscle unloading regulation of MuRF1. Finally, we have combined the ChIP-sequencing results with gene expression microarray data from unloaded muscle to map several direct targets of Bcl-3 that are transcription factors whose own targets describe a set of indirect targets for NF-κB in atrophy. ChIP-sequencing provides the first molecular explanation for the finding that Bcl-3 knockout mice are resistant to disuse muscle atrophy. Mapping the transcriptional regulation of muscle atrophy requires an unbiased analysis of the whole genome, which we show is now possible with ChIP-sequencing.

  11. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing.

    Science.gov (United States)

    McCluskey, Kevin; Barker, Katharine B; Barton, Hazel A; Boundy-Mills, Kyria; Brown, Daniel R; Coddington, Jonathan A; Cook, Kevin; Desmeth, Philippe; Geiser, David; Glaeser, Jessie A; Greene, Stephanie; Kang, Seogchan; Lomas, Michael W; Melcher, Ulrich; Miller, Scott E; Nobles, David R; Owens, Kristina J; Reichman, Jerome H; da Silva, Manuela; Wertz, John; Whitworth, Cale; Smith, David

    2017-08-15

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol.

  12. The U.S. Culture Collection Network Responding to the Requirements of the Nagoya Protocol on Access and Benefit Sharing

    Directory of Open Access Journals (Sweden)

    Kevin McCluskey

    2017-08-01

    The U.S. Culture Collection Network held a meeting to share information about how culture collections are responding to the requirements of the recently enacted Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity (CBD). The meeting included representatives of many culture collections and other biological collections, the U.S. Department of State, U.S. Department of Agriculture, Secretariat of the CBD, interested scientific societies, and collection groups, including Scientific Collections International and the Global Genome Biodiversity Network. The participants learned about the policies of the United States and other countries regarding access to genetic resources, the definition of genetic resources, and the status of historical materials and genetic sequence information. Key topics included what constitutes access and how the CBD Access and Benefit-Sharing Clearing-House can help guide researchers through the process of obtaining Prior Informed Consent on Mutually Agreed Terms. U.S. scientists and their international collaborators are required to follow the regulations of other countries when working with microbes originally isolated outside the United States, and the local regulations required by the Nagoya Protocol vary by the country of origin of the genetic resource. Managers of diverse living collections in the United States described their holdings and their efforts to provide access to genetic resources. This meeting laid the foundation for cooperation in establishing a set of standard operating procedures for U.S. and international culture collections in response to the Nagoya Protocol.

  13. Integrated approach for quantification of fractured tight reservoir rocks: Porosity, permeability analyses and 3D fracture network characterisation on fractured dolomite samples

    Science.gov (United States)

    Voorn, Maarten; Barnhoorn, Auke; Exner, Ulrike; Baud, Patrick; Reuschlé, Thierry

    2015-04-01

    Fractured reservoir rocks make up an important part of the hydrocarbon reservoirs worldwide. A detailed analysis of fractures and fracture networks in reservoir rock samples is thus essential to determine the potential of these fractured reservoirs. However, common analyses on drill core and plug samples taken from such reservoirs (including hand specimen analysis, thin section analysis and laboratory porosity and permeability determination) suffer from various problems, such as having a limited resolution, providing only 2D and no internal structure information, being destructive on the samples and/or not being representative for full fracture networks. In this study, we therefore explore the use of an additional method - non-destructive 3D X-ray micro-Computed Tomography (μCT) - to obtain more information on such fractured samples. Seven plug-sized samples were selected from narrowly fractured rocks of the Hauptdolomit formation, taken from wellbores in the Vienna Basin, Austria. These samples span a range of different fault rocks in a fault zone interpretation, from damage zone to fault core. 3D μCT data is used to extract porosity, fracture aperture, fracture density and fracture orientations - in bulk as well as locally. The 3D analyses are complemented with thin sections made to provide some 2D information with a much higher detail than the μCT data. Finally, gas- and water permeability measurements under confining pressure provide an important link (at least in order of magnitude) of the µCT results towards more realistic reservoir conditions. Our results show that 3D μCT can be applied efficiently on plug-sized samples of naturally fractured rocks, and that several important parameters can be extracted. μCT can therefore be a useful addition to studies on such reservoir rocks, and provide valuable input for modelling and simulations. Also permeability experiments under confining pressure provide important additional insights. Combining these and other
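
    As a minimal sketch of one of the bulk quantities mentioned above, the snippet below computes porosity (and a per-slice porosity profile) from a segmented 3D µCT volume with NumPy; the synthetic volume and the grey-value threshold are stand-ins for real data.

        import numpy as np

        rng = np.random.default_rng(1)
        # Stand-in for a reconstructed µCT volume: low grey values correspond to pores/fractures.
        volume = rng.normal(loc=120.0, scale=15.0, size=(64, 128, 128))
        volume[:, 60:64, :] -= 60.0              # synthetic planar "fracture"

        threshold = 90.0                         # assumed grey-value threshold for pore space
        pores = volume < threshold               # binary pore/fracture mask

        porosity = pores.mean()                  # pore voxels / total voxels
        profile = pores.mean(axis=(1, 2))        # porosity per slice along the plug axis

        print(f"bulk porosity: {porosity:.3%}")
        print(f"max slice porosity: {profile.max():.3%} at slice {int(profile.argmax())}")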

  14. Charging and billing in modern communications networks : A comprehensive survey of the state of art and future requirements

    NARCIS (Netherlands)

    Kuehne, Ralph; Huitema, George; Carle, George

    2012-01-01

    In mobile telecommunication networks the trend for an increasing heterogeneity of access networks, the convergence with fixed networks as well as with the Internet are apparent. The resulting future converged network with an expected wide variety of services and a possibly stiff competition between

  15. Charging and billing in modern communications networks : A comprehensive survey of the state of the art and future requirements

    NARCIS (Netherlands)

    Kühne, R.; Huitema, G.B.; Carle, G.

    2012-01-01

    In mobile telecommunication networks the trend for an increasing heterogeneity of access networks, the convergence with fixed networks as well as with the Internet are apparent. The resulting future converged network with an expected wide variety of services and a possibly stiff competition between

  16. Integration of hormonal signaling networks and mobile microRNAs is required for vascular patterning in Arabidopsis roots.

    Science.gov (United States)

    Muraro, Daniele; Mellor, Nathan; Pound, Michael P; Help, Hanna; Lucas, Mikaël; Chopard, Jérôme; Byrne, Helen M; Godin, Christophe; Hodgman, T Charlie; King, John R; Pridmore, Tony P; Helariutta, Ykä; Bennett, Malcolm J; Bishopp, Anthony

    2014-01-14

    As multicellular organisms grow, positional information is continually needed to regulate the pattern in which cells are arranged. In the Arabidopsis root, most cell types are organized in a radially symmetric pattern; however, a symmetry-breaking event generates bisymmetric auxin and cytokinin signaling domains in the stele. Bidirectional cross-talk between the stele and the surrounding tissues involving a mobile transcription factor, SHORT ROOT (SHR), and mobile microRNA species also determines vascular pattern, but it is currently unclear how these signals integrate. We use a multicellular model to determine a minimal set of components necessary for maintaining a stable vascular pattern. Simulations perturbing the signaling network show that, in addition to the mutually inhibitory interaction between auxin and cytokinin, signaling through SHR, microRNA165/6, and PHABULOSA is required to maintain a stable bisymmetric pattern. We have verified this prediction by observing loss of bisymmetry in shr mutants. The model reveals the importance of several features of the network, namely the mutual degradation of microRNA165/6 and PHABULOSA and the existence of an additional negative regulator of cytokinin signaling. These components form a plausible mechanism capable of patterning vascular tissues in the absence of positional inputs provided by the transport of hormones from the shoot.
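
    A toy, single-cell caricature of the mutually inhibitory auxin-cytokinin interaction mentioned above (not the authors' multicellular model; all parameter values are invented) can be written as a pair of ODEs and integrated with SciPy, illustrating how mutual inhibition alone can maintain two complementary stable states.

        from scipy.integrate import solve_ivp

        def mutual_inhibition(t, y, k=2.0, n=4, d=1.0):
            """Auxin (A) and cytokinin (C) signalling, each produced under Hill-type
            repression by the other and degraded linearly (toy model, invented parameters)."""
            A, C = y
            dA = k / (1.0 + C**n) - d * A
            dC = k / (1.0 + A**n) - d * C
            return [dA, dC]

        # Initial conditions biased towards either signal settle into opposite steady states.
        for y0 in ([0.9, 0.1], [0.1, 0.9]):
            sol = solve_ivp(mutual_inhibition, (0.0, 50.0), y0)
            A_end, C_end = sol.y[:, -1]
            print(f"start {y0} -> steady state A={A_end:.2f}, C={C_end:.2f}")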

  17. Integration of hormonal signaling networks and mobile microRNAs is required for vascular patterning in Arabidopsis roots

    KAUST Repository

    Muraro, D.

    2013-12-31

    As multicellular organisms grow, positional information is continually needed to regulate the pattern in which cells are arranged. In the Arabidopsis root, most cell types are organized in a radially symmetric pattern; however, a symmetry-breaking event generates bisymmetric auxin and cytokinin signaling domains in the stele. Bidirectional cross-talk between the stele and the surrounding tissues involving a mobile transcription factor, SHORT ROOT (SHR), and mobile microRNA species also determines vascular pattern, but it is currently unclear how these signals integrate. We use a multicellular model to determine a minimal set of components necessary for maintaining a stable vascular pattern. Simulations perturbing the signaling network show that, in addition to the mutually inhibitory interaction between auxin and cytokinin, signaling through SHR, microRNA165/6, and PHABULOSA is required to maintain a stable bisymmetric pattern. We have verified this prediction by observing loss of bisymmetry in shr mutants. The model reveals the importance of several features of the network, namely the mutual degradation of microRNA165/6 and PHABULOSA and the existence of an additional negative regulator of cytokinin signaling. These components form a plausible mechanism capable of patterning vascular tissues in the absence of positional inputs provided by the transport of hormones from the shoot.

  18. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets.

    Science.gov (United States)

    Levering, Jennifer; Fiedler, Tomas; Sieg, Antje; van Grinsven, Koen W A; Hering, Silvio; Veith, Nadine; Olivier, Brett G; Klett, Lara; Hugenholtz, Jeroen; Teusink, Bas; Kreikemeyer, Bernd; Kummer, Ursula

    2016-08-20

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes M49. Initially, we based the reconstruction on genome annotations and already existing and curated metabolic networks of Bacillus subtilis, Escherichia coli, Lactobacillus plantarum and Lactococcus lactis. This initial draft was manually curated with the final reconstruction accounting for 480 genes associated with 576 reactions and 558 metabolites. In order to constrain the model further, we performed growth experiments of wild type and arcA deletion strains of S. pyogenes M49 in a chemically defined medium and calculated nutrient uptake and production fluxes. We additionally performed amino acid auxotrophy experiments to test the consistency of the model. The established genome-scale model can be used to understand the growth requirements of the human pathogen S. pyogenes and define optimal and suboptimal conditions, but also to describe differences and similarities between S. pyogenes and related lactic acid bacteria such as L. lactis in order to find strategies to reduce the growth of the pathogen and propose drug targets. Copyright © 2016 Elsevier B.V. All rights reserved.
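
    Models of this kind are typically interrogated with flux balance analysis (FBA), a linear program over the stoichiometric matrix. The sketch below runs FBA on a deliberately tiny invented network (three metabolites, five reactions), not on the S. pyogenes reconstruction itself, to show the mechanics.

        import numpy as np
        from scipy.optimize import linprog

        # Toy stoichiometric matrix S: rows = metabolites A, B, C; columns = reactions:
        #   R0: uptake -> A;  R1: A -> B;  R2: A -> C;  R3: B + C -> biomass;  R4: C -> (secreted)
        S = np.array([
            [ 1, -1, -1,  0,  0],   # A
            [ 0,  1,  0, -1,  0],   # B
            [ 0,  0,  1, -1, -1],   # C
        ], dtype=float)

        bounds = [(0, 10),                       # uptake flux limited to 10 units
                  (0, None), (0, None), (0, None), (0, None)]

        objective = np.zeros(S.shape[1])
        objective[3] = -1.0                      # maximize biomass flux R3 (linprog minimizes)

        result = linprog(objective, A_eq=S, b_eq=np.zeros(S.shape[0]),
                         bounds=bounds, method="highs")
        print("biomass flux:", result.x[3])              # 5.0: uptake split evenly between B and C
        print("flux distribution:", np.round(result.x, 3))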

  19. Magnetic Flux Leakage Sensing and Artificial Neural Network Pattern Recognition-Based Automated Damage Detection and Quantification for Wire Rope Non-Destructive Evaluation.

    Science.gov (United States)

    Kim, Ju-Won; Park, Seunghee

    2018-01-02

    In this study, a magnetic flux leakage (MFL) method, known to be a suitable non-destructive evaluation (NDE) method for continuum ferromagnetic structures, was used to detect local damage when inspecting steel wire ropes. To demonstrate the proposed damage detection method through experiments, a multi-channel MFL sensor head was fabricated using a Hall sensor array and magnetic yokes to adapt to the wire rope. To prepare the damaged wire-rope specimens, several different amounts of artificial damages were inflicted on wire ropes. The MFL sensor head was used to scan the damaged specimens to measure the magnetic flux signals. After obtaining the signals, a series of signal processing steps, including the enveloping process based on the Hilbert transform (HT), was performed to better recognize the MFL signals by reducing the unexpected noise. The enveloped signals were then analyzed for objective damage detection by comparing them with a threshold that was established based on the generalized extreme value (GEV) distribution. The detected MFL signals that exceed the threshold were analyzed quantitatively by extracting the magnetic features from the MFL signals. To improve the quantitative analysis, damage indexes based on the relationship between the enveloped MFL signal and the threshold value were also utilized, along with a general damage index for the MFL method. The detected MFL signals for each damage type were quantified by using the proposed damage indexes and the general damage indexes for the MFL method. Finally, an artificial neural network (ANN) based multi-stage pattern recognition method using extracted multi-scale damage indexes was implemented to automatically estimate the severity of the damage. To analyze the reliability of the MFL-based automated wire rope NDE method, the accuracy and reliability were evaluated by comparing the repeatedly estimated damage size and the actual damage size.
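
    A compact sketch of the first two signal-processing stages described above (Hilbert-transform enveloping and a GEV-based detection threshold) is given below for a synthetic single-channel signal; the signal, the defect shape and the 99th-percentile choice are illustrative assumptions, and the ANN classification stage is omitted.

        import numpy as np
        from scipy.signal import hilbert
        from scipy.stats import genextreme

        rng = np.random.default_rng(2)
        n = 4000
        signal = rng.normal(scale=0.2, size=n)              # baseline MFL channel (noise only)
        signal[1800:1830] += np.hanning(30) * 2.0           # synthetic leakage peak at a "defect"

        envelope = np.abs(hilbert(signal))                  # Hilbert-transform envelope

        # Fit a GEV distribution to block maxima of a damage-free reference stretch and
        # place the detection threshold at its 99th percentile (illustrative choice).
        block_maxima = envelope[:1500].reshape(-1, 50).max(axis=1)
        shape, loc, scale = genextreme.fit(block_maxima)
        threshold = genextreme.ppf(0.99, shape, loc=loc, scale=scale)

        detections = np.flatnonzero(envelope > threshold)
        print(f"threshold = {threshold:.3f}")
        print("detected sample range:",
              (int(detections.min()), int(detections.max())) if detections.size else "none")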

  20. Uncertainty quantification of inflow boundary condition and proximal arterial stiffness-coupled effect on pulse wave propagation in a vascular network.

    Science.gov (United States)

    Brault, Antoine; Dumas, Laurent; Lucor, Didier

    2017-10-01

    This work aims at quantifying the effect of inherent uncertainties from cardiac output on the sensitivity of a human compliant arterial network response based on stochastic simulations of a reduced-order pulse wave propagation model. A simple pulsatile output form is used to reproduce the most relevant cardiac features with a minimum number of parameters associated with left ventricle dynamics. Another source of significant uncertainty is the spatial heterogeneity of the aortic compliance, which plays a key role in the propagation and damping of pulse waves generated at each cardiac cycle. A continuous representation of the aortic stiffness in the form of a generic random field of prescribed spatial correlation is then considered. Making use of a stochastic sparse pseudospectral method, we investigate the sensitivity of the pulse pressure and waves reflection magnitude over the arterial tree with respect to the different model uncertainties. Results indicate that uncertainties related to the shape and magnitude of the prescribed inlet flow in the proximal aorta can lead to potent variation of both the mean value and standard deviation of blood flow velocity and pressure dynamics due to the interaction of different wave propagation and reflection features. Lack of accurate knowledge in the stiffness properties of the aorta, resulting in uncertainty in the pulse wave velocity in that region, strongly modifies the statistical response, with a global increase in the variability of the quantities of interest and a spatial redistribution of the regions of higher sensitivity. These results will provide some guidance in clinical data acquisition and future coupling of arterial pulse wave propagation reduced-order model with more complex beating heart models. Copyright © 2016 John Wiley & Sons, Ltd.

  1. Training a Neural Network Via Large-Eddy Simulation for Autonomous Location and Quantification of CH4 Leaks at Natural Gas Facilities

    Science.gov (United States)

    Sauer, J.; Travis, B. J.; Munoz-Esparza, D.; Dubey, M. K.

    2015-12-01

    Fugitive methane (CH4) leaks from oil and gas production fields are a potentially significant source of atmospheric methane. US DOE's ARPA-E MONITOR program is supporting research to locate and quantify fugitive methane leaks at natural gas facilities in order to achieve a 90% reduction in CH4 emissions. LANL, Aeris and Rice University are developing an LDS (leak detection system) that employs a compact laser absorption methane sensor and sonic anemometer coupled to an artificial neural network (ANN)-based source attribution algorithm. LANL's large-eddy simulation model, HIGRAD, provides high-fidelity simulated wind fields and turbulent CH4 plume dispersion data for various scenarios used in training the ANN. Numerous inverse solution methodologies have been applied over the last decade to the assessment of greenhouse gas emissions. ANN learning is well suited to problems in which the training and observed data are noisy, or correspond to complex sensor data as is typical of meteorological and sensor data over a site. ANNs have been shown to achieve higher accuracy with more efficiency than other inverse modeling approaches in studies at larger scales, in urban environments, over short time scales, and even at small spatial scales for efficient source localization of indoor airborne contaminants. Our ANN is intended to characterize fugitive leaks rapidly, given site-specific, real-time, wind and CH4 concentration time-series data at multiple sensor locations, leading to a minimum time-to-detection and providing a first-order improvement with respect to overall minimization of methane loss. Initial studies with the ANN on a variety of source location, sensor location, and meteorological condition scenarios are presented and discussed.
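
    The sketch below illustrates the general idea of learning a mapping from concentration and wind features to a source location, using a crude synthetic data generator in place of HIGRAD LES output; the sensor layout, feature choice and network size are stand-ins and do not represent the LANL/Aeris/Rice system.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(3)
        sensors = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])   # 4 sensors (m)

        def synth_sample():
            """One training sample: (sensor concentrations + mean wind) -> leak position."""
            src = rng.uniform(5.0, 45.0, size=2)                # unknown leak location
            wind = rng.uniform(-1.0, 1.0, size=2)               # mean wind components
            d = np.linalg.norm(sensors - src, axis=1)
            downwind = (sensors - src) @ wind                   # crude downwind enhancement
            conc = np.exp(-d / 20.0) * (1.0 + 0.3 * np.tanh(downwind / 20.0))
            conc += rng.normal(scale=0.01, size=conc.shape)     # sensor noise
            return np.concatenate([conc, wind]), src

        X, y = map(np.array, zip(*[synth_sample() for _ in range(4000)]))
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        ann.fit(X_tr, y_tr)
        error = np.linalg.norm(ann.predict(X_te) - y_te, axis=1)
        print(f"median localization error: {np.median(error):.1f} m")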

  2. Thermosetting polyimide resin matrix composites with interpenetrating polymer networks for precision foil resistor chips based on special mechanical performance requirements

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y., E-mail: wxy@tju.edu.cn [School of Electronic Information Engineering, Tianjin University, Tianjin 300072 (China); Ma, J.X.; Li, C.G. [School of Electronic Information Engineering, Tianjin University, Tianjin 300072 (China); Wang, H.X. [ZHENGHE electronics Co., Ltd, Jining 272023 (China)

    2014-04-01

    Highlights:
    • Macromolecular materials were chosen to modify thermosetting polyimide (TSPI).
    • The formation of the IPN structure in TSPI composite polymers was discussed.
    • The special mechanical properties required were the main object of study.
    • The desired candidate materials should have proper hardness and toughness.
    • The specific mechanical data are quantitatively determined by experiments.

    Abstract: Based on interpenetrating polymer networks (IPNs), different macromolecular materials such as epoxy, phenolic, and silicone resins were chosen to modify thermosetting polyimide (TSPI) resin to address its performance deficiencies when used to protect precision foil resistor chips. Copolymerization modification, controlled at the curing stage, was used to prepare the TSPI composites considering both performance and process requirements. The mechanical properties related to the trimming process were the main focus of study because of the special requirements on the regularity of the scratch edges produced by a tungsten needle. The analysis of the scratch edges reveals that the generation and propagation of microcracks caused by scratching, together with a crack closure effect, may lead to regular scratch traces. Experiments show that the elongation at break of the TSPI composites is the main factor that determines the special mechanical properties. The desired candidate materials should have proper hardness and toughness, with a mean elongation at break and tensile strength in the range of 9.2–10.4% and 100–107 MPa, respectively. Possible reasons for the effect of the chosen modifiers on TSPI polymers, the reaction mechanisms of the modified TSPI resin and the IPN structure in the TSPI composite polymers are discussed based on IR and TG analysis.

  3. Introduction to uncertainty quantification

    CERN Document Server

    Sullivan, T J

    2015-01-01

    Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable as a self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical back...

  4. Rapid quantification of DNA libraries for next-generation sequencing.

    Science.gov (United States)

    Buehler, Bernd; Hogrefe, Holly H; Scott, Graham; Ravi, Harini; Pabón-Peña, Carlos; O'Brien, Scott; Formosa, Rachel; Happe, Scott

    2010-04-01

    Next-generation DNA sequencing workflows require accurate quantification of the DNA molecules to be sequenced, which ensures optimal performance of the instrument. Here, we demonstrate the use of qPCR for quantification of DNA libraries used in next-generation sequencing. In addition, we find that qPCR quantification may allow improvements to current NGS workflows, including reducing the amount of library DNA required, increasing the accuracy in quantifying amplifiable DNA, and avoiding amplification bias by reducing or eliminating the need to amplify DNA before sequencing. Copyright 2010. Published by Elsevier Inc.
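
    As a worked illustration of how qPCR readouts are converted into library concentrations (a generic standard-curve calculation, not the paper's protocol; all Cq values and standards below are invented), one fits the quantification cycles of a dilution series and interpolates the unknown:

        import numpy as np

        # Standard dilution series: known concentrations (pM) and measured Cq values (invented)
        conc_pM = np.array([10.0, 1.0, 0.1, 0.01])
        cq_std = np.array([12.1, 15.5, 18.8, 22.2])

        slope, intercept = np.polyfit(np.log10(conc_pM), cq_std, 1)
        efficiency = 10 ** (-1.0 / slope) - 1.0     # ideal doubling: slope ~ -3.32, efficiency ~ 100%

        cq_unknown = 16.9                           # measured Cq of the library (invented)
        conc_unknown = 10 ** ((cq_unknown - intercept) / slope)

        print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")
        print(f"estimated library concentration: {conc_unknown:.3f} pM")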

  5. The ABORTED MICROSPORES Regulatory Network Is Required for Postmeiotic Male Reproductive Development in Arabidopsis thaliana

    Science.gov (United States)

    Xu, Jie; Yang, Caiyun; Yuan, Zheng; Zhang, Dasheng; Gondwe, Martha Y.; Ding, Zhiwen; Liang, Wanqi; Zhang, Dabing; Wilson, Zoe A.

    2010-01-01

    The Arabidopsis thaliana ABORTED MICROSPORES (AMS) gene encodes a basic helix-loop-helix (bHLH) transcription factor that is required for tapetal cell development and postmeiotic microspore formation. However, the regulatory role of AMS in anther and pollen development has not been fully defined. Here, we show by microarray analysis that the expression of 549 anther-expressed genes was altered in ams buds and that these genes are associated with tapetal function and pollen wall formation. We demonstrate that AMS has the ability to bind in vitro to DNA containing a 6-bp consensus motif, CANNTG. Moreover, 13 genes involved in transportation of lipids, oligopeptides, and ions, fatty acid synthesis and metabolism, flavonol accumulation, substrate oxidation, methyl-modification, and pectin dynamics were identified as direct targets of AMS by chromatin immunoprecipitation. The functional importance of the AMS regulatory pathway was further demonstrated by analysis of an insertional mutant of one of these downstream AMS targets, an ABC transporter, White-Brown Complex homolog, which fails to undergo pollen development and is male sterile. Yeast two-hybrid screens and pull-down assays revealed that AMS has the ability to interact with two bHLH proteins (AtbHLH089 and AtbHLH091) and the ATA20 protein. These results provide insight into the regulatory role of the AMS network during anther development. PMID:20118226
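
    The 6-bp consensus reported above is simple enough to scan for directly; the sketch below counts and locates CANNTG matches in a made-up promoter fragment (the sequence is illustrative, not an AMS target promoter).

        import re

        # Made-up promoter fragment; the consensus CANNTG means C, A, any base, any base, T, G.
        promoter = "TTGACCAGCTGAATTCCATTTGGCGCATATGACGTCACCTGA"
        motif = re.compile(r"CA[ACGT]{2}TG")

        hits = [(match.start(), match.group()) for match in motif.finditer(promoter)]
        print(hits)   # non-overlapping matches with their 0-based start positions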

  6. Quantification of micro stickies

    Science.gov (United States)

    Mahendra. Doshi; Jeffrey. Dyer; Salman. Aziz; Kristine. Jackson; Said M. Abubakr

    1997-01-01

    The objective of this project was to compare the different methods for the quantification of micro stickies. The hydrophobic materials investigated in this project for the collection of micro stickies were Microfoam* (polypropylene packing material), low density polyethylene film (LDPE), high density polyethylene (HDPE; a flat piece from a square plastic bottle), paper...

  7. Meeting the future metro network challenges and requirements by adopting programmable S-BVT with direct-detection and PDM functionality

    Science.gov (United States)

    Nadal, Laia; Svaluto Moreolo, Michela; Fàbrega, Josep M.; Vílchez, F. Javier

    2017-07-01

    In this paper, we propose an advanced programmable sliceable-bandwidth variable transceiver (S-BVT) with polarization division multiplexing (PDM) capability as a key enabler to fulfill the requirements for future 5G networks. Thanks to its cost-effective optoelectronic front-end based on orthogonal frequency division multiplexing (OFDM) technology and direct-detection (DD), the proposed S-BVT becomes suitable for next generation highly flexible and scalable metro networks. Polarization beam splitters (PBSs) and controllers (PCs), available on-demand, are included at the transceivers and at the network nodes, further enhancing the system flexibility and promoting an efficient use of the spectrum. 40G-100G PDM transmission has been experimentally demonstrated, within a 4-node photonic mesh network (ADRENALINE testbed), implementing a simplified equalization process.

  8. Dead wood in managed forests: how much and how much is enough?: development of a snag-quantification method by remote sensing & GIS and snag targets based on Three-toed woodpeckers' habitat requirements

    OpenAIRE

    Bütler Sauvain, Rita; Schlaepfer, Rodolphe

    2005-01-01

    The aims of this research were twofold: to develop an efficient method for the quantification of large spruce snags (standing dying and dead trees), and to establish snag target values for sustainable forest management. We answer the two basic questions: how much dead wood is currently available in managed forests? And how much dead wood is enough for biodiversity conservation? It is widely accepted that modern forest management has to be sustainable. One generally recognised criterion of sus...

  9. Fluorescent quantification of melanin

    OpenAIRE

    Fernandes, Bruno Pacheco; Matamá, Maria Teresa; Guimarães, Diana Isabel Pereira; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-01-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Theref...

  10. Genome-scale reconstruction of the Streptococcus pyogenes M49 metabolic network reveals growth requirements and indicates potential drug targets

    NARCIS (Netherlands)

    Levering, J.; Fiedler, T.; Sieg, A.; van Grinsven, K.W.A.; Hering, S.; Veith, N.; Olivier, B.G.; Klett, L.; Hugenholtz, J.; Teusink, B.; Kreikemeyer, B.; Kummer, U.

    2016-01-01

    Genome-scale metabolic models comprise stoichiometric relations between metabolites, as well as associations between genes and metabolic reactions and facilitate the analysis of metabolism. We computationally reconstructed the metabolic network of the lactic acid bacterium Streptococcus pyogenes

  11. High-Dose Aspirin Is Required to Influence Plasma Fibrin Network Structure in Patients With Type 1 Diabetes

    OpenAIRE

    Tehrani, Sara; Antovic, Aleksandra; Mobarrez, Fariborz; Mageed, Koteiba; Lins, Per-Eric; Adamson, Ulf; Wallén, Håkan N.; Jörneskog, Gun

    2012-01-01

    OBJECTIVE Patients with type 1 diabetes form a less permeable fibrin network, which could contribute to their increased risk of cardiovascular disease (CVD). Low-dose aspirin treatment is the standard in the management of CVD; however, the effect seems reduced in patients with diabetes. We investigated the effects of low- and high-dose aspirin treatment on fibrin network formation in patients with type 1 diabetes (primary aim) and the possible interaction between the treatment effects of aspi...

  12. A network of networks.

    Science.gov (United States)

    Iedema, Rick; Verma, Raj; Wutzke, Sonia; Lyons, Nigel; McCaughan, Brian

    2017-04-10

    Purpose: To further our insight into the role of networks in health system reform, this paper investigates how one agency, the NSW Agency for Clinical Innovation (ACI), and the multiple networks and enabling resources that it encompasses, govern, manage and extend the potential of networks for healthcare practice improvement. Design/methodology/approach: This is a case study investigation which took place over ten months through the first author's participation in network activities and discussions with the agency's staff about their main objectives, challenges and achievements, and with selected services around the state of New South Wales to understand the agency's implementation and large system transformation activities. Findings: The paper demonstrates that ACI accommodates multiple networks whose oversight structures, self-organisation and systems change approaches combine in dynamic ways, effectively yielding a diversity of network governances. Further, ACI bears out a paradox of "centralised decentralisation", co-locating agents of innovation with networks of implementation and evaluation expertise. This arrangement strengthens and legitimates the role of the strategic hybrid - the healthcare professional in pursuit of change and improvement - and enhances their influence and impact on the wider system. Research limitations/implications: While focussing the case study on one agency only, this study is unique as it highlights inter-network connections. Contributing to the literature on network governance, this paper identifies ACI as a "network of networks" through which resources, expectations and stakeholder dynamics are dynamically and flexibly mediated and enhanced. Practical implications: The co-location of and dynamic interaction among clinical networks may create synergies among networks, nurture "strategic hybrids", and enhance the impact of network activities on health system reform. Social implications: Network governance requires more

  13. Accessible quantification of multiparticle entanglement

    Science.gov (United States)

    Cianciaruso, Marco; Bromley, Thomas R.; Adesso, Gerardo

    2016-10-01

    Entanglement is a key ingredient for quantum technologies and a fundamental signature of quantumness in a broad range of phenomena encompassing many-body physics, thermodynamics, cosmology and life sciences. For arbitrary multiparticle systems, entanglement quantification typically involves nontrivial optimisation problems, and it may require demanding tomographical techniques. Here, we develop an experimentally feasible approach to the evaluation of geometric measures of multiparticle entanglement. Our framework provides analytical results for particular classes of mixed states of N qubits, and computable lower bounds to global, partial, or genuine multiparticle entanglement of any general state. For global and partial entanglement, useful bounds are obtained with minimum effort, requiring local measurements in just three settings for any N. For genuine entanglement, a number of measurements scaling linearly with N are required. We demonstrate the power of our approach to estimate and quantify different types of multiparticle entanglement in a variety of N-qubit states useful for quantum information processing and recently engineered in laboratories with quantum optics and trapped ion setups.
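
    The paper above concerns geometric measures and measurement-efficient lower bounds; as a much simpler, state-based point of comparison (a different, standard bipartite measure, not the authors' approach), the sketch below computes the negativity of a two-qubit Werner state from the partial transpose.

        import numpy as np

        def partial_transpose(rho):
            """Partial transpose over the second qubit of a 4x4 two-qubit density matrix."""
            r = rho.reshape(2, 2, 2, 2)              # indices (a, b, a', b')
            return r.transpose(0, 3, 2, 1).reshape(4, 4)

        def negativity(rho):
            """Sum of the absolute values of the negative eigenvalues of the partial transpose."""
            eigenvalues = np.linalg.eigvalsh(partial_transpose(rho))
            return float(-eigenvalues[eigenvalues < 0].sum())

        # Werner state: p * |Phi+><Phi+| + (1 - p) * I/4, entangled for p > 1/3.
        phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)
        for p in (0.2, 0.5, 0.9):
            rho = p * np.outer(phi_plus, phi_plus) + (1.0 - p) * np.eye(4) / 4.0
            print(f"p = {p}: negativity = {negativity(rho):.3f}")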

  14. Telecommunication networks

    CERN Document Server

    Iannone, Eugenio

    2011-01-01

    Many argue that telecommunications network infrastructure is the most impressive and important technology ever developed. Analyzing the telecom market's constantly evolving trends, research directions, infrastructure, and vital needs, Telecommunication Networks responds with revolutionized engineering strategies to optimize network construction. Omnipresent in society, telecom networks integrate a wide range of technologies. These include quantum field theory for the study of optical amplifiers, software architectures for network control, abstract algebra required to design error correction co

  15. Quantification of myocardial perfusion by cardiovascular magnetic resonance.

    Science.gov (United States)

    Jerosch-Herold, Michael

    2010-10-08

    The potential of contrast-enhanced cardiovascular magnetic resonance (CMR) for a quantitative assessment of myocardial perfusion has been explored for more than a decade now, with encouraging results from comparisons with accepted "gold standards", such as microspheres used in the physiology laboratory. This has generated an increasing interest in the requirements and methodological approaches for the non-invasive quantification of myocardial blood flow by CMR. This review provides a synopsis of the current status of the field, and introduces the reader to the technical aspects of perfusion quantification by CMR. The field has reached a stage, where quantification of myocardial perfusion is no longer a claim exclusive to nuclear imaging techniques. CMR may in fact offer important advantages like the absence of ionizing radiation, high spatial resolution, and an unmatched versatility to combine the interrogation of the perfusion status with a comprehensive tissue characterization. Further progress will depend on successful dissemination of the techniques for perfusion quantification among the CMR community.

  16. Making big communities small: using network science to understand the ecological and behavioral requirements for community social capital.

    Science.gov (United States)

    Neal, Zachary

    2015-06-01

    The concept of social capital is becoming increasingly common in community psychology and elsewhere. However, the multiple conceptual and operational definitions of social capital challenge its utility as a theoretical tool. The goals of this paper are to clarify two forms of social capital (bridging and bonding), explicitly link them to the structural characteristics of small world networks, and explore the behavioral and ecological prerequisites of its formation. First, I use the tools of network science and specifically the concept of small-world networks to clarify what patterns of social relationships are likely to facilitate social capital formation. Second, I use an agent-based model to explore how different ecological characteristics (diversity and segregation) and behavioral tendencies (homophily and proximity) impact communities' potential for developing social capital. The results suggest diverse communities have the greatest potential to develop community social capital, and that segregation moderates the effects that the behavioral tendencies of homophily and proximity have on community social capital. The discussion highlights how these findings provide community-based researchers with both a deeper understanding of the contextual constraints with which they must contend, and a useful tool for targeting their efforts in communities with the greatest need or greatest potential.
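
    The bridging/bonding discussion above leans on small-world structure; a quick way to see the characteristic combination of high clustering and short path lengths is to compare a Watts-Strogatz graph with an equally dense random graph (illustrative parameters, unrelated to the paper's agent-based model).

        import networkx as nx

        n, k, p = 200, 8, 0.1
        ws = nx.watts_strogatz_graph(n, k, p, seed=0)                 # small-world graph
        er = nx.gnm_random_graph(n, ws.number_of_edges(), seed=0)     # random graph, same density

        for name, g in (("Watts-Strogatz", ws), ("Erdos-Renyi", er)):
            clustering = nx.average_clustering(g)
            path_length = (nx.average_shortest_path_length(g)
                           if nx.is_connected(g) else float("nan"))
            print(f"{name}: clustering = {clustering:.3f}, average path length = {path_length:.2f}")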

  17. Future directions in dialysis quantification.

    Science.gov (United States)

    Lindsay, R M; Sternby, J

    2001-01-01

    The influence of dialysis prescription on outcome is well established, and currently the amount of dialysis prescribed is based on small molecular weight toxin removal as represented by the clearance of urea. The "normalized dose of dialysis" (Kt/V(urea)) concept is well established. Most techniques for dialysis quantification require that blood samples be taken at the beginning and after the completion of dialysis. The postdialysis sample, however, gives cause for concern because of the "rebound phenomenon" due to nonuniform distribution of urea among body compartments. Blood samples give "indirect" measures of dialysis quantification. Thus direct urea concentration measurements in dialysate may be superior in urea kinetic modeling and these may be made "real time" during dialysis. It is with real-time monitoring that future advances in dialysis quantification will take place. These will be of two types. The first will analyze blood water or dialysate samples for urea content multiple times throughout the treatment; the second will assess the on-line clearance of urea using surrogate molecules such as sodium chloride, the clearance being determined by conductivity measurements. On-line urea monitoring is based on the action of urease on urea in a water solution and measurement of the resultant ammonium ions, which are measured directly by a specific electrode or indirectly by conductivity changes. Differences in blood-side versus dialysate-side urea monitors exist which reflect the parameters they can provide, but with both, the standard urea kinetic measurements of Kt/V and nPCR (nPNA) are easily obtainable. A range of additional parameters can be derived from dialysate-side monitoring such as "whole-body Kt/V," "pretreatment urea mass" and "whole-body urea clearance," which are worthy of future studies to determine their roles in adequacy assessment. Conductivity clearance measurements are made by examining the conductivity differences between dialysate inlet
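
    For context, the single-pool Kt/V that blood-side sampling aims to estimate is commonly computed from pre- and post-dialysis urea with the second-generation Daugirdas formula; the sketch below applies that generic textbook formula to made-up session values and is unrelated to the on-line monitoring approaches discussed above.

        import math

        def single_pool_ktv(urea_pre, urea_post, hours, ultrafiltration_l, weight_kg):
            """Second-generation Daugirdas estimate of single-pool Kt/V."""
            r = urea_post / urea_pre
            return -math.log(r - 0.008 * hours) + (4.0 - 3.5 * r) * ultrafiltration_l / weight_kg

        # Made-up session: urea 28 -> 9 mmol/L over 4 h, 2.5 L ultrafiltration, 72 kg post-weight
        print(f"spKt/V = {single_pool_ktv(28.0, 9.0, 4.0, 2.5, 72.0):.2f}")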

  18. ASSESSMENT OF REQUIREMENT OF THE POPULATION IN THE ORGAN TRANSPLANTATION, THE DONOR RESOURCE AND PLANNING OF THE EFFECTIVE NETWORK OF THE MEDICAL ORGANIZATIONS (THE CENTERS OF TRANSPLANTATION

    Directory of Open Access Journals (Sweden)

    S. V. Gautier

    2013-01-01

    Full Text Available Aim. To estimate the requirement of the population of the Russian Federation for organ transplantation and the donor resource, and to propose an approach to planning an effective network of medical organizations (transplantation centers). Materials and methods. Statistical data on the population, the number of patients receiving dialysis, and medical care involving organ transplantation in Russia and in foreign countries were analysed and compared. Results. On this basis, the requirement of the population of the Russian Federation for organ transplantation and the donor resource was assessed, and an approach to planning an effective network of medical organizations (transplantation centers) and scenarios for the development of organ donation and transplantation in Russia are proposed. Conclusion. To provide the population of the Russian Federation with medical care involving organ transplantation according to the real requirement and donor resource, deceased organ donation and cadaveric kidney transplantation have to be organized in each region of the Russian Federation, while transplantation of extrarenal organs is better developed in the federal centers of high-tech medical care, with donor provision from the territories of adjacent regions.

  19. Validated method for phytohormone quantification in plants

    Directory of Open Access Journals (Sweden)

    Marilia eAlmeida-Trapp

    2014-08-01

    Full Text Available Phytohormones have long been known as important components of signalling cascades in plant development and plant responses to various abiotic and biotic challenges. Quantification of phytohormone levels in plants is typically carried out using GC- or LC-MS/MS systems, owing to their high sensitivity and specificity and the fact that little sample preparation is needed. However, mass spectrometer-based analyses are often affected by the particular sample type (different matrices), the extraction procedure, and the experimental setup, i.e. the chromatographic separation system and/or the mass spectrometer analyser (triple-quadrupole, ion trap, TOF, Orbitrap). For these reasons, a validated method is required in order to enable comparison of data that are generated in different laboratories, under different experimental set-ups, and in different matrices. So far, many phytohormone quantification studies have been done using either QTRAP or triple-quadrupole mass spectrometers, and none of them was performed under the regime of a fully validated method. Therefore, we developed and established such a validated method for the quantification of stress-related phytohormones such as jasmonates, abscisic acid, salicylic acid and IAA in the model plant Arabidopsis thaliana and the fruit crop Citrus sinensis, using an ion trap mass spectrometer. All parameters recommended by the FDA (US Food and Drug Administration) or EMEA (European Medicines Evaluation Agency) for the validation of analytical methods were evaluated: sensitivity, selectivity, repeatability and reproducibility (accuracy and precision).

  20. Quantification of Permafrost Creep by Remote Sensing

    Science.gov (United States)

    Roer, I.; Kaeaeb, A.

    2008-12-01

    Rockglaciers and frozen talus slopes are distinct landforms representing the occurrence of permafrost conditions in high mountain environments. The interpretation of ongoing permafrost creep and its reaction times is still limited due to the complex setting of interrelated processes within the system. Therefore, detailed monitoring of rockglaciers and frozen talus slopes seems advisable to better understand the system as well as to assess possible consequences like rockfall hazards or debris-flow starting zones. In this context, remote sensing techniques are increasingly important. High-accuracy techniques and data with high spatial and temporal resolution are required for the quantification of rockglacier movement. Digital Terrain Models (DTMs) derived from optical stereo, synthetic aperture radar (SAR) or laser scanning data are the most important data sets for the quantification of permafrost-related mass movements. Correlation image analysis of multitemporal orthophotos allows for the quantification of horizontal displacements, while vertical changes in landform geometry are computed by DTM comparisons. In the European Alps the movement of rockglaciers has been monitored over a period of several decades by the combined application of remote sensing and geodetic methods. The resulting kinematics (horizontal and vertical displacements) as well as spatio-temporal variations thereof are considered in terms of rheology. Distinct changes in process rates or landform failures - probably related to permafrost degradation - are analysed in combination with data on surface and subsurface temperatures and internal structures (e.g., ice content, unfrozen water content).

  1. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations which involve a large number of variables. In the numerical section we compare five methods, including quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature and a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation of the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
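
    A toy version of the uncertainty propagation discussed here (not the TAU-based workflow of the paper) is sketched below: the angle of attack and Mach number are treated as random inputs and pushed through a cheap stand-in surrogate for the lift coefficient using plain Monte Carlo sampling. The surrogate function and the input distributions are assumptions for illustration only.

        # Minimal sketch of Monte Carlo uncertainty propagation for an aerodynamic
        # quantity of interest. "lift_coefficient" is a made-up stand-in for an
        # expensive CFD solve (e.g., one RANS run).
        import numpy as np

        rng = np.random.default_rng(0)

        def lift_coefficient(alpha_deg, mach):
            # Hypothetical cheap surrogate, NOT a physical model.
            return 0.1 + 0.11 * alpha_deg - 0.5 * (mach - 0.73) ** 2

        n = 10_000
        alpha = rng.normal(2.0, 0.1, n)      # uncertain angle of attack [deg]
        mach = rng.normal(0.73, 0.005, n)    # uncertain Mach number

        cl = lift_coefficient(alpha, mach)
        print(f"mean C_L = {cl.mean():.4f}, std = {cl.std():.4f}")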

  2. Wrappers, Aspects, Quantification and Events

    Science.gov (United States)

    Filman, Robert E.

    2005-01-01

    Talk overview: Object Infrastructure Framework (OIF), a system developed to simplify building distributed applications by allowing independent implementation of multiple concerns. Essence and state of AOP. Trinity. Quantification over events. Current work on a generalized AOP technology.

  3. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that verb aspect pairs are different lexical units with different (although related) meanings, different argument structures (reflecting the categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes that derive perfective verbs can in some cases also be interpreted as lexical quantifiers. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of lexical quantification by means of verbal prefixes is the quantified verb phrase and that this scope remains constant in all derived alternations. The paper addresses the basic issues of these complex problems, while a detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  4. Dynamic afferent synapses to decision-making networks improve performance in tasks requiring stimulus associations and discriminations

    Science.gov (United States)

    Bourjaily, Mark A.

    2012-01-01

    Animals must often make opposing responses to similar complex stimuli. Multiple sensory inputs from such stimuli combine to produce stimulus-specific patterns of neural activity. It is the differences between these activity patterns, even when small, that provide the basis for any differences in behavioral response. In the present study, we investigate three tasks with differing degrees of overlap in the inputs, each with just two response possibilities. We simulate behavioral output via winner-takes-all activity in one of two pools of neurons forming a biologically based decision-making layer. The decision-making layer receives inputs either in a direct stimulus-dependent manner or via an intervening recurrent network of neurons that form the associative layer, whose activity helps distinguish the stimuli of each task. We show that synaptic facilitation of synapses to the decision-making layer improves performance in these tasks, robustly increasing accuracy and speed of responses across multiple configurations of network inputs. Conversely, we find that synaptic depression worsens performance. In a linearly nonseparable task with exclusive-or logic, the benefit of synaptic facilitation lies in its superlinear transmission: effective synaptic strength increases with presynaptic firing rate, which enhances the already present superlinearity of presynaptic firing rate as a function of stimulus-dependent input. In linearly separable single-stimulus discrimination tasks, we find that facilitating synapses are always beneficial because synaptic facilitation always enhances any differences between inputs. Thus we predict that for optimal decision-making accuracy and speed, synapses from sensory or associative areas to decision-making or premotor areas should be facilitating. PMID:22457467
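
    The abstract's key mechanism, that facilitating synapses transmit superlinearly because effective strength grows with presynaptic rate, can be illustrated with a standard short-term plasticity model. The sketch below uses a Tsodyks-Markram-style facilitation variable driven by regular spike trains; the parameters are illustrative and this is not the network model of the paper.

        # Sketch: facilitation makes total synaptic drive grow superlinearly with
        # presynaptic firing rate (facilitation only, no depression; parameters
        # are illustrative).
        import numpy as np

        U, tau_f = 0.1, 0.5   # baseline release increment and facilitation time constant [s]

        def mean_drive(rate_hz, duration=5.0):
            if rate_hz == 0:
                return 0.0
            isi = 1.0 / rate_hz
            u, total = U, 0.0
            for _ in range(int(duration * rate_hz)):
                total += u                                 # per-spike efficacy accumulates
                u += U * (1.0 - u)                         # facilitation jump at the spike
                u = U + (u - U) * np.exp(-isi / tau_f)     # decay back toward baseline
            return total / duration                        # drive per second

        for r in [5, 10, 20, 40]:
            print(f"{r:>3} Hz -> drive {mean_drive(r):7.2f} (drive/rate = {mean_drive(r) / r:.3f})")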

  5. Genetic Networks Required to Coordinate Chromosome Replication by DNA Polymerases α, δ, and ε in Saccharomyces cerevisiae.

    Science.gov (United States)

    Dubarry, Marion; Lawless, Conor; Banks, A Peter; Cockell, Simon; Lydall, David

    2015-08-21

    Three major DNA polymerases replicate the linear eukaryotic chromosomes. DNA polymerase α-primase (Pol α) and DNA polymerase δ (Pol δ) replicate the lagging-strand and Pol α and DNA polymerase ε (Pol ε) the leading-strand. To identify factors affecting coordination of DNA replication, we have performed genome-wide quantitative fitness analyses of budding yeast cells containing defective polymerases. We combined temperature-sensitive mutations affecting the three replicative polymerases, Pol α, Pol δ, and Pol ε with genome-wide collections of null and reduced function mutations. We identify large numbers of genetic interactions that inform about the roles that specific genes play to help Pol α, Pol δ, and Pol ε function. Surprisingly, the overlap between the genetic networks affecting the three DNA polymerases does not represent the majority of the genetic interactions identified. Instead our data support a model for division of labor between the different DNA polymerases during DNA replication. For example, our genetic interaction data are consistent with biochemical data showing that Pol ε is more important to the Pre-Loading complex than either Pol α or Pol δ. We also observed distinct patterns of genetic interactions between leading- and lagging-strand DNA polymerases, with particular genes being important for coupling proliferating cell nuclear antigen loading/unloading (Ctf18, Elg1) with nucleosome assembly (chromatin assembly factor 1, histone regulatory HIR complex). Overall our data reveal specialized genetic networks that affect different aspects of leading- and lagging-strand DNA replication. To help others to engage with these data we have generated two novel, interactive visualization tools, DIXY and Profilyzer. Copyright © 2015 Dubarry et al.

  6. Training load quantification in triathlon

    OpenAIRE

    Cejuela Anta, Roberto; Esteve-Lanao, Jonathan

    2011-01-01

    There are different indices of training stress of varying complexity for quantifying training load. Examples include the training impulse (TRIMP), the session RPE, Lucia's TRIMP or the Summated Zone Score. But triathlon, a combined sport in which there are interactions between the different segments, complicates the quantification of training. The aim of this paper is to review current methods of quantification and to propose a scale to quantify the training load in triathl...
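
    As a concrete example of one of the indices listed here, the sketch below computes Banister's training impulse (TRIMP) for a single session from duration and heart-rate data; the exponential weighting constants are the values commonly cited in the training-load literature for men and women, not figures from this article.

        # Sketch: Banister TRIMP for one session (constants 0.64/1.92 for men and
        # 0.86/1.67 for women are the commonly cited literature values).
        import math

        def trimp(duration_min, hr_avg, hr_rest, hr_max, sex="male"):
            ratio = (hr_avg - hr_rest) / (hr_max - hr_rest)     # fractional heart-rate reserve
            a, b = (0.64, 1.92) if sex == "male" else (0.86, 1.67)
            return duration_min * ratio * a * math.exp(b * ratio)

        # 60 min run at an average of 155 bpm (resting 50 bpm, maximum 190 bpm)
        print(round(trimp(60, 155, 50, 190), 1))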

  7. Boron-bridged RG-II and calcium are required to maintain the pectin network of the Arabidopsis seed mucilage ultrastructure.

    Science.gov (United States)

    Shi, Da-Chuan; Wang, Juan; Hu, Rui-Bo; Zhou, Gong-Ke; O'Neill, Malcolm A; Kong, Ying-Zhen

    2017-06-01

    The structure of a pectin network requires both calcium (Ca(2+)) and boron (B). Ca(2+) is involved in crosslinking pectic polysaccharides and arbitrarily induces the formation of an "egg-box" structure among pectin molecules, while B crosslinks rhamnogalacturonan II (RG-II) side chain A apiosyl residues in primary cell walls to generate a borate-dimeric-rhamnogalacturonan II (dRG-II-B) complex through a boron-bridge bond, leading to the formation of a pectin network. Based on recent studies of dRG-II-B structures, a hypothesis has been proposed suggesting that Ca(2+) is a common component of the dRG-II-B complex. However, no in vivo evidence has addressed whether B affects the stability of Ca(2+) crosslinks. Here, we investigated the L-fucose-deficient dwarf mutant mur1, which was previously shown to require exogenous B treatment for phenotypic reversion. Imbibed Arabidopsis thaliana seeds release hydrated polysaccharides to form a halo of seed mucilage covering the seed surface, which consists of a water-soluble outer layer and an adherent inner layer. Our study of mur1 seed mucilage has revealed that the pectin in the outer layer of mucilage was relocated to the inner layer. Nevertheless, the mur1 inner mucilage was more vulnerable to rough shaking or ethylene diamine tetraacetic acid (EDTA) extraction than that of the wild type. Immunolabeling analysis suggested that dRG-II-B was severely decreased in mur1 inner mucilage. Moreover, non-methylesterified homogalacturonan (HG) exhibited obvious reassembly in the mur1 inner layer compared with the wild type, which may imply a possible connection between dRG-II-B deficiency and pectin network transformation in the seed mucilage. As expected, the concentration of B in the mur1 inner mucilage was reduced, whereas the distribution and concentration of Ca(2+) in the inner mucilage increased significantly, which could be the reason why pectin relocates from the outer mucilage to the inner mucilage. Consequently, the

  8. Quantification of competitive value of documents

    Directory of Open Access Journals (Sweden)

    Pavel Šimek

    2009-01-01

    Full Text Available The majority of Internet users use the global network to search for information using full-text search engines such as Google, Yahoo!, or Seznam. Web presentation operators try, with the help of different optimization techniques, to reach the top places in the results of full-text search engines. Herein lies the great importance of Search Engine Optimization and Search Engine Marketing, because normal users usually try only the links on the first few pages of full-text search engine results for certain keywords, and in catalogs they primarily use links placed higher in the hierarchy of each category. The key to success is the application of optimization methods that deal with keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backward links. The process is demanding, long-lasting and without a guaranteed outcome. A website operator without advanced analytical tools cannot identify the contribution of the individual documents of which the entire web site consists. If web presentation operators want an overview of their documents and of the web site globally, it is appropriate to quantify these positions in a specific way, depending on specific keywords. This is the purpose of the quantification of the competitive value of documents, which in turn determines the global competitive value of a web site. Quantification of competitive values is performed on a specific full-text search engine, and each full-text search engine can, and often does, yield different results. According to published reports by the ClickZ agency and Market Share, Google is the most widely used search engine among English-speaking users, with a market share of more than 80%. The overall procedure for quantifying competitive values is common to all engines; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.

  9. Quantification of land-use dynamics: an illustration from Costa Rica.

    NARCIS (Netherlands)

    Stoorvogel, J.J.; Fresco, L.O.

    1996-01-01

    In many cases, studies dealing with land degradation require the quantification of land-use dynamics. Although research has been carried out to describe land-use dynamics and its driving forces, very little has been done on the recognition of indicators for the quantification of land-use dynamics.

  10. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated....... GPD provides a better estimate of the entire IRF. As the signal-to-noise ratio (SNR) increases or the time resolution of the measurements increases, GPD is shown to be superior to SVD. This is also found for large distribution volumes....
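
    For readers unfamiliar with the deconvolution step, the sketch below shows the truncated-SVD baseline against which smoother estimators such as GPD are usually compared: the tissue curve is modelled as the convolution of the arterial input function with the residue function, and the IRF is recovered by regularised inversion of the convolution matrix. All curves, noise levels and the truncation threshold are synthetic and illustrative.

        # Sketch: truncated-SVD deconvolution of a synthetic DSC-MRI tissue curve,
        # the standard baseline for smoother estimators such as GPD.
        import numpy as np

        dt = 1.0                                   # sampling interval [s]
        t = np.arange(0, 60, dt)

        aif = (t / 6.0) ** 3 * np.exp(-t / 1.5)    # toy arterial input function
        irf_true = np.exp(-t / 4.0)                # toy residue function (IRF)

        # Lower-triangular convolution matrix A so that tissue = A @ irf
        A = dt * np.array([[aif[i - j] if i >= j else 0.0
                            for j in range(len(t))] for i in range(len(t))])
        tissue = A @ irf_true + np.random.default_rng(0).normal(0, 0.01, len(t))

        # Truncated SVD inversion: drop singular values below 20% of the largest
        U, s, Vt = np.linalg.svd(A)
        s_inv = np.where(s > 0.2 * s[0], 1.0 / s, 0.0)
        irf_est = Vt.T @ (s_inv * (U.T @ tissue))

        print("peak of estimated IRF:", round(irf_est.max(), 3))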

  11. Network Frontier Workshop 2013

    Science.gov (United States)

    2014-11-11

    ... networks, biological networks, cognitive and semantic networks and social networks. This field has received a major boost caused by the availability of huge ... networks, which require new ways of thinking about the world. Part of the new cognition is provided by the fractional calculus description of temporal ... structures in a wide range of examples, including road networks in large urban areas, a rabbit warren, a dolphin social network, a European interbank network ...

  12. Is Your Biobank Up to Standards? A Review of the National Canadian Tissue Repository Network Required Operational Practice Standards and the Controlled Documents of a Certified Biobank.

    Science.gov (United States)

    Hartman, Victoria; Castillo-Pelayo, Tania; Babinszky, Sindy; Dee, Simon; Leblanc, Jodi; Matzke, Lise; O'Donoghue, Sheila; Carpenter, Jane; Carter, Candace; Rush, Amanda; Byrne, Jennifer; Barnes, Rebecca; Mes-Masson, Anne-Marie; Watson, Peter

    2017-11-17

    Ongoing quality management is an essential part of biobank operations and the creation of high quality biospecimen resources. Adhering to the standards of a national biobanking network is a way to reduce variability between individual biobank processes, resulting in cross biobank compatibility and more consistent support for health researchers. The Canadian Tissue Repository Network (CTRNet) implemented a set of required operational practices (ROPs) in 2011 and these serve as the standards and basis for the CTRNet biobank certification program. A review of these 13 ROPs covering 314 directives was conducted after 5 years to identify areas for revision and update, leading to changes to 7/314 directives (2.3%). A review of all internal controlled documents (including policies, standard operating procedures and guides, and forms for actions and processes) used by the BC Cancer Agency's Tumor Tissue Repository (BCCA-TTR) to conform to these ROPs was then conducted. Changes were made to 20/106 (19%) of BCCA-TTR documents. We conclude that a substantial fraction of internal controlled documents require updates at regular intervals to accommodate changes in best practices. Reviewing documentation is an essential aspect of keeping up to date with best practices and ensuring the quality of biospecimens and data managed by biobanks.

  13. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
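
    As a small concrete example of one of the screening methods named above, the sketch below implements a bare-bones Morris-style elementary-effects calculation for a cheap stand-in model; the model function and parameter ranges are invented for illustration and are not drawn from the cited studies.

        # Sketch: Morris-style elementary-effects screening for a toy model.
        # The model and the parameter ranges are illustrative stand-ins only.
        import numpy as np

        rng = np.random.default_rng(42)

        def model(x):
            # Hypothetical 3-parameter response (e.g., a contaminant concentration).
            k_decay, recharge, porosity = x
            return np.exp(-k_decay) * recharge / porosity

        lo = np.array([0.1, 10.0, 0.2])
        hi = np.array([1.0, 50.0, 0.5])
        delta = 0.1                               # step in normalized [0, 1] space

        effects = [[] for _ in range(3)]
        for _ in range(50):                       # 50 random one-at-a-time base points
            z = rng.uniform(0, 1 - delta, 3)      # normalized base point
            base = model(lo + z * (hi - lo))
            for i in range(3):
                zp = z.copy()
                zp[i] += delta
                effects[i].append((model(lo + zp * (hi - lo)) - base) / delta)

        for name, ee in zip(["k_decay", "recharge", "porosity"], effects):
            ee = np.asarray(ee)
            print(f"{name:9s} mu* = {np.mean(np.abs(ee)):8.3f}  sigma = {ee.std():8.3f}")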

  14. GMO quantification: valuable experience and insights for the future.

    Science.gov (United States)

    Milavec, Mojca; Dobnik, David; Yang, Litao; Zhang, Dabing; Gruden, Kristina; Zel, Jana

    2014-10-01

    Cultivation and marketing of genetically modified organisms (GMOs) have been unevenly adopted worldwide. To facilitate international trade and to provide information to consumers, labelling requirements have been set up in many countries. Quantitative real-time polymerase chain reaction (qPCR) is currently the method of choice for detection, identification and quantification of GMOs. This has been critically assessed and the requirements for the method performance have been set. Nevertheless, there are challenges that should still be highlighted, such as measuring the quantity and quality of DNA, and determining the qPCR efficiency, possible sequence mismatches, characteristics of taxon-specific genes and appropriate units of measurement, as these remain potential sources of measurement uncertainty. To overcome these problems and to cope with the continuous increase in the number and variety of GMOs, new approaches are needed. Statistical strategies of quantification have already been proposed and expanded with the development of digital PCR. The first attempts have been made to use new generation sequencing also for quantitative purposes, although accurate quantification of the contents of GMOs using this technology is still a challenge for the future, and especially for mixed samples. New approaches are needed also for the quantification of stacks, and for potential quantification of organisms produced by new plant breeding techniques.
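
    To make the measurement-unit issue concrete, the sketch below shows a common relative-quantification calculation in which GMO content is expressed as the ratio of event-specific to taxon-specific copy numbers, each read off its own qPCR standard curve; the slopes, intercepts and Cq values are invented and no particular GMO event or reference gene is implied.

        # Sketch: GMO content as the ratio of transgene to taxon-specific copy
        # numbers, each from its own standard curve Cq = slope*log10(copies) + b.
        # Slopes, intercepts and Cq values are illustrative only.
        def copies_from_cq(cq, slope, intercept):
            """Invert a standard curve Cq = slope * log10(copies) + intercept."""
            return 10 ** ((cq - intercept) / slope)

        transgene = copies_from_cq(cq=27.8, slope=-3.32, intercept=40.0)   # event-specific assay
        reference = copies_from_cq(cq=24.1, slope=-3.35, intercept=39.5)   # taxon-specific assay

        gmo_percent = 100.0 * transgene / reference
        print(f"transgene copies ~ {transgene:.0f}, reference copies ~ {reference:.0f}, "
              f"GMO content ~ {gmo_percent:.2f} %")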

  15. Networked Microgrids Scoping Study

    Energy Technology Data Exchange (ETDEWEB)

    Starke, Michael R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-01

    While the utilization of a microgrid for local power reliability during grid outage and emergencies is a well-known benefit, the integration of microgrids with the broader electrical distribution system will allow for seamless interaction with distribution system operations, contributing to resource and economic optimization, enhanced reliability and resiliency, and improved power quality. By virtue of integration with the distribution system, multiple microgrids should be networked and collectively known as networked microgrids. As a follow-up to the work conducted by Oak Ridge National Laboratory on a microgrid controller [the Complete System-level Efficient and Interoperable Solution for Microgrid Integrated Controls (CSEISMIC)], the main goal of this work is to identify the next steps for bringing microgrid research to the utility industry, particularly as a resource for enhancing efficiency, reliability, and resilience. Various R&D needs for the integration of microgrids into the distribution system have been proposed, including interconnection types, communications, control architectures, quantification of benefits, functional requirements, and various operational issues.

  16. Jasmonoyl-L-isoleucine coordinates metabolic networks required for anthesis and floral attractant emission in wild tobacco (Nicotiana attenuata).

    Science.gov (United States)

    Stitz, Michael; Hartl, Markus; Baldwin, Ian T; Gaquerel, Emmanuel

    2014-10-01

    Jasmonic acid and its derivatives (jasmonates [JAs]) play central roles in floral development and maturation. The binding of jasmonoyl-L-isoleucine (JA-Ile) to the F-box of CORONATINE INSENSITIVE1 (COI1) is required for many JA-dependent physiological responses, but its role in anthesis and pollinator attraction traits remains largely unexplored. Here, we used the wild tobacco Nicotiana attenuata, which develops sympetalous flowers with complex pollination biology, to examine the coordinating function of JA homeostasis in the distinct metabolic processes that underlie flower maturation, opening, and advertisement to pollinators. From combined transcriptomic, targeted metabolic, and allometric analyses of transgenic N. attenuata plants for which signaling deficiencies were complemented with methyl jasmonate, JA-Ile, and its functional homolog, coronatine (COR), we demonstrate that (1) JA-Ile/COR-based signaling regulates corolla limb opening and a JA-negative feedback loop; (2) production of floral volatiles (night emissions of benzylacetone) and nectar requires JA-Ile/COR perception through COI1; and (3) limb expansion involves JA-Ile-induced changes in limb fresh mass and carbohydrate metabolism. These findings demonstrate a master regulatory function of the JA-Ile/COI1 duet for the main function of a sympetalous corolla, that of advertising for and rewarding pollinator services. Flower opening, by contrast, requires JA-Ile signaling-dependent changes in primary metabolism, which are not compromised in the COI1-silenced RNA interference line used in this study. © 2014 American Society of Plant Biologists. All rights reserved.

  17. Accessibility in complex networks

    Science.gov (United States)

    Travençolo, B. A. N.; da F. Costa, L.

    2008-12-01

    This Letter describes a method for the quantification of the diversity of non-linear dynamics in complex networks as a consequence of self-avoiding random walks. The methodology is analyzed in the context of theoretical models and illustrated with respect to the characterization of the accessibility in urban streets.
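
    A simplified version of the accessibility measure described here can be computed from walk transition probabilities: for each node, take the probabilities of being at every other node after h steps and convert their entropy into an effective number of reachable nodes. The sketch below uses ordinary (not self-avoiding) random walks on a stock example graph, which is a deliberate simplification of the Letter's method.

        # Sketch: accessibility of each node as exp(entropy) of the h-step
        # random-walk probabilities. Simplification: ordinary random walks are
        # used instead of the self-avoiding walks of the Letter.
        import numpy as np
        import networkx as nx

        G = nx.karate_club_graph()                 # toy stand-in for a street network
        A = nx.to_numpy_array(G)
        P = A / A.sum(axis=1, keepdims=True)       # one-step transition matrix

        h = 3
        Ph = np.linalg.matrix_power(P, h)          # h-step transition probabilities

        with np.errstate(divide="ignore", invalid="ignore"):
            ent = -np.nansum(np.where(Ph > 0, Ph * np.log(Ph), 0.0), axis=1)
        accessibility = np.exp(ent)                # effective number of reachable nodes

        for node in sorted(G.nodes, key=lambda n: -accessibility[n])[:5]:
            print(f"node {node:2d}  accessibility = {accessibility[node]:.1f}")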

  18. Actin remodeling by ADF/cofilin is required for cargo sorting at the trans-Golgi network.

    Science.gov (United States)

    von Blume, Julia; Duran, Juan M; Forlanelli, Elena; Alleaume, Anne-Marie; Egorov, Mikhail; Polishchuk, Roman; Molina, Henrik; Malhotra, Vivek

    2009-12-28

    Knockdown of the actin-severing protein actin-depolymerizing factor (ADF)/cofilin inhibited export of an exogenously expressed soluble secretory protein from Golgi membranes in Drosophila melanogaster and mammalian tissue culture cells. A stable isotope labeling by amino acids in cell culture mass spectrometry-based protein profiling revealed that a large number of endogenous secretory proteins in mammalian cells were not secreted upon ADF/cofilin knockdown. Although many secretory proteins were retained, a Golgi-resident protein and a lysosomal hydrolase were aberrantly secreted upon ADF/cofilin knockdown. Overall, our findings indicate that inactivation of ADF/cofilin perturbed the sorting of a subset of both soluble and integral membrane proteins at the trans-Golgi network (TGN). We suggest that ADF/cofilin-dependent actin trimming generates a sorting domain at the TGN, which filters secretory cargo for export, and that uncontrolled growth of this domain causes missorting of proteins. This type of actin-dependent compartmentalization and filtering of secretory cargo at the TGN by ADF/cofilin could explain sorting of proteins that are destined to the cell surface.

  19. Quantification of Microbial Phenotypes

    Science.gov (United States)

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
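
    The Gibbs-energy adjustment mentioned here has a compact form: the reaction energy under physiological conditions is the standard transformed energy plus an RT ln Q correction for the measured metabolite concentrations. The sketch below applies it to a generic reaction with made-up concentrations and a made-up standard energy; it is not the review's worked example.

        # Sketch: Gibbs energy under physiological conditions,
        # dG' = dG'0 + R*T*ln(Q). Standard energy and concentrations are illustrative.
        import math

        R = 8.314e-3      # kJ mol^-1 K^-1
        T = 310.15        # 37 degC in kelvin

        def gibbs_physiological(dg0_prime, products_molar, substrates_molar):
            q = math.prod(products_molar) / math.prod(substrates_molar)
            return dg0_prime + R * T * math.log(q)

        # Hypothetical reaction A -> B with dG'0 = +5 kJ/mol, [B] = 0.01 mM, [A] = 2 mM
        dg = gibbs_physiological(5.0, products_molar=[1e-5], substrates_molar=[2e-3])
        print(f"dG' = {dg:.1f} kJ/mol -> {'feasible' if dg < 0 else 'infeasible'} in this direction")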

  20. Arrays of microLEDs and astrocytes: biological amplifiers to optogenetically modulate neuronal networks reducing light requirement.

    Directory of Open Access Journals (Sweden)

    Rolando Berlinguer-Palmini

    Full Text Available In the modern view of synaptic transmission, astrocytes are no longer confined to the role of merely supportive cells. Although they do not generate action potentials, they nonetheless exhibit electrical activity and can influence surrounding neurons through gliotransmitter release. In this work, we explored whether optogenetic activation of glial cells could act as an amplification mechanism for optical neural stimulation via gliotransmission to the neural network. We studied the modulation of gliotransmission by selective photo-activation of channelrhodopsin-2 (ChR2) by means of a matrix of individually addressable super-bright microLEDs (μLEDs) with an excitation peak at 470 nm. We combined Ca2+ imaging techniques and concurrent patch-clamp electrophysiology to record the ensuing glial/neural activity. First, we tested the μLEDs' efficacy in stimulating ChR2-transfected astrocytes. The ChR2-induced astrocytic current did not desensitize over time, and was linearly increased and prolonged by increasing μLED irradiance in terms of intensity and illuminated surface. Subsequently, ChR2 astrocytic stimulation by broad-field LED illumination with the same spectral profile increased the calcium transient frequency and sEPSCs of both glial cells and neurons, suggesting that a few ChR2-transfected astrocytes were able to excite surrounding non-ChR2-transfected astrocytes and neurons. Finally, by using the μLED array to selectively light-stimulate ChR2-positive astrocytes, we were able to increase the synaptic activity of single neurons surrounding them. In conclusion, ChR2-transfected astrocytes and the μLED system were shown to act as an amplifier of synaptic activity in a mixed culture of cortical neurons and glial cells.

  1. [Prospective qualification requirements in nursing care. Results and conclusions of the BMBF research network FreQueNz].

    Science.gov (United States)

    Schüler, G; Klaes, L; Rommel, A; Schröder, H; Köhler, T

    2013-08-01

    Demographic change, advances in medicine, and innovative health care services are leading to changes in the professional qualification requirements for nursing and care staff. Detecting future trends in relation to these requirements was the focus of a Delphi study developed as part of the BMBF FreQueNz initiative. After qualitative expert interviews, data collection was organized in three consecutive steps, with 243 interviews realized in the second wave. It was found that home care will further diversify in the fields of supporting and counseling services as well as in palliative care, resulting in the necessary expansion of specific qualifications (e.g., intensive care). Moreover, there will be an increased need for interprofessional, intersectoral, and intercultural coordination and communication skills. As a consequence of the delegation of medical tasks, new duties for nonmedical professions in inpatient and outpatient care will also arise. For instance, qualifications need to be tailored to the new demands of assessment, diagnostics, therapy, and patient education and they should take into account evidence-based knowledge as well as clinical practice guidelines. Consequently, the system of care professionals will further diversify through advanced training programs and the continued academization of nursing.

  2. Damage Localization and Quantification of Earthquake Excited RC-Frames

    DEFF Research Database (Denmark)

    Skjærbæk, P.S.; Nielsen, Søren R.K.; Kirkegaard, Poul Henning

    In the paper a recently proposed method for damage localization and quantification of RC-structures from response measurements is tested on experimental data. The method investigated requires at least one response measurement along the structure and the ground surface acceleration. Further, the t...

  3. Embedded XML DOM Parser: An Approach for XML Data Processing on Networked Embedded Systems with Real-Time Requirements

    Directory of Open Access Journals (Sweden)

    Cavia Soto, M. Angeles

    2008-01-01

    Full Text Available Abstract Trends in control and automation show an increase in data processing and communication in embedded automation controllers. The eXtensible Markup Language (XML is emerging as a dominant data syntax, fostering interoperability, yet little is still known about how to provide predictable real-time performance in XML processing, as required in the domain of industrial automation. This paper presents an XML processor that is designed with such real-time performance in mind. The publication attempts to disclose insight gained in applying techniques such as object pooling and reuse, and other methods targeted at avoiding dynamic memory allocation and its consequent memory fragmentation. Benchmarking tests are reported in order to illustrate the benefits of the approach.

  4. CCN2 is required for the TGF-β induced activation of Smad1-Erk1/2 signaling network.

    Directory of Open Access Journals (Sweden)

    Sashidhar S Nakerakanti

    Full Text Available Connective tissue growth factor (CCN2) is a multifunctional matricellular protein, which is frequently overexpressed during organ fibrosis. CCN2 is a mediator of the pro-fibrotic effects of TGF-β in cultured cells, but the specific function of CCN2 in the fibrotic process has not been elucidated. In this study we characterized the CCN2-dependent signaling pathways that are required for the TGF-β induced fibrogenic response. By depleting endogenous CCN2 we show that CCN2 is indispensable for the TGF-β-induced phosphorylation of Smad1 and Erk1/2, but it is unnecessary for the activation of Smad3. TGF-β stimulation triggered formation of the CCN2/β(3) integrin protein complexes and activation of Src signaling. Furthermore, we demonstrated that signaling through the α(v)β(3) integrin receptor and Src was required for the TGF-β induced Smad1 phosphorylation. Recombinant CCN2 activated Src and Erk1/2 signaling, and induced phosphorylation of Fli1, but was unable to stimulate Smad1 or Smad3 phosphorylation. Additional experiments were performed to investigate the role of CCN2 in collagen production. Consistent with the previous studies, blockade of CCN2 abrogated TGF-β-induced collagen mRNA and protein levels. Recombinant CCN2 potently stimulated collagen mRNA levels and upregulated activity of the COL1A2 promoter, however CCN2 was a weak inducer of collagen protein levels. CCN2 stimulation of collagen was dose-dependent, with the lower doses (<50 ng/ml) having a stimulatory effect and higher doses having an inhibitory effect on collagen gene expression. In conclusion, our study defines a novel CCN2/α(v)β(3) integrin/Src/Smad1 axis that contributes to the pro-fibrotic TGF-β signaling and suggests that blockade of this pathway may be beneficial for the treatment of fibrosis.

  5. Uncertainty Quantification for Safety Verification Applications in Nuclear Power Plants

    Science.gov (United States)

    Boafo, Emmanuel

    There is an increasing interest in computational reactor safety analysis to systematically replace the conservative calculations by best estimate calculations augmented by quantitative uncertainty analysis methods. This has been necessitated by recent regulatory requirements that have permitted the use of such methods in reactor safety analysis. Stochastic uncertainty quantification methods have shown great promise, as they are better suited to capture the complexities in real engineering problems. This study proposes a framework for performing uncertainty quantification based on the stochastic approach, which can be applied to enhance safety analysis. (Abstract shortened by ProQuest.).

  6. Iron overload in the liver diagnostic and quantification

    Energy Technology Data Exchange (ETDEWEB)

    Alustiza, Jose M. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)]. E-mail: jmalustiza@osatek.es; Castiella, Agustin [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Juan, Maria D. de [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Emparanza, Jose I. [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Artetxe, Jose [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain); Uranga, Maite [Osatek SA, P Dr. Beguiristain 109, 20014, San Sebastian, Guipuzcoa (Spain)

    2007-03-15

    Hereditary Hemochromatosis is the most frequent modality of iron overload. Since 1996 genetic tests have facilitated significantly the non-invasive diagnosis of the disease. There are however many cases of negative genetic tests that require confirmation by hepatic iron quantification which is traditionally performed by hepatic biopsy. There are many studies that have demonstrated the possibility of performing hepatic iron quantification with Magnetic Resonance. However, a consensus has not been reached yet regarding the technique or the possibility to reproduce the same method of calculus in different machines. This article reviews the state of the art of the question and delineates possible future lines to standardise this non-invasive method of hepatic iron quantification.

  7. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN)

    DEFF Research Database (Denmark)

    Kubiak, Christine; de Andres-Trelles, Fernando; Kuchinke, Wolfgang

    2009-01-01

    BACKGROUND: Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. METHODS: In order to summarise the current situation... ...with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification...

  8. AVQS: Attack Route-Based Vulnerability Quantification Scheme for Smart Grid

    Directory of Open Access Journals (Sweden)

    Jongbin Ko

    2014-01-01

    Full Text Available A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
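
    The route-based idea can be illustrated with a toy calculation: give each device and link on a candidate attack route a component score, combine them into a device term and a network-exposure term, and rank routes by the weighted sum. The routes, scores and weights below are invented for illustration and are not the AVQS formulas from the paper.

        # Toy illustration of route-based vulnerability scoring (NOT the AVQS
        # formulas): each hop carries a (device score, link score) pair.
        routes = {
            "meter -> concentrator -> headend": [(6.1, 3.0), (7.4, 4.2)],
            "meter -> neighborhood gw -> headend": [(5.0, 2.5), (6.0, 2.0)],
        }

        W_DEVICE, W_LINK = 0.6, 0.4   # hypothetical weights

        def route_score(hops):
            device = sum(d for d, _ in hops) / len(hops)   # average device vulnerability
            link = max(l for _, l in hops)                 # weakest-link network exposure
            return W_DEVICE * device + W_LINK * link

        for name, hops in sorted(routes.items(), key=lambda kv: -route_score(kv[1])):
            print(f"{route_score(hops):5.2f}  {name}")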

  9. AVQS: attack route-based vulnerability quantification scheme for smart grid.

    Science.gov (United States)

    Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik

    2014-01-01

    A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Based on the data, a smart grid system has a potential security threat in its network connectivity. To solve this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grid because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.

  10. Micro-RNA quantification using DNA polymerase and pyrophosphate quantification.

    Science.gov (United States)

    Yu, Hsiang-Ping; Hsiao, Yi-Ling; Pan, Hung-Yin; Huang, Chih-Hung; Hou, Shao-Yi

    2011-12-15

    A rapid quantification method for micro-RNA based on DNA polymerase activity and pyrophosphate quantification has been developed. The tested micro-RNA serves as the primer, unlike the DNA primer in all DNA sequencing methods, and a DNA probe serves as the template for DNA replication. After the DNA synthesis, pyrophosphate detection and quantification indicate the existence and quantity of the tested miRNA. Five femtomoles of the synthetic RNA could be detected. In 20-100 μg RNA samples purified from SiHa cells, measurements made with the proposed assay gave hsa-miR-16 and hsa-miR-21 levels of 0.34 fmol/μg RNA and 0.71 fmol/μg RNA, respectively. This simple and inexpensive assay takes less than 5 min after total RNA purification and preparation. The quantification is not affected by the pre-miRNA, which cannot serve as the primer for the DNA synthesis in this assay. The assay is general for the detection of a target RNA or DNA with a known matched DNA template probe, and could therefore be widely used for the detection of small RNA, messenger RNA, RNA viruses, and DNA in RNA and DNA assays. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. 561 SOURCE SPECIFIC QUANTIFICATION, CHARACTERISATION ...

    African Journals Online (AJOL)

    Osondu

    2013-09-02

    ... efficient and sustainable waste management. This study is the quantification, characterisation ... For efficient and sustainable solid waste management in Lapai it is recommended that Lapai Local Government Area Council ... a shop, a market stall, an eatery/restaurant, a hotel or any commercial enterprise.

  12. COMPARISON OF WIRELESS NETWORK OVER WIRED NETWORK AND ITS TYPE

    OpenAIRE

    Shikha Shukla; Meghana K M; Manjunath C R; SantoshNaik

    2017-01-01

    Wireless networks have become one of the major requirements in today's world. People expect wireless networking at home, in shopping malls, at universities, etc. Nowadays, we cannot imagine life without the network. This paper focuses on what the different types of networks are and why a wired network may be preferred over a wireless one. We further compare the wired network with the wireless network and also present the different types of wireless network. This paper provides basic knowledge about Wired, Wir...

  13. Friendship Networks

    OpenAIRE

    Jan K. Brueckner

    2004-01-01

    Building upon a long tradition in sociology, economists have recently turned their attention to the analysis of social networks. The present paper adds to this emerging literature by proposing a different approach to social-network formation. As in the model of Jackson and Wolinsky (1996), formation of a link between two individuals requires two-sided investments in the present framework. But in contrast to their approach, where the required investments are exogenously specified and link form...

  14. Direct Quantification of Cd2+ in the Presence of Cu2+ by a Combination of Anodic Stripping Voltammetry Using a Bi-Film-Modified Glassy Carbon Electrode and an Artificial Neural Network

    OpenAIRE

    Zhao, Guo; Wang, Hui; Liu, Gang

    2017-01-01

    In this study, a novel method based on a Bi/glassy carbon electrode (Bi/GCE) for quantitatively and directly detecting Cd2+ in the presence of Cu2+ without further electrode modifications by combining square-wave anodic stripping voltammetry (SWASV) and a back-propagation artificial neural network (BP-ANN) has been proposed. The influence of the Cu2+ concentration on the stripping response to Cd2+ was studied. In addition, the effect of the ferrocyanide concentration on the SWASV detection of...
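
    The combination described here, stripping-voltammetry features fed to a back-propagation network that corrects for the Cu2+ interference, can be sketched with a generic regressor. The training data below are synthetic and the use of scikit-learn's MLPRegressor is an assumption for illustration; the paper's own architecture and measurements are not reproduced.

        # Sketch: a small back-propagation ANN mapping SWASV features (Cd peak
        # current, Cu peak current) to Cd2+ concentration. Data are synthetic; the
        # "true" response includes a made-up suppression of the Cd signal by Cu.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        cd = rng.uniform(1, 50, 400)                     # Cd2+ concentration (a.u.)
        cu = rng.uniform(0, 100, 400)                    # interfering Cu2+ (a.u.)

        i_cd = cd * (1.0 - 0.004 * cu) + rng.normal(0, 0.3, 400)   # suppressed Cd peak current
        i_cu = 0.8 * cu + rng.normal(0, 0.3, 400)                   # Cu peak current

        X = np.column_stack([i_cd, i_cu])
        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        model.fit(X[:300], cd[:300])

        pred = model.predict(X[300:])
        print("mean absolute error on held-out samples:", round(np.abs(pred - cd[300:]).mean(), 2))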

  15. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN

    Directory of Open Access Journals (Sweden)

    Sanz Nuria

    2009-10-01

    Full Text Available Abstract Background Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. Methods In order to summarise the current situation in relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN developed a multinational survey in ten European countries. However a lack of common classification framework for major categories of clinical research was identified, and therefore reaching an agreement on a common classification was the initial step in the development of the survey. Results The ECRIN transnational working group on regulation, composed of experts in the field of clinical research from ten European countries, defined seven major categories of clinical research that seem relevant from both the regulatory and the scientific points of view, and correspond to congruent definitions in all countries: clinical trials on medicinal products; clinical trials on medical devices; other therapeutic trials (including surgery trials, transplantation trials, transfusion trials, trials with cell therapy, etc.; diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.; and epidemiology studies. Our classification was essential to develop a survey focused on protocol submission to ethics committees and competent authorities, procedures for amendments, requirements for sponsor and insurance, and adverse event reporting following five main phases: drafting, consensus, data collection, validation, and finalising. Conclusion The list of clinical research categories as used for the survey could serve as a contribution to the, much needed, task of harmonisation and

  16. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN).

    Science.gov (United States)

    Kubiak, Christine; de Andres-Trelles, Fernando; Kuchinke, Wolfgang; Huemer, Karl-Heinz; Thirstrup, Steffen; Whitfield, Kate; Libersa, Christian; Barraud, Béatrice; Grählert, Xina; Dreier, Gabriele; Grychtol, Ruth; Temesvari, Zsuzsa; Blasko, Gyorgy; Kardos, Gabriella; O'Brien, Timothy; Cooney, Margaret; Gaynor, Siobhan; Schieppati, Arrigo; Sanz, Nuria; Hernandez, Raquel; Asker-Hagelberg, Charlotte; Johansson, Hanna; Bourne, Sue; Byrne, Jane; Asghar, Adeeba; Husson, Jean-Marc; Gluud, Christian; Demotes-Mainard, Jacques

    2009-10-16

    Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. In order to summarise the current situation in relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN) developed a multinational survey in ten European countries. However a lack of common classification framework for major categories of clinical research was identified, and therefore reaching an agreement on a common classification was the initial step in the development of the survey. The ECRIN transnational working group on regulation, composed of experts in the field of clinical research from ten European countries, defined seven major categories of clinical research that seem relevant from both the regulatory and the scientific points of view, and correspond to congruent definitions in all countries: clinical trials on medicinal products; clinical trials on medical devices; other therapeutic trials (including surgery trials, transplantation trials, transfusion trials, trials with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification was essential to develop a survey focused on protocol submission to ethics committees and competent authorities, procedures for amendments, requirements for sponsor and insurance, and adverse event reporting following five main phases: drafting, consensus, data collection, validation, and finalising. The list of clinical research categories as used for the survey could serve as a contribution to the, much needed, task of harmonisation and simplification of the regulatory requirements for clinical research

  17. Common definition for categories of clinical research: a prerequisite for a survey on regulatory requirements by the European Clinical Research Infrastructures Network (ECRIN)

    LENUS (Irish Health Repository)

    Kubiak, Christine

    2009-10-16

    Abstract Background Thorough knowledge of the regulatory requirements is a challenging prerequisite for conducting multinational clinical studies in Europe given their complexity and heterogeneity in regulation and perception across the EU member states. Methods In order to summarise the current situation in relation to the wide spectrum of clinical research, the European Clinical Research Infrastructures Network (ECRIN) developed a multinational survey in ten European countries. However a lack of common classification framework for major categories of clinical research was identified, and therefore reaching an agreement on a common classification was the initial step in the development of the survey. Results The ECRIN transnational working group on regulation, composed of experts in the field of clinical research from ten European countries, defined seven major categories of clinical research that seem relevant from both the regulatory and the scientific points of view, and correspond to congruent definitions in all countries: clinical trials on medicinal products; clinical trials on medical devices; other therapeutic trials (including surgery trials, transplantation trials, transfusion trials, trials with cell therapy, etc.); diagnostic studies; clinical research on nutrition; other interventional clinical research (including trials in complementary and alternative medicine, trials with collection of blood or tissue samples, physiology studies, etc.); and epidemiology studies. Our classification was essential to develop a survey focused on protocol submission to ethics committees and competent authorities, procedures for amendments, requirements for sponsor and insurance, and adverse event reporting following five main phases: drafting, consensus, data collection, validation, and finalising. Conclusion The list of clinical research categories as used for the survey could serve as a contribution to the, much needed, task of harmonisation and simplification of the

  18. Neural networks for nuclear spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Keller, P.E.; Kangas, L.J.; Hashem, S.; Kouzes, R.T. [Pacific Northwest Lab., Richland, WA (United States)] [and others]

    1995-12-31

    In this paper two applications of artificial neural networks (ANNs) in nuclear spectroscopy analysis are discussed. In the first application, an ANN assigns quality coefficients to alpha particle energy spectra. These spectra are used to detect plutonium contamination in the work environment. The quality coefficients represent the levels of spectral degradation caused by miscalibration and foreign matter affecting the instruments. A set of spectra was labeled with quality coefficients by an expert and used to train the ANN expert system. Our investigation shows that the expert knowledge of spectral quality can be transferred to an ANN system. The second application combines a portable gamma-ray spectrometer with an ANN. In this system the ANN is used to automatically identify radioactive isotopes in real-time from their gamma-ray spectra. Two neural network paradigms are examined: the linear perceptron and the optimal linear associative memory (OLAM). A comparison of the two paradigms shows that OLAM is superior to the linear perceptron for this application. Both networks have a linear response and are useful in determining the composition of an unknown sample when the spectrum of the unknown is a linear superposition of known spectra. One feature of this technique is that it uses the whole spectrum in the identification process instead of only the individual photo-peaks. For this reason, it is potentially more useful for processing data from lower resolution gamma-ray spectrometers. This approach has been tested with data generated by Monte Carlo simulations and with field data from sodium iodide and germanium detectors. With the ANN approach, the intense computation takes place during the training process. Once the network is trained, normal operation consists of propagating the data through the network, which results in rapid identification of samples. This approach is useful in situations that require fast response where precise quantification is less important.
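
    The whole-spectrum identification idea, treating an unknown spectrum as a linear superposition of known library spectra, reduces to a linear inversion that an OLAM or perceptron effectively learns. The sketch below solves it directly with non-negative least squares on synthetic spectra; the library, peak positions and mixing fractions are invented for illustration.

        # Sketch: decompose an "unknown" gamma-ray spectrum as a non-negative
        # linear combination of known library spectra. All spectra are synthetic.
        import numpy as np
        from scipy.optimize import nnls

        channels = np.arange(512)

        def peak(center, width, height):
            return height * np.exp(-0.5 * ((channels - center) / width) ** 2)

        # Toy library: three isotopes, each a couple of photopeaks on a continuum
        library = np.column_stack([
            peak(120, 4, 1.0) + peak(330, 5, 0.6) + 0.02,
            peak(200, 4, 0.9) + 0.03,
            peak(260, 4, 0.8) + peak(410, 5, 0.5) + 0.01,
        ])

        true_weights = np.array([0.7, 0.0, 0.3])
        unknown = library @ true_weights + np.random.default_rng(0).normal(0, 0.01, 512)

        weights, _ = nnls(library, unknown)
        print("estimated composition:", np.round(weights, 3))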

  19. Full left ventricle quantification via deep multitask relationships learning.

    Science.gov (United States)

    Xue, Wufeng; Brahm, Gary; Pandey, Sachin; Leung, Stephanie; Li, Shuo

    2018-01-01

    Cardiac left ventricle (LV) quantification is among the most clinically important tasks for identification and diagnosis of cardiac disease. However, it is still a task of great challenge due to the high variability of cardiac structure across subjects and the complexity of temporal dynamics of cardiac sequences. Full quantification, i.e., to simultaneously quantify all LV indices including two areas (cavity and myocardium), six regional wall thicknesses (RWT), three LV dimensions, and one phase (diastole or systole), is even more challenging, since the ambiguous correlations existing among these indices may impinge upon the convergence and generalization of the learning procedure. In this paper, we propose a deep multitask relationship learning network (DMTRL) for full LV quantification. The proposed DMTRL first obtains expressive and robust cardiac representations with a deep convolutional neural network (CNN); it then models the temporal dynamics of cardiac sequences effectively with two parallel recurrent neural network (RNN) modules. After that, it estimates the three types of LV indices under a Bayesian framework that is capable of learning multitask relationships automatically, and estimates the cardiac phase with a softmax classifier. The CNN representation, RNN temporal modeling, Bayesian multitask relationship learning, and softmax classifier establish an effective and integrated network which can be learned in an end-to-end manner. The obtained task covariance matrix captures the correlations existing among these indices and therefore leads to accurate estimation of LV indices and cardiac phase. Experiments on MR sequences of 145 subjects show that DMTRL achieves highly accurate prediction, with average mean absolute errors of 180 mm², 1.39 mm, and 2.51 mm for areas, RWT, and dimensions, respectively, and an error rate of 8.2% for the phase classification. This endows our method with great potential in comprehensive clinical assessment of global, regional and dynamic cardiac function
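
    The sketch below, written in PyTorch, illustrates the general shape of such a network: a per-frame CNN encoder, a recurrent module over the cardiac sequence, regression heads for the three types of LV indices, and a softmax head for the phase. Layer sizes are illustrative, and the Bayesian multitask relationship learning of DMTRL is not reproduced here.

```python
import torch
import torch.nn as nn

class LVQuantNet(nn.Module):
    """Minimal sketch: per-frame CNN encoder, RNN over the sequence,
    regression heads for LV indices and a softmax head for cardiac phase.
    Layer sizes are illustrative, not those of the published DMTRL model."""
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head_areas = nn.Linear(hidden, 2)    # cavity, myocardium
        self.head_rwt = nn.Linear(hidden, 6)      # six regional wall thicknesses
        self.head_dims = nn.Linear(hidden, 3)     # three LV dimensions
        self.head_phase = nn.Linear(hidden, 2)    # diastole / systole logits

    def forward(self, x):                 # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(feats)            # (batch, time, hidden)
        return (self.head_areas(h), self.head_rwt(h),
                self.head_dims(h), self.head_phase(h))

# Usage: a batch of 4 sequences of 20 frames of 80x80 images.
net = LVQuantNet()
areas, rwt, dims, phase_logits = net(torch.randn(4, 20, 1, 80, 80))
print(areas.shape, rwt.shape, dims.shape, phase_logits.shape)
```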

  20. Development of artificial neural network models based on experimental data of response surface methodology to establish the nutritional requirements of digestible lysine, methionine, and threonine in broiler chicks.

    Science.gov (United States)

    Mehri, M

    2012-12-01

    An artificial neural network (ANN) approach was used to develop feed-forward multilayer perceptron models to estimate the nutritional requirements of digestible lysine (dLys), methionine (dMet), and threonine (dThr) in broiler chicks. Sixty data lines representing the response of broiler chicks during 3 to 16 d of age to dietary levels of dLys (0.88-1.32%), dMet (0.42-0.58%), and dThr (0.53-0.87%) were obtained from the literature and used to train the networks. The prediction values of the ANN were compared with those of response surface methodology to evaluate the fitness of these 2 methods. The models were tested using R², mean absolute deviation, mean absolute percentage error, and absolute average deviation. A random search algorithm was used to optimize the developed ANN models to estimate the optimal values of dietary dLys, dMet, and dThr. The ANN models were used to assess the relative importance of each dietary input on bird performance using sensitivity analysis. The statistical evaluations revealed the higher accuracy of the ANN in predicting bird performance compared with response surface methodology models. The optimization results showed that the maximum BW gain may be obtained with dietary levels of 1.11, 0.51, and 0.78% of dLys, dMet, and dThr, respectively. Minimum feed conversion ratio may be achieved with dietary levels of 1.13, 0.54, and 0.78% of dLys, dMet, and dThr, respectively. The sensitivity analysis on the models indicated that dietary Lys is the most important variable in the growth performance of broiler chicks, followed by dietary Thr and Met. The results of this research revealed that the experimental data of a response-surface-methodology design could be successfully used to develop a well-designed ANN for pattern recognition of bird growth and optimization of nutritional requirements. The comparison between the 2 methods also showed that the statistical methods may have little effect on the ideal ratios of dMet and dThr to dLys in
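
    The random-search step can be illustrated independently of the fitted network: candidate diets are sampled within the experimental ranges and scored by the model, and the best candidate is retained. In the sketch below the response function is a hypothetical stand-in for the trained ANN, with its peak placed at the optima reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in for the fitted ANN: predicted BW gain as a smooth
# function of digestible lysine, methionine, and threonine (%), peaking near
# the optima reported in the abstract.  A real study would call the trained
# network here instead.
def predicted_bw_gain(dlys, dmet, dthr):
    return -((dlys - 1.11) ** 2 + (dmet - 0.51) ** 2 + (dthr - 0.78) ** 2)

# Random search within the experimental ranges given in the abstract.
n = 100_000
dlys = rng.uniform(0.88, 1.32, n)
dmet = rng.uniform(0.42, 0.58, n)
dthr = rng.uniform(0.53, 0.87, n)

scores = predicted_bw_gain(dlys, dmet, dthr)
best = np.argmax(scores)
print(f"best dLys={dlys[best]:.3f}%, dMet={dmet[best]:.3f}%, dThr={dthr[best]:.3f}%")
```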

  1. Overlay networks toward information networking

    CERN Document Server

    Tarkoma, Sasu

    2010-01-01

    With their ability to solve problems in massive information distribution and processing, while keeping scaling costs low, overlay systems represent a rapidly growing area of R&D with important implications for the evolution of Internet architecture. Inspired by the author's articles on content based routing, Overlay Networks: Toward Information Networking provides a complete introduction to overlay networks. Examining what they are and what kind of structures they require, the text covers the key structures, protocols, and algorithms used in overlay networks. It reviews the current state of th

  2. Hybrid Neural-Network: Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics Developed and Demonstrated

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2002-01-01

    As part of the NASA Aviation Safety Program, a unique model-based diagnostics method that employs neural networks and genetic algorithms for aircraft engine performance diagnostics has been developed and demonstrated at the NASA Glenn Research Center against a nonlinear gas turbine engine model. Neural networks are applied to estimate the internal health condition of the engine, and genetic algorithms are used for sensor fault detection, isolation, and quantification. This hybrid architecture combines the excellent nonlinear estimation capabilities of neural networks with the capability to rank the likelihood of various faults given a specific sensor suite signature. The method requires a significantly smaller data training set than a neural network approach alone does, and it performs the combined engine health monitoring objectives of performance diagnostics and sensor fault detection and isolation in the presence of nominal and degraded engine health conditions.

  3. Communications infrastructure requirements for telemedicine/telehealth in the context of planning for and responding to natural disasters: Considering the need for shared regional networks

    Science.gov (United States)

    Scott, John Carver

    1991-01-01

    During the course of recent years the frequency and magnitude of major disasters - of natural, technological, or ecological origin - have made the world community dramatically aware of the immense losses of human life and economic resources that are caused regularly by such calamities. Particularly hard hit are developing countries, for whom the magnitude of disasters frequently outstrips the ability of the society to cope with them. In many cases this situation can be prevented, and the recent trend in disaster management has been to emphasize the importance of preparedness and mitigation as a means of prevention. In cases of disaster, a system is needed to respond to relief requirements, particularly the delivery of medical care. There is no generic telecommunications infrastructure appropriate for the variety of applications in medical care and disaster management. The need to integrate telemedicine/telehealth into shared regional disaster management telecommunications networks is discussed. Focus is on the development of infrastructure designed to serve the needs of disaster prone regions of the developing world.

  4. The H3K27 Demethylase JMJD3 Is Required for Maintenance of the Embryonic Respiratory Neuronal Network, Neonatal Breathing, and Survival

    Directory of Open Access Journals (Sweden)

    Thomas Burgold

    2012-11-01

    Full Text Available JMJD3 (KDM6B) antagonizes Polycomb silencing by demethylating lysine 27 on histone H3. The interplay of methyltransferases and demethylases at this residue is thought to underlie critical cell fate transitions, and the dynamics of H3K27me3 during neurogenesis posited for JMJD3 a critical role in the acquisition of neural fate. Despite evidence of its involvement in early neural commitment, however, its role in the emergence and maturation of the mammalian CNS remains unknown. Here, we inactivated Jmjd3 in the mouse and found that its loss causes perinatal lethality with the complete and selective disruption of the pre-Bötzinger complex (PBC), the pacemaker of the respiratory rhythm generator. Through genetic and electrophysiological approaches, we show that the enzymatic activity of JMJD3 is selectively required for the maintenance of the PBC and controls critical regulators of PBC activity, uncovering an unanticipated role of this enzyme in the late structuring and function of neuronal networks.

  5. Mapping and Quantification of Vascular Branching in Plants, Animals and Humans by VESGEN Software

    Science.gov (United States)

    Parsons-Wingerter, P. A.; Vickerman, M. B.; Keith, P. A.

    2010-01-01

    Humans face daunting challenges in the successful exploration and colonization of space, including adverse alterations in gravity and radiation. The Earth-determined biology of plants, animals and humans is significantly modified in such extraterrestrial environments. One physiological requirement shared by larger plants and animals with humans is a complex, highly branching vascular system that is dynamically responsive to cellular metabolism, immunological protection and specialized cellular/tissue function. VESsel GENeration (VESGEN) Analysis has been developed as a mature beta version, pre-release research software for mapping and quantification of the fractal-based complexity of vascular branching. Alterations in vascular branching pattern can provide informative read-outs of altered vascular regulation. Originally developed for biomedical applications in angiogenesis, VESGEN 2D has provided novel insights into the cytokine, transgenic and therapeutic regulation of angiogenesis, lymphangiogenesis and other microvascular remodeling phenomena. Vascular trees, networks and tree-network composites are mapped and quantified. Applications include disease progression from clinical ophthalmic images of the human retina; experimental regulation of vascular remodeling in the mouse retina; avian and mouse coronary vasculature, and other experimental models in vivo. We envision that altered branching in the leaves of plants studied on the ISS, such as Arabidopsis thaliana, can also be analyzed.

  6. Direct Quantification of Cd2+ in the Presence of Cu2+ by a Combination of Anodic Stripping Voltammetry Using a Bi-Film-Modified Glassy Carbon Electrode and an Artificial Neural Network

    Science.gov (United States)

    Zhao, Guo; Wang, Hui; Liu, Gang

    2017-01-01

    In this study, a novel method based on a Bi/glassy carbon electrode (Bi/GCE) for quantitatively and directly detecting Cd2+ in the presence of Cu2+ without further electrode modifications by combining square-wave anodic stripping voltammetry (SWASV) and a back-propagation artificial neural network (BP-ANN) has been proposed. The influence of the Cu2+ concentration on the stripping response to Cd2+ was studied. In addition, the effect of the ferrocyanide concentration on the SWASV detection of Cd2+ in the presence of Cu2+ was investigated. A BP-ANN with two inputs and one output was used to establish the nonlinear relationship between the concentration of Cd2+ and the stripping peak currents of Cu2+ and Cd2+. The factors affecting the SWASV detection of Cd2+ and the key parameters of the BP-ANN were optimized. Moreover, the direct calibration model (i.e., adding 0.1 mM ferrocyanide before detection), the BP-ANN model and other prediction models were compared to verify the prediction performance of these models in terms of their mean absolute errors (MAEs), root mean square errors (RMSEs) and correlation coefficients. The BP-ANN model exhibited higher prediction accuracy than the direct calibration model and the other prediction models. Finally, the proposed method was used to detect Cd2+ in soil samples with satisfactory results. PMID:28671628
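
    The core of the chemometric step is a small regression network that maps the two stripping peak currents (Cd and Cu) to the Cd2+ concentration. The sketch below uses scikit-learn's MLPRegressor on synthetic calibration data whose interference pattern is purely illustrative; it is not the authors' model or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic calibration data: the Cd stripping peak current depends on the
# Cd concentration but is suppressed by Cu; all values are purely illustrative.
cd = rng.uniform(5, 100, 300)            # Cd2+ concentration (ug/L)
cu = rng.uniform(5, 200, 300)            # Cu2+ concentration (ug/L)
i_cd = 0.8 * cd / (1 + 0.004 * cu) + rng.normal(0, 0.5, 300)   # Cd peak current
i_cu = 0.05 * cu + rng.normal(0, 0.2, 300)                     # Cu peak current

X = np.column_stack([i_cd, i_cu])        # two inputs: the two peak currents
y = cd                                   # one output: Cd2+ concentration

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X[:250], y[:250])

pred = model.predict(X[250:])
mae = np.mean(np.abs(pred - y[250:]))
print(f"hold-out MAE: {mae:.2f} ug/L")
```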

  8. Adenovirus Particle Quantification in Cell Lysates Using Light Scattering.

    Science.gov (United States)

    Hohl, Adrian; Ramms, Anne Sophie; Dohmen, Christian; Mantwill, Klaus; Bielmeier, Andrea; Kolk, Andreas; Ruppert, Andreas; Nawroth, Roman; Holm, Per Sonne

    2017-10-01

    Adenoviral vector production for therapeutic applications is a well-established routine process. However, current methods for measurement of adenovirus particle titers as a quality characteristic require highly purified virus preparations. While purified virus is typically obtained in the last step of downstream purification, rapid and reliable methods for adenovirus particle quantification in intermediate products and crude lysates, which would allow optimization and validation of cell cultures and intermediate downstream processing steps, are currently not at hand. Light scattering is an established method for measuring the size of virus particles, but cell-derived impurities have so far made adequate quantification of adenovirus particles in cell lysates by light scattering impossible. This report describes a new method that uses light scattering to measure virus concentration in nonpurified, enzymatically conditioned cell lysates. Samples are incubated with phospholipase A2 and benzonase and filtered through a 0.22 μm filter cartridge prior to quantification by light scattering. Our results show that this treatment provides a precise method for fast and easy determination of total adenovirus particle numbers in cell lysates and is useful to monitor virus recovery throughout all downstream processing.

  9. Quantification of myocardial perfusion by cardiovascular magnetic resonance

    Directory of Open Access Journals (Sweden)

    Jerosch-Herold Michael

    2010-10-01

    Full Text Available Abstract The potential of contrast-enhanced cardiovascular magnetic resonance (CMR for a quantitative assessment of myocardial perfusion has been explored for more than a decade now, with encouraging results from comparisons with accepted "gold standards", such as microspheres used in the physiology laboratory. This has generated an increasing interest in the requirements and methodological approaches for the non-invasive quantification of myocardial blood flow by CMR. This review provides a synopsis of the current status of the field, and introduces the reader to the technical aspects of perfusion quantification by CMR. The field has reached a stage, where quantification of myocardial perfusion is no longer a claim exclusive to nuclear imaging techniques. CMR may in fact offer important advantages like the absence of ionizing radiation, high spatial resolution, and an unmatched versatility to combine the interrogation of the perfusion status with a comprehensive tissue characterization. Further progress will depend on successful dissemination of the techniques for perfusion quantification among the CMR community.

  10. The necessity of operational risk management and quantification

    Directory of Open Access Journals (Sweden)

    Barbu Teodora Cristina

    2008-04-01

    Full Text Available Starting from the fact that well-performing financial institutions have programmes and management procedures for banking risks, whose main objective is to minimize the probability of risk occurrence and the bank's potential exposure, this paper presents methods for managing and quantifying operational risk. It also presents how the minimum capital requirement for operational risk is determined. The first part presents the conceptual approach to operational risk from the point of view of the financial institutions exposed to this type of risk. The second part describes management and evaluation methods for operational risk. The final part of this article presents the approach adopted by a financial institution with a precise purpose: the quantification of the minimum capital requirements for operational risk.

  11. EEG-Based Quantification of Cortical Current Density and Dynamic Causal Connectivity Generalized across Subjects Performing BCI-Monitored Cognitive Tasks

    Directory of Open Access Journals (Sweden)

    Hristos Courellis

    2017-05-01

    Full Text Available Quantification of dynamic causal interactions among brain regions constitutes an important component of conducting research and developing applications in experimental and translational neuroscience. Furthermore, cortical networks with dynamic causal connectivity in brain-computer interface (BCI) applications offer a more comprehensive view of brain states implicated in behavior than do individual brain regions. However, models of cortical network dynamics are difficult to generalize across subjects because current electroencephalography (EEG) signal analysis techniques are limited in their ability to reliably localize sources across subjects. We propose an algorithmic and computational framework for identifying cortical networks across subjects in which dynamic causal connectivity is modeled among user-selected cortical regions of interest (ROIs). We demonstrate the strength of the proposed framework using a "reach/saccade to spatial target" cognitive task performed by 10 right-handed individuals. Modeling of causal cortical interactions was accomplished through measurement of cortical activity using EEG, application of independent component clustering to identify cortical ROIs as network nodes, estimation of cortical current density using cortically constrained low resolution electromagnetic brain tomography (cLORETA), multivariate autoregressive (MVAR) modeling of representative cortical activity signals from each ROI, and quantification of the dynamic causal interactions among the identified ROIs using the short-time direct directed transfer function (SdDTF). The resulting cortical network and the computed causal dynamics among its nodes exhibited physiologically plausible behavior, consistent with past results reported in the literature. This physiological plausibility of the results strengthens the framework's applicability in reliably capturing complex brain functionality, which is required by applications such as diagnostics and BCI.
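
    The MVAR step of such a pipeline can be illustrated with a small example: fitting a vector autoregressive model to two synthetic ROI signals in which one region drives the other. The directed-transfer-function measures themselves are derived from the fitted model's transfer function and are not computed in this sketch.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Synthetic "cortical ROI" signals: ROI 1 drives ROI 0 with a one-sample lag.
n = 2000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t, 1] = 0.7 * x[t - 1, 1] + rng.normal()
    x[t, 0] = 0.5 * x[t - 1, 0] + 0.4 * x[t - 1, 1] + rng.normal()

# Fit a multivariate autoregressive (MVAR) model of order 1.
results = VAR(x).fit(maxlags=1)
print(results.coefs[0])          # lag-1 coefficient matrix; entry [0, 1] ~ 0.4

# Directed-transfer-function style measures would be derived from the transfer
# function of this fitted MVAR model; here we only show the fit itself.
```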

  12. Protocol for Quantification of Defects in Natural Fibres for Composites

    OpenAIRE

    Ulrich Andreas Mortensen; Bo Madsen

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for being used for structural composite applications where high quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu’s method. The use of the protocol is demonstrated by examining two types of dif...
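
    The thresholding step of such a protocol can be sketched with scikit-image: Otsu's method selects a global grey-level threshold, after which defect regions are labelled and summarized. The synthetic image and the defect criterion below are illustrative only; the published protocol also uses seeded region growing, which is omitted here.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(3)

# Synthetic grey-level micrograph: bright background, darker "defect" bands.
image = 0.8 + 0.05 * rng.normal(size=(200, 200))
image[60:70, :] -= 0.4          # a transverse defect (e.g. a kink band)
image[140:145, 50:150] -= 0.35  # a second, shorter defect

# Otsu's method picks a global threshold separating defect from fibre.
thresh = threshold_otsu(image)
defects = image < thresh

# Quantify defect content as an area fraction and count individual defects.
labels = label(defects)
area_fraction = defects.mean()
print(f"defect regions: {labels.max()}, defect area fraction: {area_fraction:.3f}")
print([int(r.area) for r in regionprops(labels)])
```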

  14. Validation of the Concept of a Common Typical Time of Disease Duration for Hepatocellular Carcinoma Patients Using the Fisher Information Processing of Tumor Imaging Results Combined With Network Phenotyping Strategy Quantification of Individual Patient Clinical Profile Patterns.

    Science.gov (United States)

    Pančoška, Petr; Skála, Lubomír; Nešetřil, Jaroslav; Carr, Brian I

    2015-08-01

    A primary goal of current clinical cancer research is the identification of prognostic tumor subtypes. It is increasingly clear that tumor growth depends on both internal tumor factors and factors that are external to the tumor, such as the microenvironment. We recently showed that parameter values alone are less important than the patterns formed by all patient parameters together for the identification of prognostic subtypes, and we have developed a network phenotyping strategy method to quantitatively describe the dependency of the tumor on the environment and to characterize hepatocellular carcinoma (HCC) subtypes. We have also shown that information about tumor mass, together with patterns of other prognostic factors, is related to survival. We now use a different patient cohort to validate this prognostic approach. A main finding is our identification of a common time of total disease duration (TDD) for every HCC patient. Clinical prognosis at the time of baseline patient evaluation is then calculable as the difference between TDD and the time from disease onset to diagnosis (T(onset)). We show that it is the total pattern of all parameter values, and the differences in the relationships between this pattern and a reference pattern, that, together with the tumor mass, best reflect the patient's prognosis at baseline. Our approach led us to identify 15 different composite HCC subtypes. Our results highlight the nearly identical TDD in all patients, which must therefore be a characteristic of the HCC disease, as opposed to the variable quantity T(onset), which is affected by multiple macro- and micro-environmental factors. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. Social Networks and Network Structures

    Science.gov (United States)

    2006-11-01

    Fragmentary slide excerpt (topics only): research in command and control using Latent Semantic Analysis (team communication, emergent team dynamics, shared situation awareness); dynamic network analysis of information-technology requirements; essentials of Latent Semantic Analysis; communication analysis with the goal of automatic monitoring.

  16. Evidence for a Proton Transfer Network and a Required Persulfide-Bond-Forming Cysteine Residue in Ni-Containing Carbon Monoxide Dehydrogenases

    Energy Technology Data Exchange (ETDEWEB)

    Eun Jin Kim; Jian Feng; Matthew R. Bramlett; Paul A. Lindahl

    2004-05-18

    Carbon monoxide dehydrogenase from Moorella thermoacetica catalyzes the reversible oxidation of CO to CO2 at a nickel-iron-sulfur active site called the C-cluster. Mutants of a proposed proton transfer pathway and of a cysteine residue recently found to form a persulfide bond with the C-cluster were characterized. Four semi-conserved histidine residues were individually mutated to alanine. His116 and His122 were essential to catalysis, while His113 and His119 attenuated catalysis but were not essential. Significant activity was "rescued" by a double mutant where His116 was replaced by Ala and His was also introduced at position 115. Activity was also rescued in double mutants where His122 was replaced by Ala and His was simultaneously introduced at either position 121 or 123. Activity was also "rescued" by replacing His with Cys at position 116. Mutation of conserved Lys587 near the C-cluster attenuated activity but did not eliminate it. Activity was virtually abolished in a double mutant where Lys587 and His113 were both changed to Ala. Mutations of conserved Asn284 also attenuated activity. These effects suggest the presence of a network of amino acid residues responsible for proton transfer rather than a single linear pathway. The Ser mutant of the persulfide-forming Cys316 was essentially inactive and displayed no EPR signals originating from the C-cluster. Electronic absorption and metal analysis suggests that the C-cluster is absent in this mutant. The persulfide bond appears to be essential for either the assembly or stability of the C-cluster, and/or for eliciting the redox chemistry of the C-cluster required for catalytic activity.

  17. Uncertainty Quantification for Cargo Hold Fires

    CERN Document Server

    DeGennaro, Anthony M; Martinelli, Luigi; Rowley, Clarence W

    2015-01-01

    The purpose of this study is twofold -- first, to introduce the application of high-order discontinuous Galerkin methods to buoyancy-driven cargo hold fire simulations, second, to explore statistical variation in the fluid dynamics of a cargo hold fire given parameterized uncertainty in the fire source location and temperature. Cargo hold fires represent a class of problems that require highly-accurate computational methods to simulate faithfully. Hence, we use an in-house discontinuous Galerkin code to treat these flows. Cargo hold fires also exhibit a large amount of uncertainty with respect to the boundary conditions. Thus, the second aim of this paper is to quantify the resulting uncertainty in the flow, using tools from the uncertainty quantification community to ensure that our efforts require a minimal number of simulations. We expect that the results of this study will provide statistical insight into the effects of fire location and temperature on cargo fires, and also assist in the optimization of f...
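
    The statistical side of such a study can be sketched as simple Monte Carlo propagation: sample the uncertain fire-source location and temperature from assumed distributions, evaluate a quantity of interest, and summarize the output distribution. The distributions and the surrogate below are placeholders; the actual study evaluates a discontinuous Galerkin flow solver, not a closed-form function.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed parameter uncertainty: fire-source location (x, y in metres within a
# notional hold cross-section) and source temperature (K).  Distributions and
# the surrogate below are illustrative placeholders, not the paper's model.
n = 10_000
x = rng.uniform(0.5, 4.5, n)
y = rng.uniform(0.5, 1.5, n)
temperature = rng.normal(1200.0, 100.0, n)

def ceiling_temperature(x, y, temperature):
    """Placeholder surrogate for a quantity of interest (e.g. peak ceiling
    temperature); a real study would run the flow solver here."""
    return 300.0 + 0.4 * temperature * np.exp(-((x - 2.5) ** 2 + (y - 1.0) ** 2))

qoi = ceiling_temperature(x, y, temperature)
print(f"mean = {qoi.mean():.1f} K, std = {qoi.std():.1f} K, "
      f"95th percentile = {np.percentile(qoi, 95):.1f} K")
```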

  18. A modified extraction protocol enables detection and quantification of celiac disease-related gluten proteins from wheat

    NARCIS (Netherlands)

    Broeck, van den H.C.; America, A.H.P.; Smulders, M.J.M.; Bosch, H.J.; Hamer, R.J.; Gilissen, L.J.W.J.; Meer, van der I.M.

    2009-01-01

    The detection, analysis, and quantification of individual celiac disease (CD) immune responsive gluten proteins in wheat and related cereals (barley, rye) require an adequate and reliable extraction protocol. Because different types of gluten proteins behave differently in terms of solubility,

  19. Comparative cytotoxic and spectrophotometric quantification of ...

    African Journals Online (AJOL)

    The comparative cytotoxic and spectrophotometric quantification of phytochemicals of the methanol extracts of the leaf and root bark of Securinega virosa was carried out. Phytochemical screening and spectrophotometric quantification of total flavonoids and phenolics of the extracts were carried out using standard reported ...

  20. [Quantification of tissue perfusion with novel ultrasound methods].

    Science.gov (United States)

    Krix, M; Kauczor, H-U; Delorme, S

    2003-10-01

    Perfusion is an important parameter of tissue vitality, e.g. of the myocardium, brain, or kidney. In malignant tumours, perfusion is of particular interest for characterization and prognosis. In addition, new pro- or anti-angiogenic therapies require functional imaging that is suitable for quantifying vascularity. Sonographic methods for the detection of microvascularity, particularly in vessels smaller than those amenable to Doppler ultrasound, are reviewed. The main focus is the explanation of contrast-enhanced sonography using replenishment kinetics of microbubbles, which provides a comprehensive quantification of tissue perfusion. Alterations of the microvascularity, e.g. under anti-angiogenic therapy, can be depicted in experimental studies using this novel approach. Further clinical applications can be the quantification of perfusion in the myocardium, the brain, or the kidney. New approaches to optimize the theoretical model describing the replenishment, and novel technical developments, are discussed.
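
    Replenishment after a destructive ultrasound pulse is commonly modelled as S(t) = A(1 - exp(-βt)), with A related to blood volume, β to blood velocity, and the product A·β to perfusion. The sketch below fits this model to synthetic intensity data with SciPy; the parameter values are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def replenishment(t, A, beta):
    """Classic microbubble replenishment model: S(t) = A * (1 - exp(-beta * t))."""
    return A * (1.0 - np.exp(-beta * t))

# Synthetic contrast-intensity curve after a destructive ultrasound pulse.
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 50)                     # seconds
signal = replenishment(t, A=12.0, beta=0.8) + rng.normal(0, 0.3, t.size)

(A_fit, beta_fit), _ = curve_fit(replenishment, t, signal, p0=(10.0, 0.5))
print(f"A ~ blood volume: {A_fit:.2f}, beta ~ blood velocity: {beta_fit:.2f} 1/s, "
      f"A*beta ~ perfusion: {A_fit * beta_fit:.2f}")
```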

  1. A highly sensitive method for quantification of iohexol

    DEFF Research Database (Denmark)

    Schulz, A.; Boeringer, F.; Swifka, J.

    2014-01-01

    Iohexol (1-N,3-N-bis(2,3-dihydroxypropyl)-5-IN-(2,3-dihydroxypropyl) acetamide-2,4,6-triiodobenzene1,3-dicarboxamide) is used for accurate determination of the glomerular filtration rate (GFR) in chronic kidney disease (CKD) patients. However, high iohexol amounts might lead to adverse effects in...... in organisms. In order to minimize the iohexol dosage required for the GFR determination in humans, the development of a sensitive quantification method is essential. Therefore, the objective of our preclinical study was to establish and validate a simple and robust liquid......-spectrometry based method has been proved to be sensitive, selective and suitable for the quantification of iohexol in serum. Due to the high sensitivity of this novel method, the iohexol application dose as well as the sampling time in the clinical routine could be reduced in the future in order to further minimize side...

  2. Critical points of DNA quantification by real-time PCR – effects of DNA extraction method and sample matrix on quantification of genetically modified organisms

    Directory of Open Access Journals (Sweden)

    Žel Jana

    2006-08-01

    Full Text Available Abstract Background Real-time PCR is the technique of choice for nucleic acid quantification. In the field of detection of genetically modified organisms (GMOs quantification of biotech products may be required to fulfil legislative requirements. However, successful quantification depends crucially on the quality of the sample DNA analyzed. Methods for GMO detection are generally validated on certified reference materials that are in the form of powdered grain material, while detection in routine laboratories must be performed on a wide variety of sample matrixes. Due to food processing, the DNA in sample matrixes can be present in low amounts and also degraded. In addition, molecules of plant origin or from other sources that affect PCR amplification of samples will influence the reliability of the quantification. Further, the wide variety of sample matrixes presents a challenge for detection laboratories. The extraction method must ensure high yield and quality of the DNA obtained and must be carefully selected, since even components of DNA extraction solutions can influence PCR reactions. GMO quantification is based on a standard curve, therefore similarity of PCR efficiency for the sample and standard reference material is a prerequisite for exact quantification. Little information on the performance of real-time PCR on samples of different matrixes is available. Results Five commonly used DNA extraction techniques were compared and their suitability for quantitative analysis was assessed. The effect of sample matrix on nucleic acid quantification was assessed by comparing 4 maize and 4 soybean matrixes. In addition 205 maize and soybean samples from routine analysis were analyzed for PCR efficiency to assess variability of PCR performance within each sample matrix. Together with the amount of DNA needed for reliable quantification, PCR efficiency is the crucial parameter determining the reliability of quantitative results, therefore it was
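
    Quantification against a standard curve can be summarized in a few lines: the quantification cycle (Cq) is regressed on the log of the target copy number, the slope gives the amplification efficiency E = 10^(-1/slope) - 1, and unknowns are read off the fitted line. The Cq values below are hypothetical.

```python
import numpy as np

# Hypothetical standard-curve data: serial dilutions of a certified reference
# material and the measured quantification cycles (Cq).
copies = np.array([1e5, 1e4, 1e3, 1e2, 1e1])
cq = np.array([18.1, 21.5, 24.9, 28.3, 31.8])

slope, intercept = np.polyfit(np.log10(copies), cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0
print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

# Quantify an unknown sample from its Cq via the standard curve.
cq_unknown = 26.0
copies_unknown = 10 ** ((cq_unknown - intercept) / slope)
print(f"estimated target copies: {copies_unknown:.0f}")
```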

  3. Airborne Network Optimization with Dynamic Network Update

    Science.gov (United States)

    2015-03-26

    ...require small amounts of network bandwidth to perform routing. This thesis advocates the use of Kalman filters to predict network congestion in...airborne networks. Intelligent agents can make use of Kalman filter predictions to make informed decisions to manage communication in airborne networks. The
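
    A one-dimensional Kalman filter tracking a congestion metric illustrates the kind of prediction advocated here; the state model, noise levels, and measurements below are assumptions made for the sketch, not the thesis implementation.

```python
import numpy as np

# Constant-velocity Kalman filter tracking a link-congestion metric
# (e.g. queue occupancy); model and noise levels are illustrative.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (level, trend)
H = np.array([[1.0, 0.0]])               # we observe only the level
Q = np.diag([0.01, 0.01])                # process noise
R = np.array([[0.5]])                    # measurement noise

x = np.zeros(2)                           # state estimate
P = np.eye(2)                             # estimate covariance

rng = np.random.default_rng(11)
measurements = 0.5 + 0.02 * np.arange(50) + rng.normal(0, 0.1, 50)

for z in measurements:
    # Predict.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new congestion measurement.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"predicted congestion next step: {(F @ x)[0]:.3f}")
```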

  4. In situ analysis of tyrosine phosphorylation networks by FLIM on cell arrays.

    Science.gov (United States)

    Grecco, Hernán E; Roda-Navarro, Pedro; Girod, Andreas; Hou, Jian; Frahm, Thomas; Truxius, Dina C; Pepperkok, Rainer; Squire, Anthony; Bastiaens, Philippe I H

    2010-06-01

    Extracellular stimuli are transduced inside the cell by posttranslational modifications (PTMs), such as phosphorylation, of proteins in signaling networks. Insight into the structure of these networks requires quantification of PTM levels in individual cells. Fluorescence resonance energy transfer (FRET) measured by fluorescence lifetime imaging microscopy (FLIM) is a powerful tool to image PTM levels in situ. FLIM on cell arrays that express fluorescent protein fusions can quantify tyrosine phosphorylation patterns in large networks in individual cells. We identified tyrosine kinase substrates by imaging their phosphorylation levels after inhibition of protein tyrosine phosphatases. Analysis of the correlation between protein phosphorylation and expression levels at single cell resolution allowed us to identify positive feedback motifs. Using FLIM on cell arrays (CA-FLIM), we uncovered components that transduce signals from epidermal growth factor receptor.

  5. Cytochrome c oxidase subunit 1-based human RNA quantification to enhance mRNA profiling in forensic biology

    Directory of Open Access Journals (Sweden)

    Dong Zhao

    2017-01-01

    Full Text Available RNA analysis offers many potential applications in forensic science, and molecular identification of body fluids by analysis of cell-specific RNA markers represents a new technique for use in forensic cases. However, because forensic materials are often admixed with nonhuman cellular components, human-specific RNA quantification is required for forensic RNA assays. A quantification assay for human RNA has been developed in the present study with respect to body fluid samples in forensic biology. The quantitative assay is based on real-time reverse transcription-polymerase chain reaction of mitochondrial RNA cytochrome c oxidase subunit I and is capable of RNA quantification with high reproducibility and a wide dynamic range. The human RNA quantification improves the quality of mRNA profiling in the identification of body fluids of saliva and semen because the quantification assay can exclude the influence of nonhuman components and reduce the adverse effects of degraded RNA fragments.

  6. Generation of structural MR images from amyloid PET: Application to MR-less quantification.

    Science.gov (United States)

    Choi, Hongyoon; Lee, Dong Soo

    2017-12-07

    Structural magnetic resonance (MR) images concomitantly acquired with PET images can provide crucial anatomical information for precise quantitative analysis. However, in the clinical setting, not all subjects have a corresponding MR. Here, we developed a model to generate structural MR images from amyloid PET using deep generative networks, and applied it to quantification of cortical amyloid load without structural MR. Methods: We used florbetapir PET and structural MR data from the Alzheimer's Disease Neuroimaging Initiative database. The generative network was trained to generate realistic structural MR images from florbetapir PET images. After training, the model was applied to the quantification of cortical amyloid load. PET images were spatially normalized to the template space using the generated MR, and the standardized uptake value ratio (SUVR) of the target regions was then measured using predefined regions of interest. A real MR-based quantification was used as the gold standard to measure the accuracy of our approach. Other MR-less methods, namely a normal PET template-based, a multi-atlas PET template-based and a PET segmentation-based normalization/quantification method, were also tested. We compared the performance of quantification using the generated MR with that of MR-based and MR-less quantification methods. Results: Generated MR images from florbetapir PET showed visually similar signal patterns to the real MR. The structural similarity index between real and generated MR was 0.91 ± 0.04. The mean absolute error of the SUVR of cortical composite regions estimated by the generated MR-based method was 0.04±0.03, which was significantly smaller than for the other MR-less methods (0.29±0.12 for the normal PET template, 0.12±0.07 for the multi-atlas PET template and 0.08±0.06 for the PET segmentation-based method). Bland-Altman plots revealed that the generated MR-based SUVR quantification was the closest to the SUVR values estimated by the real MR-based method. Conclusion
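
    The SUVR itself is a simple ratio once images are in template space: mean uptake in the target (e.g. cortical composite) region divided by mean uptake in a reference region such as the cerebellum. The sketch below computes it on a toy volume with hypothetical ROI masks.

```python
import numpy as np

def suvr(pet, target_mask, reference_mask):
    """Standardized uptake value ratio: mean uptake in the target ROI divided
    by mean uptake in the reference region."""
    return pet[target_mask].mean() / pet[reference_mask].mean()

# Toy spatially normalized PET volume and ROI masks (in practice these come
# from template-space atlases after MR-based normalization).
rng = np.random.default_rng(2)
pet = rng.uniform(0.5, 1.5, size=(64, 64, 64))
cortical_composite = np.zeros_like(pet, dtype=bool)
cortical_composite[20:40, 20:40, 30:45] = True
cerebellum = np.zeros_like(pet, dtype=bool)
cerebellum[25:40, 25:40, 5:15] = True

print(f"cortical composite SUVR: {suvr(pet, cortical_composite, cerebellum):.3f}")
```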

  7. A Network Traffic Control Enhancement Approach over Bluetooth Networks

    DEFF Research Database (Denmark)

    Son, L.T.; Schiøler, Henrik; Madsen, Ole Brun

    2003-01-01

    This paper analyzes network traffic control issues in Bluetooth data networks as a convex optimization problem. We formulate the problem of maximizing total network flows while minimizing the costs of flows. An adaptive distributed network traffic control scheme is proposed as an approximate solution of the stated optimization problem that satisfies quality of service requirements and topologically induced constraints in Bluetooth networks, such as link capacity and node resource limitations. The proposed scheme is decentralized and complies with frequent changes of topology as well as capacity limitations and flow requirements in the network. Simulation shows that the performance of Bluetooth networks could be improved by applying the adaptive distributed network traffic control scheme...
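
    A toy, centralized version of the stated optimization problem can be written as a linear program: maximize total flow minus flow cost subject to link-capacity constraints. The topology, capacities, and costs below are illustrative, and the paper's scheme solves the problem in a distributed, adaptive fashion rather than with a central solver.

```python
import numpy as np
from scipy.optimize import linprog

# Three flows share two links; topology, capacities, and costs are illustrative.
cost = np.array([0.1, 0.3, 0.2])        # per-unit cost of each flow
objective = -(np.ones(3) - cost)        # maximize (flow - cost); linprog minimizes

# Link-capacity constraints: link A carries flows 1 and 2, link B flows 2 and 3.
A_ub = np.array([[1, 1, 0],
                 [0, 1, 1]], dtype=float)
b_ub = np.array([1.0, 0.8])             # link capacities

# Each flow also has a maximum demand.
bounds = [(0, 0.7), (0, 0.6), (0, 0.5)]

res = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print("optimal flows:", np.round(res.x, 3), "objective:", round(-res.fun, 3))
```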

  8. Ada (Trade Name) Foundation Technology. Volume 9. Software Requirements for WIS (WWMCCS (World Wide Military Command and Control System) Information System) Network Protocol Prototypes.

    Science.gov (United States)

    1986-12-01

    Fragmentary excerpt from the report's reference list and algorithm description: [GAVI 83] Gavish, B., and S. L. Hantler, "An Algorithm for Optimal Route Selection in SNA Networks," IEEE Transactions on Communications, January 1977; related material can be found in [SCHW 80]; two works that deal with non-bifurcated flows are [COUR 81] and [GAVI 83]; followed by Section 4.3, "Description of the Algorithm".

  9. Lung involvement quantification in chest radiographs; Quantificacao de comprometimento pulmonar em radiografias de torax

    Energy Technology Data Exchange (ETDEWEB)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M., E-mail: giacomini@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2014-12-15

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans that would be required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of the exams was developed in Matlab, which creates a 3D representation of the lungs, with the compromised dilated regions inside. The quantification of lung lesions was also performed for the same patients using CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)

  10. Sentient networks

    Energy Technology Data Exchange (ETDEWEB)

    Chapline, G.

    1998-03-01

    The engineering problems of constructing autonomous networks of sensors and data processors that can provide alerts for dangerous situations provide a new context for debating the question whether man-made systems can emulate the cognitive capabilities of the mammalian brain. In this paper we consider the question whether a distributed network of sensors and data processors can form "perceptions" based on sensory data. Because sensory data can have exponentially many explanations, the use of a central data processor to analyze the outputs from a large ensemble of sensors will in general introduce unacceptable latencies for responding to dangerous situations. A better idea is to use a distributed "Helmholtz machine" architecture in which the sensors are connected to a network of simple processors, and the collective state of the network as a whole provides an explanation for the sensory data. In general communication within such a network will require time division multiplexing, which opens the door to the possibility that with certain refinements to the Helmholtz machine architecture it may be possible to build sensor networks that exhibit a form of artificial consciousness.

  11. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  12. Uncertainty Quantification in Aerodynamics Simulations Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  13. Data management and communication networks for man-machine interface system in Korea Advanced LIquid MEtal Reactor : Its functionality and design requirements

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Kyung Ho; Park, Gun Ok; Suh, Sang Moon; Kim, Jang Yeol; Kwon, Kee Choon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    The DAta management and COmmunication NETworks (DACONET), designed as a subsystem of the Man-Machine Interface System of the Korea Advanced LIquid MEtal Reactor (KALIMER MMIS) following an advanced design concept, is described. The roles of DACONET are to provide real-time data transmission and communication paths between MMIS systems, to provide quality data for protection, monitoring and control of KALIMER, and to log the static and dynamic behavioral data during KALIMER operation. DACONET is characterized as a distributed real-time system architecture with high performance. Future directions, in which advanced technology is being continually applied to Man-Machine Interface System development for Nuclear Power Plants, will be considered when designing the data management and communication networks of the KALIMER MMIS. 9 refs., 1 fig. (Author)

  14. Data requirements for road network inventory studies and road safety evaluations - guidelines and specifications. Road Infrastructure Safety Management Evaluation Tools (RISMET), Deliverable No. 3.

    OpenAIRE

    Candappa, N.L. Schermers, G. Stefan, C. & Elvik, R.

    2014-01-01

    Improving road safety is and has been a priority in most first world countries, with the result that road crashes and the resultant traffic injuries have thankfully been declining. However, improvements in road safety have also brought about new challenges for managing the remaining problems. One of these challenges is that the declining number of serious injury crashes means a sparser distribution on the network, whereby traditional reactive approaches such as black-spot analysis and remedial treat...

  15. Reproducible Research, Uncertainty Quantification, and Verification & Validation

    OpenAIRE

    Barba, Lorena A.

    2014-01-01

    Slides used with my presentation in the SIAM Uncertainty Quantification Conference 2014, Minisymposium on "The Reliability of Computational Research Findings: Reproducible Research, Uncertainty Quantification, and Verification & Validation." The talk used an audience response system to collect True/False or Yes/No opinions on 13 statements/questions: 1) Computer simulations create scientific knowledge.  2) Simulation is a method 3) A reproducible simulation does not need to be acc...

  16. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
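
    The simplest instance of a linear Bayesian update is the Gauss-Markov/Kalman form, in which the conditional mean is approximated by a linear map of the observation built from covariances. The sketch below estimates those covariances from a prior ensemble for illustration; the paper's sampling-free update instead works with functional/spectral representations.

```python
import numpy as np

rng = np.random.default_rng(8)

# Prior ensemble of an uncertain parameter q and the corresponding forward
# model predictions y = g(q); here g is a toy nonlinear map with observation noise.
q_prior = rng.normal(1.0, 0.5, 5000)
y_prior = q_prior ** 2 + rng.normal(0, 0.1, q_prior.size)

# Linear Bayesian update (Gauss-Markov/Kalman form): the conditional mean is
# approximated by q_post = q_prior + K (z - y_prior), with K built from covariances.
z = 1.44                                  # observed data
K = np.cov(q_prior, y_prior)[0, 1] / np.var(y_prior)
q_post = q_prior + K * (z - y_prior)

print(f"prior mean {q_prior.mean():.3f} +/- {q_prior.std():.3f}")
print(f"posterior mean {q_post.mean():.3f} +/- {q_post.std():.3f}")
```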

  18. Using networking and communications software in business

    CERN Document Server

    McBride, PK

    2014-01-01

    Using Networking and Communications Software in Business covers the importance of networks in a business firm, the benefits of computer communications within a firm, and the cost-benefit in putting up networks in businesses. The book is divided into six parts. Part I looks into the nature and varieties of networks, networking standards, and network software. Part II discusses the planning of a networked system, which includes analyzing the requirements for the network system, the hardware for the network, and network management. The installation of the network system and the network managemen

  19. Network Virtualization Protocols

    OpenAIRE

    Fornazarič, Nejc

    2014-01-01

    Server virtualization is a widespread and well-known technology that has fundamentally changed operations in data centers. Virtual servers and data storage can be provisioned quickly and easily. On the other hand, the network requires a lot of administrative changes and configurations that increase adoption time. The consequences of server virtualization are changed requirements for network resources; therefore, the next logical step is network virtualization. The different approaches for netwo...

  20. Performance of high-throughput DNA quantification methods

    Directory of Open Access Journals (Sweden)

    Chanock Stephen J

    2003-10-01

    Full Text Available Abstract Background The accuracy and precision of estimates of DNA concentration are critical factors for efficient use of DNA samples in high-throughput genotype and sequence analyses. We evaluated the performance of spectrophotometric (OD) DNA quantification, and compared it to two fluorometric quantification methods, the PicoGreen® assay (PG), and a novel real-time quantitative genomic PCR assay (QG) specific to a region at the human BRCA1 locus. Twenty-two lymphoblastoid cell line DNA samples with an initial concentration of ~350 ng/uL were diluted to 20 ng/uL. DNA concentration was estimated by OD and further diluted to 5 ng/uL. The concentrations of multiple aliquots of the final dilution were measured by the OD, QG and PG methods. The effects of manual and robotic laboratory sample handling procedures on the estimates of DNA concentration were assessed using variance components analyses. Results The OD method was the DNA quantification method most concordant with the reference sample among the three methods evaluated. A large fraction of the total variance for all three methods (36.0–95.7%) was explained by sample-to-sample variation, whereas the amount of variance attributable to sample handling was small (0.8–17.5%). Residual error (3.2–59.4%), corresponding to un-modelled factors, contributed to a greater extent to the total variation than the sample handling procedures. Conclusion The application of a specific DNA quantification method to a particular molecular genetic laboratory protocol must take into account the accuracy and precision of the specific method, as well as the requirements of the experimental workflow with respect to sample volumes and throughput. While OD was the most concordant and precise DNA quantification method in this study, the information provided by the quantitative PCR assay regarding the suitability of DNA samples for PCR may be an essential factor for some protocols, despite the decreased concordance and

  1. TMN-based network management systems for utility telecommunication networks. Pt. 7. Flexible information model for requirement acquisition; TMN ni motozuku denryoku tsushinmo no kanri system. 7. Un'yosha no yokyu wo junan ni torikomu joho model

    Energy Technology Data Exchange (ETDEWEB)

    Otani, T.; Yusa, H.; Yamaoka, K.

    2000-05-01

    The standardization organizations are specifying objects to bridge the knowledge gap that makes system construction more expensive and time-consuming. Although these objects provide general functions, specific requirements are hardly satisfied. In this paper, we propose a flexible information model for requirement acquisition that does not affect the standard objects. The proposed information model enables (1) a process corresponding to a requirement to be easily added and/or modified, (2) the status that invokes the process to be set, and (3) additional attributes and/or methods to be added without modifying other objects. We have constructed a model-based program that reschedules task order and measured its properties using software metrics. The results show that the proposed information model makes reuse of existing objects possible and facilitates the satisfaction of specific requirements. (author)

  2. Learning Networks, Networked Learning

    NARCIS (Netherlands)

    Sloep, Peter; Berlanga, Adriana

    2010-01-01

    Sloep, P. B., & Berlanga, A. J. (2011). Learning Networks, Networked Learning [Redes de Aprendizaje, Aprendizaje en Red]. Comunicar, XIX(37), 55-63. Retrieved from http://dx.doi.org/10.3916/C37-2011-02-05

  3. (Box-filling-model)-based ONU schedule algorithm and bandwidth-requirement-based ONU transfer mechanism for multi-subsystem-based VPONs' management in metro-access optical network

    Science.gov (United States)

    Zhang, Yuchao; Gan, Chaoqin; Gou, Kaiyu; Hua, Jian

    2017-07-01

    An ONU schedule algorithm and an ONU transfer mechanism for the management of multi-subsystem-based VPONs are proposed in this paper. To avoid frequent wavelength switching and achieve high system stability, the ONU schedule algorithm is presented for wavelength allocation by introducing a box-filling model. At the same time, a judgement mechanism is designed to filter out wavelength-increase requests caused by slight bandwidth fluctuations of a VPON. To share the remaining bandwidth among VPONs, the ONU transfer mechanism is put forward on the basis of flexible wavelength routing. To manage the wavelength resources of the entire network and the wavelength requirements from VPONs, an information-management matrix model is constructed. Finally, the effectiveness of the proposed scheme is demonstrated by simulation and analysis.
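
    The box-filling model is the paper's own construction; a generic first-fit packing of ONU bandwidth demands onto wavelengths of fixed capacity conveys the flavour of the allocation problem. The capacities and demands below are hypothetical.

```python
from typing import Dict, List

def first_fit_allocation(demands: Dict[str, float], capacity: float) -> List[List[str]]:
    """Assign ONU bandwidth demands to wavelengths first-fit, largest demand
    first.  A generic bin-packing sketch, not the paper's box-filling model."""
    wavelengths: List[List[str]] = []
    remaining: List[float] = []
    for onu, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(remaining):
            if demand <= free:
                wavelengths[i].append(onu)
                remaining[i] -= demand
                break
        else:
            wavelengths.append([onu])          # light a new wavelength
            remaining.append(capacity - demand)
    return wavelengths

# Example: ONU demands in Gbit/s packed onto 10 Gbit/s wavelengths.
demands = {"ONU1": 4.0, "ONU2": 3.5, "ONU3": 6.0, "ONU4": 2.0, "ONU5": 5.5}
print(first_fit_allocation(demands, capacity=10.0))
```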

  4. Optimization and Verification of Droplet Digital PCR Event-Specific Methods for the Quantification of GM Maize DAS1507 and NK603.

    Science.gov (United States)

    Grelewska-Nowotko, Katarzyna; Żurawska-Zajfert, Magdalena; Żmijewska, Ewelina; Sowa, Sławomir

    2017-11-06

    In recent years, digital polymerase chain reaction (dPCR), a new molecular biology technique, has been gaining in popularity. Among many other applications, this technique can also be used for the detection and quantification of genetically modified organisms (GMOs) in food and feed. It might replace the currently widely used real-time PCR method (qPCR) by overcoming problems related to PCR inhibition and the requirement for certified reference materials to be used as calibrants. In theory, validated qPCR methods can be easily transferred to the dPCR platform. However, optimization of the PCR conditions might be necessary. In this study, we report the transfer of two validated qPCR methods for quantification of maize DAS1507 and NK603 events to the droplet dPCR (ddPCR) platform. After some optimization, both methods were verified according to the guidance of the European Network of GMO Laboratories (ENGL) on analytical method verification (ENGL working group on "Method Verification." (2011) Verification of Analytical Methods for GMO Testing When Implementing Interlaboratory Validated Methods). The digital PCR methods performed equally well as or better than the qPCR methods. The optimized ddPCR methods confirm their suitability for GMO determination in food and feed.
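
    Unlike qPCR, dPCR quantifies without a calibration curve: the target concentration follows from the fraction of positive partitions via Poisson statistics, λ = -ln(1 - p), and the GM content is then the ratio of transgene to reference-gene copies. The droplet counts and droplet volume below are assumed for illustration.

```python
import math

def ddpcr_concentration(positive: int, total: int, droplet_volume_ul: float = 0.00085) -> float:
    """Copies per microlitre from Poisson statistics: lambda = -ln(1 - p)."""
    p = positive / total
    lam = -math.log(1.0 - p)               # mean copies per droplet
    return lam / droplet_volume_ul

# Hypothetical droplet counts for the transgene and the taxon-specific
# reference gene (a droplet volume of ~0.85 nL is typical of common systems).
transgene = ddpcr_concentration(positive=1800, total=15000)
reference = ddpcr_concentration(positive=9000, total=15000)

gm_content = 100.0 * transgene / reference   # GM content as a copy-number ratio (%)
print(f"transgene: {transgene:.0f} copies/uL, reference: {reference:.0f} copies/uL, "
      f"GM content: {gm_content:.1f}%")
```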

  5. GPU-accelerated voxelwise hepatic perfusion quantification.

    Science.gov (United States)

    Wang, H; Cao, Y

    2012-09-07

    Voxelwise quantification of hepatic perfusion parameters from dynamic contrast enhanced (DCE) imaging greatly contributes to assessment of liver function in response to radiation therapy. However, the efficiency of estimating hepatic perfusion parameters voxel-by-voxel in the whole liver using a dual-input single-compartment model requires substantial improvement for routine clinical applications. In this paper, we utilize the parallel computation power of a graphics processing unit (GPU) to accelerate the computation while maintaining the same accuracy as the conventional method. Using compute unified device architecture (CUDA), the hepatic perfusion computations over multiple voxels are run across the GPU blocks concurrently but independently. At each voxel, the nonlinear least-squares fitting of the liver DCE time series to the compartmental model is distributed to multiple threads in a block, and the computations for different time points are performed simultaneously and synchronously. An efficient fast Fourier transform in a block is also developed for the convolution computation in the model. The GPU computations of the voxel-by-voxel hepatic perfusion images are compared with CPU computations using simulated DCE data and experimental DCE MR images from patients. The computation speed is improved by 30 times using an NVIDIA Tesla C2050 GPU compared to a 2.67 GHz Intel Xeon CPU. To obtain liver perfusion maps with 626 400 voxels in a patient's liver, the GPU-accelerated voxelwise computation takes 0.9 min, compared to 110 min with the CPU, while the perfusion parameters obtained by the two methods differ by less than 10^-6. The method will be useful for generating liver perfusion images in clinical settings.
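
    The record above describes distributing independent nonlinear least-squares fits over GPU threads, one per voxel. The sketch below illustrates the same per-voxel parallel-fitting idea on the CPU with a deliberately simplified mono-exponential uptake model; the model, parameter values and the use of multiprocessing are illustrative assumptions, not the paper's dual-input single-compartment CUDA implementation.

```python
# Minimal sketch of voxelwise parallel curve fitting (CPU illustration of the
# GPU idea).  A simplified mono-exponential uptake model is fitted per voxel;
# the paper itself uses a dual-input single-compartment model on CUDA.
import numpy as np
from scipy.optimize import curve_fit
from multiprocessing import Pool

def uptake_model(t, amplitude, rate):
    """Hypothetical simplified enhancement curve: A * (1 - exp(-k t))."""
    return amplitude * (1.0 - np.exp(-rate * t))

def fit_voxel(args):
    t, signal = args
    try:
        params, _ = curve_fit(uptake_model, t, signal, p0=(1.0, 0.1), maxfev=2000)
        return params
    except RuntimeError:
        return np.array([np.nan, np.nan])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0, 60, 40)                      # acquisition times (s)
    n_voxels = 1000
    true_amp = rng.uniform(0.5, 2.0, n_voxels)
    true_rate = rng.uniform(0.02, 0.2, n_voxels)
    signals = true_amp[:, None] * (1 - np.exp(-true_rate[:, None] * t))
    signals += rng.normal(0, 0.02, signals.shape)   # acquisition noise

    with Pool() as pool:                            # one task per voxel, run concurrently
        results = pool.map(fit_voxel, [(t, s) for s in signals])
    params = np.array(results)
    print("median fitted rate:", np.nanmedian(params[:, 1]))
```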

  6. Utilization of remote sensing techniques for the quantification of fire behavior in two pine stands

    Science.gov (United States)

    Eric V. Mueller; Nicholas Skowronski; Kenneth Clark; Michael Gallagher; Robert Kremens; Jan C. Thomas; Mohamad El Houssami; Alexander Filkov; Rory M. Hadden; William Mell; Albert Simeoni

    2017-01-01

    Quantification of field-scale fire behavior is necessary to improve the current scientific understanding of wildland fires and to develop and test relevant, physics-based models. In particular, detailed descriptions of individual fires are required, for which the available literature is limited. In this work, two such field-scale experiments, carried out in pine stands...

  7. Separation and quantification of microalgal carbohydrates.

    Science.gov (United States)

    Templeton, David W; Quinn, Matthew; Van Wychen, Stefanie; Hyman, Deborah; Laurens, Lieve M L

    2012-12-28

    Structural carbohydrates can constitute a large fraction of the dry weight of algal biomass, and thus their accurate identification and quantification are important for summative mass closure. Two limitations to the accurate characterization of microalgal carbohydrates are the lack of a robust analytical procedure to hydrolyze polymeric carbohydrates to their respective monomers and the subsequent identification and quantification of those monosaccharides. We address the second limitation, chromatographic separation of monosaccharides, here by identifying optimum conditions for the resolution of a synthetic mixture of 13 microalgae-specific monosaccharides, comprising 8 neutral sugars, 2 amino sugars, 2 uronic acids and 1 alditol (myo-inositol as an internal standard). The synthetic 13-carbohydrate mix showed incomplete resolution across 11 traditional high performance liquid chromatography (HPLC) methods, but showed improved resolution and accurate quantification using anion exchange chromatography (HPAEC) as well as alditol acetate derivatization followed by gas chromatography (for the neutral and amino sugars only). We demonstrate the application of monosaccharide quantification using the optimized chromatography conditions after sulfuric acid analytical hydrolysis for three model algae strains, and compare the quantification and complexity of monosaccharides in the analytical hydrolysates relative to a typical terrestrial feedstock, sugarcane bagasse. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Quantification of trace-level DNA by real-time whole genome amplification.

    Directory of Open Access Journals (Sweden)

    Min-Jung Kang

    Full Text Available Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of sample are available for analysis. PCR-based methods, including real-time PCR, are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for sub-genomic amounts of DNA. We suggest a real-time whole genome amplification method adopting degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to human placental DNA, the amount of which was accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), an accurate and stable quantification capability was obtained for DNA samples ranging from 80 fg to 8 ng. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9%, with analytical precisions around 15%, were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.
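
    Since the record relies on real-time amplification for absolute DNA quantification, a minimal sketch of the generic standard-curve calculation (quantification cycle versus log input amount) may help; the amounts, Cq values and efficiency formula are illustrative assumptions, not data from the study.

```python
# Minimal sketch of real-time amplification quantification via a standard curve:
# fit Cq against log10(input DNA amount) for known standards, then invert the
# line to estimate an unknown.  All numbers below are illustrative, not from the paper.
import numpy as np

# Known standards: input amounts (pg) and measured quantification cycles (Cq)
amounts_pg = np.array([400.0, 40.0, 4.0, 0.4])
cq_values  = np.array([18.1, 21.5, 24.9, 28.3])

slope, intercept = np.polyfit(np.log10(amounts_pg), cq_values, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency estimate

def quantify(cq_unknown):
    """Estimate input amount (pg) for an observed Cq using the standard curve."""
    return 10 ** ((cq_unknown - intercept) / slope)

print(f"slope={slope:.2f}, efficiency={efficiency:.1%}")
print(f"unknown with Cq 23.0 -> {quantify(23.0):.2f} pg")
```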

  9. Comparison of extraction and quantification methods of perfluorinated compounds in human plasma, serum, and whole blood

    Energy Technology Data Exchange (ETDEWEB)

    Reagen, William K. [3M Environmental Laboratory, 3M Center, Building 0260-05-N-17, St. Paul, MN 55144-1000 (United States)], E-mail: wkreagen@mmm.com; Ellefson, Mark E. [3M Environmental Laboratory, 3M Center, Building 0260-05-N-17, St. Paul, MN 55144-1000 (United States); Kannan, Kurunthachalam [Wadsworth Center, New York State Department of Health and Department of Environmental Health Sciences (United States); State University of New York at Albany, NY 12201-0509 (United States); Giesy, John P. [Department of Veterinary Biomedical Sciences and Toxicology Centre, University of Saskatchewan, 44 Campus Drive, Saskatoon, SK (Canada); Department of Biology and Chemistry, Center for Coastal Pollution and Conservation, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong (China); Zoology Department, National Food Safety and Toxicology Center, Center for Integrative Toxicology, Michigan State University, E. Lansing, MI (United States); School of Environment, Nanjing University, Nanjing (China)

    2008-11-03

    Perfluorinated compounds are ubiquitous in the environment and have been reported to occur in human blood. Accurate risk assessments require accurate measurements of exposures, but identification and quantification of PFCs in biological matrices can be affected by both ion suppression and enhancement in liquid chromatography-tandem mass spectrometry techniques (LC/MS-MS). A study was conducted to quantify potential biases in LC/MS-MS quantification methods. Using isotopically labeled perfluorooctanoic acid ([¹³C₂]-PFOA), perfluorononanoic acid ([¹³C₂]-PFNA), and ammonium perfluorooctanesulfonate ([¹⁸O₂]-PFOS) spiked tissues, ion-pairing extraction, solid-phase extraction, and protein precipitation sample preparation techniques were compared. Analytical accuracy was assessed using both solvent calibration and matrix-matched calibration for quantification. Data accuracy and precision of 100 ± 15% was demonstrated in both human sera and plasma for all three sample preparation techniques when matrix-matched calibration was used in quantification. In contrast, quantification of ion-pairing extraction data using solvent calibration in combination with a surrogate internal standard resulted in significant analytical biases for all target analytes. The accuracy of results based on solvent calibration was highly variable and dependent on the serum and plasma matrices, the specific target analyte ([¹³C₂]-PFOA, [¹³C₂]-PFNA, or [¹⁸O₂]-PFOS), the target analyte concentration, the LC/MS-MS instrumentation used in data generation, and the specific surrogate internal standard used in quantification. These results suggest that concentrations of PFCs reported for human blood using surrogate internal standards in combination with external solvent calibration can be inaccurate unless biases are accounted for in data quantification.

  10. The Regulation of Cytokine Networks in Hippocampal CA1 Differentiates Extinction from Those Required for the Maintenance of Contextual Fear Memory after Recall.

    Directory of Open Access Journals (Sweden)

    Birger Scholz

    Full Text Available We investigated the distinctiveness of gene regulatory networks in CA1 associated with the extinction of contextual fear memory (CFM) after recall using Affymetrix GeneChip Rat Genome 230 2.0 Arrays. These data were compared to previously published retrieval and reconsolidation-attributed datasets and to consolidation datasets. A stringent dual normalization and pareto-scaled orthogonal partial least-squares discriminant multivariate analysis, together with a jack-knifing-based cross-validation approach, was used on all datasets to reduce false positives. Consolidation, retrieval and extinction were correlated with distinct patterns of gene expression 2 hours later. Extinction-related gene expression was most distinct from the profile accompanying consolidation. A highly specific feature was the discrete regulation of neuroimmunological gene expression associated with retrieval and extinction. Immunity-associated genes of the tyrosine kinase receptor, TGFβ, PDGF, and TNF families characterized extinction. Cytokines and proinflammatory interleukins of the IL-1 and IL-6 families were enriched in the no-extinction retrieval condition. We used comparative genomics to predict transcription factor binding sites in the proximal promoter regions of the retrieval-regulated genes. Retrieval that does not lead to extinction was associated with NF-κB-mediated gene expression. We confirmed differential NF-κB p65 expression and activity in all of a representative sample of our candidate genes in the no-extinction condition. The differential regulation of cytokine networks after the acquisition and retrieval of CFM identifies the important contribution that neuroimmune signalling plays in normal hippocampal function. Further, targeting cytokine signalling upon retrieval offers a therapeutic strategy to promote extinction mechanisms in human disorders characterised by dysregulation of associative memory.

  11. Protein quantification in the presence of poly(ethylene glycol) and dextran using the Bradford method.

    Science.gov (United States)

    Barbosa, Helder; Slater, Nigel K H; Marcos, João C

    2009-12-01

    Some experimental methodologies require the quantification of protein in the presence of polymers such as poly(ethylene glycol) (PEG) and dextran (DEX). In the aqueous two-phase system (ATPS) extraction of biomolecules, the interference of these phase-forming polymers with the Bradford quantification assay is commonly recognized. However, how these polymers interfere has not been reported hitherto. In this study we show that while dextran concentrations of 20% (w/w) can be used without error, loss of accuracy occurs for solutions with PEG concentrations >10% (w/w). Above this value a substantial decrease in assay sensitivity is observed.

  12. Network topology analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Kalb, Jeffrey L.; Lee, David S.

    2008-01-01

    Emerging high-bandwidth, low-latency network technology has made network-based architectures both feasible and potentially desirable for use in satellite payload architectures. The selection of network topology is a critical component when developing these multi-node or multi-point architectures. This study examines network topologies and their effect on overall network performance. Numerous topologies were reviewed against a number of performance, reliability, and cost metrics. This document identifies a handful of network topologies that are well suited to satellite applications and the metrics used to justify them as such. Since multiple topologies will often meet the requirements of the satellite payload architecture under development, the choice of network topology is not easy; in the end it is influenced by both the design characteristics and requirements of the overall system and the experience of the developer.
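
    As a small illustration of the kind of topology-versus-metric comparison the report describes, the sketch below scores a few 16-node candidate topologies on simple structural metrics; the chosen topologies and metrics are assumptions for demonstration and do not reproduce the report's performance, reliability and cost analysis.

```python
# Minimal sketch comparing candidate node topologies on a few structural metrics.
# The topologies and metrics are illustrative; the report evaluates many more
# (performance, reliability and cost) against satellite payload requirements.
import networkx as nx

candidates = {
    "ring":      nx.cycle_graph(16),
    "star":      nx.star_graph(15),              # 1 hub + 15 leaves = 16 nodes
    "hypercube": nx.hypercube_graph(4),          # 2^4 = 16 nodes
    "mesh_4x4":  nx.grid_2d_graph(4, 4),
}

for name, g in candidates.items():
    print(f"{name:10s} diameter={nx.diameter(g):2d} "
          f"avg_path={nx.average_shortest_path_length(g):.2f} "
          f"links={g.number_of_edges():3d} "
          f"min_cut={nx.node_connectivity(g)}")
```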

  13. Mixture quantification using PLS in plastic scintillation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bagan, H.; Tarancon, A.; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F., E-mail: jfgarcia@ub.ed [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2011-06-15

    This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares, PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), no attempt had been made using PS, which has the great advantage of not producing mixed waste after the measurements are performed. Following this objective, ternary mixtures of alpha and beta emitters (²⁴¹Am, ¹³⁷Cs and ⁹⁰Sr/⁹⁰Y) have been quantified. Procedure optimisation has evaluated the use of the net spectra or the sample spectra, the inclusion of different spectra obtained at different values of the pulse shape analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without the use of any pulse shape discrimination, allows quantification of the activities with relative errors of less than 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and its application does not require detectors that include the pulse shape analysis parameter.
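
    A minimal sketch of the multivariate-calibration step is given below: a PLS2 model is trained on synthetic ternary-mixture spectra and used to predict the three activities. The spectral shapes, noise levels and activity ranges are invented for illustration and are not the measured plastic scintillation spectra.

```python
# Minimal sketch of multivariate calibration with PLS2: synthetic "spectra" of
# ternary mixtures are regressed against the three component activities.
# Spectral shapes and activities are synthetic stand-ins for the measured PS spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
channels = np.arange(256)

def peak(center, width):
    return np.exp(-0.5 * ((channels - center) / width) ** 2)

# Pure-component spectral shapes (illustrative)
pure = np.vstack([peak(60, 15), peak(120, 25), peak(190, 30)])

# Calibration set: random activity mixtures and their noisy summed spectra
Y_cal = rng.uniform(0, 100, size=(60, 3))
X_cal = Y_cal @ pure + rng.normal(0, 0.5, size=(60, channels.size))

pls = PLSRegression(n_components=6)
pls.fit(X_cal, Y_cal)

# Quantify an "unknown" mixture
y_true = np.array([[30.0, 55.0, 15.0]])
x_unknown = y_true @ pure + rng.normal(0, 0.5, size=(1, channels.size))
print("estimated activities:", pls.predict(x_unknown).round(1))
```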

  14. Spatial gene expression quantification: a tool for analysis of in situ hybridizations in sea anemone Nematostella vectensis

    Directory of Open Access Journals (Sweden)

    Botman Daniel

    2012-10-01

    Full Text Available Abstract Background Spatial gene expression quantification is required for modeling gene regulation in developing organisms. The fruit fly Drosophila melanogaster is the model system most widely applied for spatial gene expression analysis due to its unique embryonic properties: the shape does not change significantly during its early cleavage cycles and most genes are differentially expressed along a straight axis. This system of development is quite exceptional in the animal kingdom. In the sea anemone Nematostella vectensis the embryo changes its shape during early development; there are cell divisions and cell movement, like in most other metazoans. Nematostella is an attractive case study for spatial gene expression since its transparent body wall makes it accessible to various imaging techniques. Findings Our new quantification method produces standardized gene expression profiles from raw or annotated Nematostella in situ hybridizations by measuring the expression intensity along its cell layer. The procedure is based on digital morphologies derived from high-resolution fluorescence pictures. Additionally, complete descriptions of nonsymmetric expression patterns have been constructed by transforming the gene expression images into a three-dimensional representation. Conclusions We created a standard format for gene expression data, which enables quantitative analysis of in situ hybridizations from embryos with various shapes in different developmental stages. The obtained expression profiles are suitable as input for optimization of gene regulatory network models, and for correlation analysis of genes from dissimilar Nematostella morphologies. This approach is potentially applicable to many other metazoan model organisms and may also be suitable for processing data from three-dimensional imaging techniques.

  15. Model complexities and requirements for multimodal transport network design: assessment of classical, state-of-the-practice, and state-of-the-research models

    NARCIS (Netherlands)

    van Eck, G.; Brands, Ties; Wismans, Luc Johannes Josephus; Pel, A.J.; van Nes, R.

    2014-01-01

    In the aim for a more sustainable transport system, governments try to stimulate multimodal trip making by facilitating smooth transfers between modes. The assessment of related multimodal policy measures requires transport models that are capable of handling the complex nature of multimodality.

  16. Model complexities and requirements for multimodal transport network design : Assessment of classical, state-of-the-practice, and state-of-the-research models

    NARCIS (Netherlands)

    Van Eck, G.; Brands, T.; Wismans, L.J.J.; Pel, A.J.; Van Nes, R.

    2014-01-01

    In the aim for a more sustainable transport system, governments try to stimulate multimodal trip making by facilitating smooth transfers between modes. The assessment of related multimodal policy measures requires transport models that are capable of handling the complex nature of multimodality.

  17. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, the Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than expected based on the information by the manufacturers. UV spectrometry, SYBR-Green dye staining, slot blot and RB1 rt-PCR gave 39, 27, 11 and 12%, respectively, higher concentrations than expected based on the manufacturers' information. The DNA preparations were quantified using the Quantifiler Human DNA Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...

  18. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region as pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  19. Virtualized Network Control (VNC)

    Energy Technology Data Exchange (ETDEWEB)

    Lehman, Thomas [Univ. of Southern California, Los Angeles, CA (United States); Guok, Chin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghani, Nasir [Univ. of New Mexico, Albuquerque, NM (United States)

    2013-01-31

    The focus of this project was on the development of a "Network Service Plane" as an abstraction model for the control and provisioning of multi-layer networks. The primary motivation for this work was the requirements of next generation networked applications, which will need to access advanced networking as a first class resource at the same level as compute and storage resources. A new class of "Intelligent Network Services" was defined in order to facilitate the integration of advanced network services into application specific workflows. This new class of network services is intended to enable real-time interaction between the application co-scheduling algorithms and the network for the purposes of workflow planning, real-time resource availability identification, scheduling, and provisioning actions.

  20. Class network routing

    Science.gov (United States)

    Bhanot, Gyan [Princeton, NJ; Blumrich, Matthias A [Ridgefield, CT; Chen, Dong [Croton On Hudson, NY; Coteus, Paul W [Yorktown Heights, NY; Gara, Alan G [Mount Kisco, NY; Giampapa, Mark E [Irvington, NY; Heidelberger, Philip [Cortlandt Manor, NY; Steinmacher-Burow, Burkhard D [Mount Kisco, NY; Takken, Todd E [Mount Kisco, NY; Vranas, Pavlos M [Bedford Hills, NY

    2009-09-08

    Class network routing is implemented in a network such as a computer network comprising a plurality of parallel compute processors at nodes thereof. Class network routing allows a compute processor to broadcast a message to a range (one or more) of other compute processors in the computer network, such as processors in a column or a row. Normally this type of operation requires a separate message to be sent to each processor. With class network routing pursuant to the invention, a single message is sufficient, which generally reduces the total number of messages in the network as well as the latency to do a broadcast. Class network routing is also applied to dense matrix inversion algorithms on distributed memory parallel supercomputers with hardware class function (multicast) capability. This is achieved by exploiting the fact that the communication patterns of dense matrix inversion can be served by hardware class functions, which results in faster execution times.

  1. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    Directory of Open Access Journals (Sweden)

    Ruolin Liu

    2017-11-01

    Full Text Available We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. Under the evaluation of a real data set, the transcript expression estimated by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.
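
    The quantification step described above assigns ambiguous reads to transcripts and re-estimates abundances iteratively. The sketch below shows the generic EM idea on a toy read-transcript compatibility matrix; it is a simplified stand-in, not Strawberry's latent class model or its splicing-graph machinery.

```python
# Minimal sketch of the EM idea behind read-to-transcript assignment: reads that
# are compatible with several transcripts are fractionally assigned according to
# current abundance estimates, which are then re-estimated.  The compatibility
# matrix and lengths below are toy values, not Strawberry's latent class model.
import numpy as np

# compat[r, t] = 1 if read r could have come from transcript t
compat = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 1, 1],
    [0, 0, 1],
], dtype=float)
lengths = np.array([1000.0, 1500.0, 800.0])   # effective transcript lengths

theta = np.full(3, 1.0 / 3.0)                 # initial relative abundances
for _ in range(200):
    # E-step: responsibility of each transcript for each read
    weights = compat * (theta / lengths)
    weights /= weights.sum(axis=1, keepdims=True)
    # M-step: re-estimate abundances from expected read counts
    expected_counts = weights.sum(axis=0)
    theta = expected_counts / expected_counts.sum()

print("estimated relative abundances:", theta.round(3))
```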

  2. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  3. Metal Stable Isotope Tagging: Renaissance of Radioimmunoassay for Multiplex and Absolute Quantification of Biomolecules.

    Science.gov (United States)

    Liu, Rui; Zhang, Shixi; Wei, Chao; Xing, Zhi; Zhang, Sichun; Zhang, Xinrong

    2016-05-17

    is the development and application of the mass cytometer, which fully exploited the multiplexing potential of metal stable isotope tagging. It realized the simultaneous detection of dozens of parameters in single cells, accurate immunophenotyping in cell populations, through modeling of intracellular signaling network and undoubted discrimination of function and connection of cell subsets. Metal stable isotope tagging has great potential applications in hematopoiesis, immunology, stem cells, cancer, and drug screening related research and opened a post-fluorescence era of cytometry. Herein, we review the development of biomolecule quantification using metal stable isotope tagging. Particularly, the power of multiplex and absolute quantification is demonstrated. We address the advantages, applicable situations, and limitations of metal stable isotope tagging strategies and propose suggestions for future developments. The transfer of enzymatic or fluorescent tagging to metal stable isotope tagging may occur in many aspects of biological and clinical practices in the near future, just as the revolution from radioactive isotope tagging to fluorescent tagging happened in the past.

  4. Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule and Other Revisions to Part B for CY 2017; Medicare Advantage Bid Pricing Data Release; Medicare Advantage and Part D Medical Loss Ratio Data Release; Medicare Advantage Provider Network Requirements; Expansion of Medicare Diabetes Prevention Program Model; Medicare Shared Savings Program Requirements. Final rule.

    Science.gov (United States)

    2016-11-15

    This major final rule addresses changes to the physician fee schedule and other Medicare Part B payment policies, such as changes to the Value Modifier, to ensure that our payment systems are updated to reflect changes in medical practice and the relative value of services, as well as changes in the statute. This final rule also includes changes related to the Medicare Shared Savings Program, requirements for Medicare Advantage Provider Networks, and provides for the release of certain pricing data from Medicare Advantage bids and of data from medical loss ratio reports submitted by Medicare health and drug plans. In addition, this final rule expands the Medicare Diabetes Prevention Program model.

  5. Designing Networked Energy Infrastructures with Architectural Flexibility

    NARCIS (Netherlands)

    Melese, Y.G.; Heijnen, P.W.; Stikkelman, R.M.

    2014-01-01

    Development of networked energy infrastructures (like gas pipe networks) generally requires a significant amount of capital investment under resource, market and institutional uncertainties. Several independent suppliers and consumers are to be connected into these networks. However, the actual

  6. Robust design requirements specification: a quantitative method for requirements development using quality loss functions

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Christensen, Martin Ebro; Howard, Thomas J.

    2016-01-01

    the requirements are for interpretation. By applying the method and indicator to a case study from the medical device industry, it was found that less than 45% of the potential for quantification had been utilised. Finally, the robust design requirements specification method was successfully applied to three case...

  7. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    Jun 15, 2011 ... One of the most important problems in producing chili crops is the presence of diseases caused by pathogenic agents such as viruses; therefore, there is a substantial need to better predict the behavior of the diseases of these crops by determining a more precise quantification of the disease syndrome that ...

  8. The quantificational asymmetry: A comparative look

    NARCIS (Netherlands)

    van Koert, M.; Koeneman, O.; Weerman, F.; Hulk, A.

    2015-01-01

    The traditional account of the Delay of Principle B Effect (DPBE) predicts that all languages that show a DPBE will also reveal a Quantificational Asymmetry (QA). Children's performance on object-pronouns must therefore improve when a QP-subject replaces the NP-subject. These QA results have been

  9. Quantification of glycyrrhizin biomarker in Glycyrrhiza glabra ...

    African Journals Online (AJOL)

    Background: A simple and sensitive thin-layer chromatographic method has been established for the quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations using a validated reverse-phase HPTLC method. Materials and Methods: The RP-HPTLC method was carried out using glass coated with RP-18 ...

  10. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
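
    To illustrate the basic operation discussed in the record, the sketch below thresholds a synthetic image by hue and saturation rather than grey-level intensity and reports the selected area; the colours, thresholds and image are assumptions for demonstration only.

```python
# Minimal sketch of colour thresholding: pixels are selected by hue range rather
# than by grey-level intensity, and the selected area is quantified.  The image
# here is synthetic; in practice it would be a captured microscopy frame.
import numpy as np
from matplotlib.colors import rgb_to_hsv

rng = np.random.default_rng(2)
img = rng.uniform(0.6, 0.9, size=(200, 200, 3))        # pale background
img[60:120, 40:160] = [0.55, 0.15, 0.20]               # reddish "reaction product"

hsv = rgb_to_hsv(img)
hue, sat = hsv[..., 0], hsv[..., 1]

# Keep pixels whose hue falls in the red range and that are clearly saturated
red_mask = ((hue < 0.05) | (hue > 0.95)) & (sat > 0.3)

stained_fraction = red_mask.mean()
print(f"stained area: {red_mask.sum()} px ({stained_fraction:.1%} of the field)")
```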

  11. Quantification of 20-hydroxyeicosatetraenoic acid by colorimetric ...

    Indian Academy of Sciences (India)

    Quantification of 20-hydroxyeicosatetraenoic acid by colorimetric competitive enzyme linked immunosorbent assay. Harry E Grates Richard M Mc Gowen ... Assays were developed with and without a proprietary enhancer solution which allows for the extraction-free measurement of 20-HETE in urine samples. The bound ...

  12. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
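
    A minimal sketch of single-band quantification follows: constituent content is regressed on reflectance at the 695 nm band mentioned in the abstract, using an entirely synthetic calibration set; the regression coefficients and reflectance values are illustrative, not the study's data.

```python
# Minimal sketch of single-band quantification: regress constituent content on
# reflectance at the selected wavelength (695 nm per the abstract) using a
# synthetic calibration set, then predict new samples.  All values are illustrative.
import numpy as np

rng = np.random.default_rng(3)
n = 40
thc_content = rng.uniform(0.2, 12.0, n)                    # % dry weight (synthetic)
reflect_695 = 0.35 - 0.012 * thc_content + rng.normal(0, 0.01, n)

slope, intercept = np.polyfit(reflect_695, thc_content, 1)

def predict_thc(reflectance_at_695nm):
    return slope * reflectance_at_695nm + intercept

r = np.corrcoef(reflect_695, thc_content)[0, 1]
print(f"calibration r^2 = {r**2:.2f}")
print(f"reflectance 0.30 -> predicted THC {predict_thc(0.30):.1f} %")
```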

  13. Measure the structure similarity of nodes in complex networks based on relative entropy

    Science.gov (United States)

    Zhang, Qi; Li, Meizhu; Deng, Yong

    2018-02-01

    Similarity of nodes is a basic structural quantification in complex networks. Many methods in complex network research are based on node similarity, such as node classification, community structure detection, link prediction and so on. Therefore, how to measure node similarity is an important problem in complex networks. In this paper, a new method is proposed to measure nodes' structure similarity based on relative entropy and each node's local structure. In the new method, each node's structure feature is quantified as a special kind of information, and the quantification of similarity between a pair of nodes is recast as the quantification of similarity of their structural information. Relative entropy is then used to measure the difference between each pair of nodes' structural information, and this value of relative entropy is used to measure the nodes' structure similarity in the complex network. Compared with existing methods, the new method measures nodes' structure similarity more accurately.
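
    One plausible way to realise the idea in the abstract is sketched below: each node's local structure is summarised as the degree distribution of its ego network and pairs of nodes are compared with a symmetrised relative entropy; the specific choice of local-structure descriptor is an assumption, not necessarily the paper's exact formulation.

```python
# Minimal sketch of structure similarity via relative entropy: each node's local
# structure is summarised as the degree distribution of its ego network, and
# pairs of nodes are compared with a symmetrised Kullback-Leibler divergence.
# This is one plausible realisation of the idea, not the paper's exact definition.
import networkx as nx
import numpy as np

def local_degree_distribution(g, node, max_degree):
    ego = nx.ego_graph(g, node)
    hist = np.zeros(max_degree + 1)
    for _, d in ego.degree():
        hist[d] += 1
    hist += 1e-9                      # smoothing so the KL divergence is finite
    return hist / hist.sum()

def structural_divergence(g, u, v):
    max_degree = max(dict(g.degree()).values())
    p = local_degree_distribution(g, u, max_degree)
    q = local_degree_distribution(g, v, max_degree)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * (kl(p, q) + kl(q, p))   # symmetrised; 0 means identical local structure

g = nx.karate_club_graph()
print("divergence(0, 33):", round(structural_divergence(g, 0, 33), 4))
print("divergence(5, 6): ", round(structural_divergence(g, 5, 6), 4))
```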

  14. Sensor Network Motes:

    DEFF Research Database (Denmark)

    Leopold, Martin

    This dissertation describes our efforts to improve sensor network performance evaluation and portability, within the context of the sensor network project Hogthrob. In Hogthrob, we faced the challenge of building a sensor network architecture for sow monitoring. This application has hard requirements on price and performance, and shows great potential for using sensor networks. Throughout the project we let the application requirements guide our design choices, leading us to push the technologies further to meet the specific goal of the application. In this dissertation, we attack two key ... In addition, we present our results from porting the highly popular sensor network operating system TinyOS to a new and emerging system-on-a-chip based platform. Moving the sensor network field towards the use of system-on-a-chip devices has large potential in terms of price and performance. We claim to have...

  15. An uncertainty inventory demonstration - a primary step in uncertainty quantification

    Energy Technology Data Exchange (ETDEWEB)

    Langenbrunner, James R. [Los Alamos National Laboratory; Booker, Jane M [Los Alamos National Laboratory; Hemez, Francois M [Los Alamos National Laboratory; Salazar, Issac F [Los Alamos National Laboratory; Ross, Timothy J [UNM

    2009-01-01

    Tools, methods, and theories for assessing and quantifying uncertainties vary by application. Uncertainty quantification tasks have unique desiderata and circumstances. To realistically assess uncertainty requires the engineer/scientist to specify mathematical models, the physical phenomena of interest, and the theory or framework for assessments. For example, Probabilistic Risk Assessment (PRA) specifically identifies uncertainties using probability theory, and therefore PRAs lack formal procedures for quantifying uncertainties that are not probabilistic. The Phenomena Identification and Ranking Technique (PIRT) proceeds by ranking phenomena using scoring criteria that result in linguistic descriptors, such as importance ranked with the words 'High/Medium/Low.' The use of words allows PIRT to be flexible, but the analysis may then be difficult to combine with other uncertainty theories. We propose that a necessary step for the development of a procedure or protocol for uncertainty quantification (UQ) is the application of an Uncertainty Inventory. An Uncertainty Inventory should be considered and performed in the earliest stages of UQ.

  16. Absolute Quantification of Amyloid Propagons by Digital Microfluidics.

    Science.gov (United States)

    Pfammatter, Manuela; Andreasen, Maria; Meisl, Georg; Taylor, Christopher G; Adamcik, Jozef; Bolisetty, Sreenath; Sánchez-Ferrer, Antoni; Klenerman, David; Dobson, Christopher M; Mezzenga, Raffaele; Knowles, Tuomas P J; Aguzzi, Adriano; Hornemann, Simone

    2017-11-21

    The self-replication of proteins into amyloid fibrils is a common phenomenon and underlies a variety of neurodegenerative diseases. Because propagation-active fibrils are chemically indistinguishable from innocuous aggregates and monomeric precursors, their detection requires measurements of their replicative capacity. Here we present a digital amyloid quantitative assay (d-AQuA), with insulin as a model protein, for the absolute quantification of single replicative units, propagons. D-AQuA is a microfluidics-based technology that performs miniaturized simultaneous propagon-induced amplification chain reactions within hundreds to thousands of picoliter-sized droplets. At limiting dilutions, the d-AQuA reactions follow a stochastic regime indicative of the detection of single propagons. D-AQuA thus enables absolute quantification of single propagons present in a given sample at very low concentrations. The number of propagons quantified by d-AQuA was similar to the number of fibrillar insulin aggregates detected by atomic-force microscopy and to that obtained with an equivalent microplate-based assay, providing independent evidence for the identity of insulin propagons with a subset of morphologically defined protein aggregates. The sensitivity, precision, and accuracy of d-AQuA enable it to be suitable for multiple biotechnological and medical applications.
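
    Digital assays of this kind typically recover absolute counts from the fraction of negative partitions via Poisson statistics; the sketch below shows that calculation with assumed droplet counts and an assumed droplet volume, purely for illustration.

```python
# Minimal sketch of absolute quantification in a digital (droplet) assay: with
# propagons distributed randomly over droplets, the mean number per droplet is
# recovered from the fraction of negative droplets via Poisson statistics.
# The droplet volume and counts below are illustrative.
import math

def propagons_per_droplet(n_total, n_positive):
    """Poisson estimate of the mean number of replicative units per droplet."""
    fraction_negative = (n_total - n_positive) / n_total
    return -math.log(fraction_negative)

n_total, n_positive = 5000, 1200          # droplets analysed / droplets that amplified
droplet_volume_l = 85e-12                 # ~85 pL per droplet (assumed)

lam = propagons_per_droplet(n_total, n_positive)
concentration_per_ml = lam / droplet_volume_l / 1000.0
print(f"mean propagons per droplet: {lam:.3f}")
print(f"estimated concentration: {concentration_per_ml:.3e} propagons/mL")
```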

  17. Capacity Limit, Link Scheduling and Power Control in Wireless Networks

    Science.gov (United States)

    Zhou, Shan

    2013-01-01

    The rapid advancement of wireless technology has instigated the broad deployment of wireless networks. Different types of networks have been developed, including wireless sensor networks, mobile ad hoc networks, wireless local area networks, and cellular networks. These networks have different structures and applications, and require different…

  18. Learning OpenStack networking (Neutron)

    CERN Document Server

    Denton, James

    2014-01-01

    If you are an OpenStack-based cloud operator with experience in OpenStack Compute and nova-network but are new to Neutron networking, then this book is for you. Some networking experience is recommended, and a physical network infrastructure is required to provide connectivity to instances and other network resources configured in the book.

  19. Wireless networked music performance

    CERN Document Server

    Gabrielli, Leonardo

    2016-01-01

    This book presents a comprehensive overview of the state of the art in Networked Music Performance (NMP) and a historical survey of computer music networking. It introduces current technical trends in NMP and technical issues yet to be addressed. It also lists wireless communication protocols and compares these to the requirements of NMP. Practical use cases and advancements are also discussed.

  20. Collaborative learning in networks.

    Science.gov (United States)

    Mason, Winter; Watts, Duncan J

    2012-01-17

    Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.

  1. A computer-aided detection system for rheumatoid arthritis MRI data interpretation and quantification of synovial activity

    DEFF Research Database (Denmark)

    Kubassove, Olga; Boesen, Mikael; Cimmino, Marco A

    2009-01-01

    RATIONALE AND OBJECTIVE: Disease assessment and follow-up of rheumatoid arthritis (RA) patients require objective evaluation and quantification. Magnetic resonance imaging (MRI) has a large potential to supplement such information for the clinician; however, time spent on data reading and interpretation ... Dynamika-RA, which incorporates efficient data processing and analysis techniques.

  2. Network testbed creation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-03-21

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  3. Network testbed creation and validation

    Science.gov (United States)

    Thai, Tan Q.; Urias, Vincent; Van Leeuwen, Brian P.; Watts, Kristopher K.; Sweeney, Andrew John

    2017-04-18

    Embodiments of network testbed creation and validation processes are described herein. A "network testbed" is a replicated environment used to validate a target network or an aspect of its design. Embodiments describe a network testbed that comprises virtual testbed nodes executed via a plurality of physical infrastructure nodes. The virtual testbed nodes utilize these hardware resources as a network "fabric," thereby enabling rapid configuration and reconfiguration of the virtual testbed nodes without requiring reconfiguration of the physical infrastructure nodes. Thus, in contrast to prior art solutions which require that a tester manually build an emulated environment of physically connected network devices, embodiments receive or derive a target network description and build out a replica of this description using virtual testbed nodes executed via the physical infrastructure nodes. This process allows for the creation of very large (e.g., tens of thousands of network elements) and/or very topologically complex test networks.

  4. Hepatic Iron Quantification on 3 Tesla (3 T) Magnetic Resonance (MR): Technical Challenges and Solutions

    Directory of Open Access Journals (Sweden)

    Muhammad Anwar

    2013-01-01

    Full Text Available MR has become a reliable and noninvasive method of hepatic iron quantification. Currently, most of the hepatic iron quantification is performed on 1.5 T MR, and the biopsy measurements have been paired with R2 and R2* values for 1.5 T MR. As the use of 3 T MR scanners is steadily increasing in clinical practice, it has become important to evaluate the practicality of calculating iron burden at 3 T MR. Hepatic iron quantification on 3 T MR requires a better understanding of the process and more stringent technical considerations. The purpose of this work is to focus on the technical challenges in establishing a relationship between T2* values at 1.5 T MR and 3 T MR for hepatic iron concentration (HIC and to develop an appropriately optimized MR protocol for the evaluation of T2* values in the liver at 3 T magnetic field strength. We studied 22 sickle cell patients using multiecho fast gradient-echo sequence (MFGRE 3 T MR and compared the results with serum ferritin and liver biopsy results. Our study showed that the quantification of hepatic iron on 3 T MRI in sickle cell disease patients correlates well with clinical blood test results and biopsy results. 3 T MR liver iron quantification based on MFGRE can be used for hepatic iron quantification in transfused patients.
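
    Iron quantification from multi-echo gradient-echo data rests on fitting the signal decay across echo times; the sketch below fits a mono-exponential S(TE) = S0·exp(-TE/T2*) to synthetic data and reports R2*; the echo times, signal values and noise are illustrative assumptions.

```python
# Minimal sketch of T2* estimation from multi-echo gradient-echo data: fit a
# mono-exponential decay S(TE) = S0 * exp(-TE / T2*) per voxel and report
# R2* = 1000 / T2* (in 1/s when TE is in ms).  Echo times and signals are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def decay(te_ms, s0, t2star_ms):
    return s0 * np.exp(-te_ms / t2star_ms)

te_ms = np.array([1.2, 2.4, 3.6, 4.8, 6.0, 7.2, 8.4, 9.6])   # echo times (ms)
true_t2star = 4.5                                             # heavy iron loading (ms)
rng = np.random.default_rng(4)
signal = decay(te_ms, 1000.0, true_t2star) + rng.normal(0, 5.0, te_ms.size)

(p_s0, p_t2star), _ = curve_fit(decay, te_ms, signal, p0=(signal[0], 10.0))
print(f"fitted T2* = {p_t2star:.2f} ms, R2* = {1000.0 / p_t2star:.0f} 1/s")
```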

  5. WaveletQuant, an improved quantification software based on wavelet signal threshold de-noising for labeled quantitative proteomic analysis

    Directory of Open Access Journals (Sweden)

    Li Song

    2010-04-01

    Full Text Available Abstract Background Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for better or alternative MS-based proteomic quantification. Results We developed a novel discrete wavelet transform (DWT and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program in the Trans-Proteomic Pipeline (TPP, a commonly used open source proteomics analysis pipeline. Conclusions We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than the ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.
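
    To illustrate the wavelet de-noising step described above, the sketch below applies a standard discrete wavelet transform with soft thresholding (using PyWavelets and the universal threshold) to a synthetic noisy peak; it demonstrates the general principle, not WaveletQuant's 'Spatial Adaptive Algorithm'.

```python
# Minimal sketch of wavelet threshold de-noising of a noisy peak profile using
# PyWavelets: decompose, soft-threshold the detail coefficients, reconstruct.
# This illustrates the general DWT de-noising step, not WaveletQuant's specific
# 'Spatial Adaptive Algorithm'.
import numpy as np
import pywt

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 512)
clean = np.exp(-0.5 * ((x - 0.5) / 0.03) ** 2)        # idealised elution peak
noisy = clean + rng.normal(0, 0.05, x.size)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest scale
threshold = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")[: noisy.size]

print("rms error noisy   :", np.sqrt(np.mean((noisy - clean) ** 2)).round(4))
print("rms error denoised:", np.sqrt(np.mean((denoised - clean) ** 2)).round(4))
```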

  6. Scalable Virtual Network Mapping Algorithm for Internet-Scale Networks

    Science.gov (United States)

    Yang, Qiang; Wu, Chunming; Zhang, Min

    The proper allocation of network resources from a common physical substrate to a set of virtual networks (VNs) is one of the key technical challenges of network virtualization. While a variety of state-of-the-art algorithms have been proposed in an attempt to address this issue from different facets, the challenge still remains in the context of large-scale networks, as the existing solutions mainly perform in a centralized manner which requires maintaining the overall and up-to-date information of the underlying substrate network. This implies restricted scalability and computational efficiency when the network scale becomes large. This paper tackles the virtual network mapping problem and proposes a novel hierarchical algorithm in conjunction with a substrate network decomposition approach. By appropriately transforming the underlying substrate network into a collection of sub-networks, the hierarchical virtual network mapping algorithm can be carried out through a global virtual network mapping algorithm (GVNMA) and a local virtual network mapping algorithm (LVNMA), operated in the network central server and within individual sub-networks, respectively, with their cooperation and coordination as necessary. The proposed algorithm is assessed against centralized approaches through a set of numerical simulation experiments for a range of network scenarios. The results show that the proposed hierarchical approach can be about 5-20 times faster for VN mapping tasks than conventional centralized approaches, with acceptable communication overhead between the GVNMA and LVNMA for all examined networks, whilst performing almost as well as the centralized solutions.

  7. Quantification of rural livelihood dynamics

    DEFF Research Database (Denmark)

    Walelign, Solomon Zena

    Improved understanding of rural livelihoods is required to reduce rural poverty faster. To that end, this PhD study quantified rural livelihood dynamics, emphasizing (i) the role of environmental resource use in helping rural households to escape poverty, (ii) development of a new approach ... role in lifting the poor out of poverty, which could be due to restricted access to more remunerative environmental resources, (ii) the developed approach for livelihood clustering (combining household income and asset variables using regression models) outperforms both existing income and asset approaches, (iii) ... 'non-movers' were more important than 'movers' to rural livelihood studies, and the cost of tracking 'non-movers' was negligible relative to the cost of tracking 'movers'. Hence, from the viewpoint of poverty reduction, the study recommends that (i) access restrictions should be loosened in order to enhance the role...

  8. Capacitive immunosensor for C-reactive protein quantification

    KAUST Repository

    Sapsanis, Christos

    2015-08-02

    We report an agglutination-based immunosensor for the quantification of C-reactive protein (CRP). The developed immunoassay sensor requires approximately 15 minutes of assay time per sample and provides a sensitivity of 0.5 mg/L. We have measured the capacitance of interdigitated electrodes (IDEs) and quantified the concentration of added analyte. The proposed method is a label free detection method and hence provides rapid measurement preferable in diagnostics. We have so far been able to quantify the concentration to as low as 0.5 mg/L and as high as 10 mg/L. By quantifying CRP in serum, we can assess whether patients are prone to cardiac diseases and monitor the risk associated with such diseases. The sensor is a simple low cost structure and it can be a promising device for rapid and sensitive detection of disease markers at the point-of-care stage.

  9. Uncertainty quantification in computational fluid dynamics and aircraft engines

    CERN Document Server

    Montomoli, Francesco; D'Ammaro, Antonio; Massini, Michela; Salvadori, Simone

    2015-01-01

    This book introduces novel design techniques developed to increase the safety of aircraft engines. The authors demonstrate how the application of uncertainty methods can overcome problems in the accurate prediction of engine lift, caused by manufacturing error. This in turn ameliorates the difficulty of achieving required safety margins imposed by limits in current design and manufacturing methods. This text shows that even state-of-the-art computational fluid dynamics (CFD) are not able to predict the same performance measured in experiments; CFD methods assume idealised geometries but ideal geometries do not exist, cannot be manufactured and their performance differs from real-world ones. By applying geometrical variations of a few microns, the agreement with experiments improves dramatically, but unfortunately the manufacturing errors in engines or in experiments are unknown. In order to overcome this limitation, uncertainty quantification considers the probability density functions of manufacturing errors...

  10. Protocol for Quantification of Defects in Natural Fibres for Composites

    DEFF Research Database (Denmark)

    Mortensen, Ulrich Andreas; Madsen, Bo

    2014-01-01

    to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu’s method is found to be a more appropriate method than the alternative coefficient of variation method. The traditional way of defining defect size by area is compared to the definition......Natural bast-type plant fibres are attracting increasing interest for being used for structural composite applications where high quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based...... on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu’s method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres to give mean defect contents of 6.9 and 3.9%, a difference which is tested...
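
    As a rough illustration of the kind of image-analysis step the protocol relies on, the hedged sketch below applies Otsu's global threshold to a grayscale micrograph and reports a defect area fraction. The file name and the assumption that defects appear brighter than the fibre background are illustrative; the actual protocol also uses seeded region growing and a specific defect-size definition.

```python
# Hedged sketch: estimate a defect content (area %) from a grayscale fibre
# micrograph with Otsu's threshold. The file name and the assumption that
# defect regions appear brighter than the fibre background are illustrative.
import numpy as np
from skimage import io, filters

image = io.imread("flax_fibre.png", as_gray=True)   # hypothetical input image
threshold = filters.threshold_otsu(image)           # Otsu's global threshold
defect_mask = image > threshold                      # bright pixels ~ defects

defect_area_percent = 100.0 * defect_mask.sum() / defect_mask.size
print(f"Defect content: {defect_area_percent:.1f}% of the imaged area")
```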

  11. Declarative Networking

    CERN Document Server

    Loo, Boon Thau

    2012-01-01

    Declarative Networking is a programming methodology that enables developers to concisely specify network protocols and services, which are directly compiled to a dataflow framework that executes the specifications. Declarative networking proposes the use of a declarative query language for specifying and implementing network protocols, and employs a dataflow framework at runtime for communication and maintenance of network state. The primary goal of declarative networking is to greatly simplify the process of specifying, implementing, deploying and evolving a network design. In addition, decla

  12. Beyond Space For Spatial Networks

    CERN Document Server

    Expert, Paul; Blondel, Vincent D; Lambiotte, Renaud

    2010-01-01

    Many complex systems are organized in the form of a network embedded in space. Important examples include the physical Internet infrastructure, road networks, flight connections, brain functional networks and social networks. The effect of space on network topology has recently come under the spotlight because of the emergence of pervasive technologies based on geo-localization, which constantly fill databases with people's movements and thus reveal their trajectories and spatial behaviour. Extracting patterns and regularities from the resulting massive amount of human mobility data requires the development of appropriate tools for uncovering information in spatially-embedded networks. In contrast with most works that tend to apply standard network metrics to any type of network, we argue in this paper for a careful treatment of the constraints imposed by space on network topology. In particular, we focus on the problem of community detection and propose a modularity function adapted to spatial networks. We sh...

  13. Network security risk level

    Directory of Open Access Journals (Sweden)

    Emil BURTESCU

    2006-01-01

    Full Text Available The advantages of having a computer network within any sizeable company are obvious. But building and operating a network without meeting at least minimum security requirements, even though optimal security would be preferable, can impair the performance of the company's business. The vulnerability of a grouping such as a network is determined by its weakest point. Establishing the risk level of each component of the network, and implicitly of the grouping as a whole, is therefore highly necessary.

  14. National Transparent Optical Network Consortium (NTONC)

    National Research Council Canada - National Science Library

    Daspit, Paul

    2004-01-01

    ... (DWDM) transport, switching technologies and control strategies required to develop, deploy and operate the terabit per second optical networks needed to meet requirements of Next Generation Internet applications...

  15. Achievement of Weight Loss and Other Requirements of the Diabetes Prevention and Recognition Program: A National Diabetes Prevention Program Network Based on Nationally Certified Diabetes Self-management Education Programs.

    Science.gov (United States)

    DiBenedetto, Joanna Craver; Blum, Natalie M; O'Brian, Catherine A; Kolb, Leslie E; Lipman, Ruth D

    2016-12-01

    The purpose of this report is (1) to describe the use of the American Association of Diabetes Educators' (AADE's) model of implementation of the National Diabetes Prevention Program through nationally certified diabetes self-management education (DSME) programs and (2) to report the aggregated program outcomes as defined by the Diabetes Prevention and Recognition Program standards of the Centers for Disease Control and Prevention (CDC). In 2012, the AADE worked with the CDC to select 30 certified DSME programs for National Diabetes Prevention Program delivery. For the following 3 years, the AADE continued to work with 25 of the 30 original programs. Results for all CDC recognition standards have been collected from these 25 programs and analyzed as aggregated data over the course of 36 months. At the end of the full-year program, average percentage body weight loss for participants across all 25 programs exceeded the CDC's minimum requirement of 5% weight loss. All programs on average met the CDC requirements for program attendance. Increasing access to the National Diabetes Prevention Program, through an array of networks, including certified DSME programs, will better ensure that people are able to engage in an effective approach to reducing their risk of diabetes. © 2016 The Author(s).

  16. Design and value of service oriented technologies for smart business networking

    NARCIS (Netherlands)

    Alt, R.; Smits, M.T.; Beverungen, D.; Tuunanen, T.; Wijnhoven, F.

    2014-01-01

    Business networks that effectively use technologies and outperform competing networks are known as smart business networks. Theory hypothesizes that smart business networking requires a ‘Networked Business Operating System’ (NBOS), a technological architecture consisting of business logic, that

  17. Underage Children and Social Networking

    Science.gov (United States)

    Weeden, Shalynn; Cooke, Bethany; McVey, Michael

    2013-01-01

    Despite minimum age requirements for joining popular social networking services such as Facebook, many students misrepresent their real ages and join as active participants in the networks. This descriptive study examines the use of social networking services (SNSs) by children under the age of 13. The researchers surveyed a sample of 199…

  18. Requirements management: A CSR's perspective

    Science.gov (United States)

    Thompson, Joanie

    1991-01-01

    The following subject areas are covered: a customer service overview of network service request processing; the Customer Service Representative (CSR) responsibility matrix; an extract from a sample Memorandum of Understanding; the Network Service Request Form and its instructions; a sample notification of receipt; and requirements management in the NASA Science Internet.

  19. Advancing image quantification methods and tools for analysis of nanoparticle electrokinetics

    Directory of Open Access Journals (Sweden)

    D. J. Bakewell

    2013-10-01

    Full Text Available Image processing methods and techniques for high-throughput quantification of dielectrophoretic (DEP collections onto planar castellated electrode arrays are developed and evaluated. Fluorescence-based dielectrophoretic spectroscopy is an important tool for laboratory investigations of AC electrokinetic properties of nanoparticles. This paper details new, first principle, theoretical and experimental developments of geometric feature recognition techniques that enable quantification of positive dielectrophoretic (pDEP nanoparticle collections onto castellated arrays. As an alternative to the geometric-based method, novel statistical methods that do not require any information about array features, are also developed using the quantile and standard deviation functions. Data from pDEP collection and release experiments using 200 nm diameter latex nanospheres demonstrates that pDEP quantification using the statistic-based methods yields quantitatively similar results to the geometric-based method. The development of geometric- and statistic-based quantification methods enables high-throughput, supervisor-free image processing tools critical for dielectrophoretic spectroscopy and automated DEP technology development.
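
    A minimal sketch of the geometry-free, statistic-based idea described above: summarise each fluorescence frame by its intensity standard deviation and an upper quantile, with no information about the electrode array. The synthetic frame and the choice of the 0.95 quantile are assumptions, not the authors' implementation.

```python
# Hedged sketch of the "statistic-based" idea: summarise a fluorescence frame
# by its intensity standard deviation and an upper quantile, with no knowledge
# of the electrode geometry. Frame source and the 0.95 quantile are assumptions.
import numpy as np

def collection_metrics(frame, q=0.95):
    """Return simple geometry-free statistics of a pDEP collection image."""
    flat = np.asarray(frame, dtype=float).ravel()
    return {
        "std": float(np.std(flat)),               # spread grows as particles collect
        "quantile": float(np.quantile(flat, q)),  # brightness of collection sites
    }

# Toy usage with a synthetic frame: dim background plus a few bright spots.
rng = np.random.default_rng(0)
frame = rng.normal(10.0, 1.0, size=(256, 256))
frame[100:110, 100:110] += 50.0
print(collection_metrics(frame))
```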

  20. Advancing image quantification methods and tools for analysis of nanoparticle electrokinetics

    Science.gov (United States)

    Bakewell, D. J.; Bailey, J.; Holmes, D.

    2013-10-01

    Image processing methods and techniques for high-throughput quantification of dielectrophoretic (DEP) collections onto planar castellated electrode arrays are developed and evaluated. Fluorescence-based dielectrophoretic spectroscopy is an important tool for laboratory investigations of AC electrokinetic properties of nanoparticles. This paper details new, first principle, theoretical and experimental developments of geometric feature recognition techniques that enable quantification of positive dielectrophoretic (pDEP) nanoparticle collections onto castellated arrays. As an alternative to the geometric-based method, novel statistical methods that do not require any information about array features, are also developed using the quantile and standard deviation functions. Data from pDEP collection and release experiments using 200 nm diameter latex nanospheres demonstrates that pDEP quantification using the statistic-based methods yields quantitatively similar results to the geometric-based method. The development of geometric- and statistic-based quantification methods enables high-throughput, supervisor-free image processing tools critical for dielectrophoretic spectroscopy and automated DEP technology development.

  1. Forest Carbon Leakage Quantification Methods and Their Suitability for Assessing Leakage in REDD

    Directory of Open Access Journals (Sweden)

    Sabine Henders

    2012-01-01

    Full Text Available This paper assesses quantification methods for carbon leakage from forestry activities for their suitability in leakage accounting in a future Reducing Emissions from Deforestation and Forest Degradation (REDD mechanism. To that end, we first conducted a literature review to identify specific pre-requisites for leakage assessment in REDD. We then analyzed a total of 34 quantification methods for leakage emissions from the Clean Development Mechanism (CDM, the Verified Carbon Standard (VCS, the Climate Action Reserve (CAR, the CarbonFix Standard (CFS, and from scientific literature sources. We screened these methods for the leakage aspects they address in terms of leakage type, tools used for quantification and the geographical scale covered. Results show that leakage methods can be grouped into nine main methodological approaches, six of which could fulfill the recommended REDD leakage requirements if approaches for primary and secondary leakage are combined. The majority of methods assessed, address either primary or secondary leakage; the former mostly on a local or regional and the latter on national scale. The VCS is found to be the only carbon accounting standard at present to fulfill all leakage quantification requisites in REDD. However, a lack of accounting methods was identified for international leakage, which was addressed by only two methods, both from scientific literature.

  2. A whole-cell electrochemical biosensing system based on bacterial inward electron flow for fumarate quantification.

    Science.gov (United States)

    Si, Rong-Wei; Zhai, Dan-Dan; Liao, Zhi-Hong; Gao, Lu; Yong, Yang-Chun

    2015-06-15

    Fumarate is of great importance as it is an oncometabolite as well as a food spoilage indicator. However, a cost-effective and fast quantification method for fumarate is lacking, although it is urgently required. This work developed an electrochemical whole-cell biosensing system for fumarate quantification. A sensitive inward electric output (electron flow from the electrode into the bacteria) in response to fumarate was characterized in Shewanella oneidensis MR-1, and an electrochemical fumarate biosensing system was developed without genetic engineering. The biosensing system delivered a symmetric current peak immediately upon fumarate addition, where the peak area increased in proportion to the fumarate concentration over a wide range of 2 μM-10 mM (R(2)=0.9997). The limit of detection (LOD) and the limit of quantification (LOQ) are 0.83 μM and 1.2 μM, respectively. This biosensing system displayed remarkable specificity to fumarate against other possible interferences. It was also successfully applied to samples of apple juice and kidney tissue. This study adds a new dimension to electrochemical biosensor design, and provides a simple, cost-effective, fast and robust tool for fumarate quantification. Copyright © 2014 Elsevier B.V. All rights reserved.
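
    The abstract reports a linear calibration range and LOD/LOQ values. The sketch below shows, with invented numbers, how such figures are commonly obtained: a linear fit of peak area against concentration and the usual 3.3·σ/slope and 10·σ/slope rules. The paper's own LOD/LOQ procedure is not stated in the abstract, so this is only a generic recipe.

```python
# Hedged sketch: a linear calibration of peak area against fumarate
# concentration, with LOD/LOQ from the common 3.3*sigma/slope and
# 10*sigma/slope rules. All numbers are made up for illustration.
import numpy as np

conc = np.array([2e-6, 1e-5, 1e-4, 1e-3, 1e-2])        # mol/L, made-up points
peak_area = np.array([0.8, 4.1, 40.5, 402.0, 4010.0])  # arbitrary units

slope, intercept = np.polyfit(conc, peak_area, 1)
residuals = peak_area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)                # residual standard deviation

lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope
print(f"slope={slope:.3g}, LOD={lod:.2e} M, LOQ={loq:.2e} M")
```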

  3. Bio-inspired networking

    CERN Document Server

    Câmara, Daniel

    2015-01-01

    Bio-inspired techniques are based on principles, or models, of biological systems. In general, natural systems present remarkable capabilities of resilience and adaptability. In this book, we explore how bio-inspired methods can solve different problems linked to computer networks. Future networks are expected to be autonomous, scalable and adaptive. During millions of years of evolution, nature has developed a number of different systems that present these and other characteristics required for the next generation networks. Indeed, a series of bio-inspired methods have been successfully used to solve the most diverse problems linked to computer networks. This book presents some of these techniques from a theoretical and practical point of view. Discusses the key concepts of bio-inspired networking to aid you in finding efficient networking solutions Delivers examples of techniques both in theoretical concepts and practical applications Helps you apply nature's dynamic resource and task management to your co...

  4. Reliability quantification and visualization for electric microgrids

    Science.gov (United States)

    Panwar, Mayank

    and parallel with the area Electric Power Systems (EPS), (3) includes the local EPS and may include portions of the area EPS, and (4) is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. The reliability can be quantified through various metrics for performance measure. This is done through North American Electric Reliability Corporation (NERC) metrics in North America. The microgrid differs significantly from the traditional EPS, especially at asset level due to heterogeneity in assets. Thus, the performance cannot be quantified by the same metrics as used for EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and group of assets in a microgrid. Two more metrics are introduced for system level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation which is explored in detail and a graphical user interface (GUI) is developed as a deliverable tool to the operator for informative decision making and planning. Electronic appendices-I and II contain data and MATLAB© program codes for analysis and visualization for this work.

  5. Quantification of Urine Elimination Behaviors in Cats with a Video Recording System

    OpenAIRE

    R. Dulaney, D.; Hopfensperger, M.; Malinowski, R.; Hauptman, J.; Kruger, J M

    2017-01-01

    Background Urinary disorders in cats often require subjective caregiver quantification of clinical signs to establish a diagnosis and monitor therapeutic outcomes. Objective To investigate use of a video recording system (VRS) to better assess and quantify urination behaviors in cats. Animals Eleven healthy cats and 8 cats with disorders potentially associated with abnormal urination patterns. Methods Prospective study design. Litter box urination behaviors were quantified with a VRS for 14 d...

  6. Quantification of organ motion during chemoradiotherapy of rectal cancer using cone-beam computed tomography.

    LENUS (Irish Health Repository)

    Chong, Irene

    2011-11-15

    There has been no previously published data related to the quantification of rectal motion using cone-beam computed tomography (CBCT) during standard conformal long-course chemoradiotherapy. The purpose of the present study was to quantify the interfractional changes in rectal movement and dimensions and rectal and bladder volume using CBCT and to quantify the bony anatomy displacements to calculate the margins required to account for systematic (Σ) and random (σ) setup errors.
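
    The abstract mentions calculating margins from systematic (Σ) and random (σ) setup errors. One widely used recipe for this is the van Herk margin, 2.5Σ + 0.7σ; whether this particular recipe was used in the study is not stated, and the displacement values in the sketch below are invented.

```python
# Hedged sketch: one widely used CTV-to-PTV margin recipe (van Herk,
# margin = 2.5*Sigma + 0.7*sigma). The abstract does not state which recipe
# the authors applied, and the displacement values below are invented.
import numpy as np

def van_herk_margin(systematic_sd, random_sd):
    """Margin (same units as inputs) from systematic and random setup errors."""
    return 2.5 * systematic_sd + 0.7 * random_sd

# Per-patient mean displacements (cm) in one axis, and per-patient SDs.
patient_means = np.array([0.12, -0.05, 0.20, 0.03])     # invented values
per_fraction_sds = np.array([0.15, 0.22, 0.18, 0.20])   # invented values

Sigma = patient_means.std(ddof=1)                 # systematic error (SD of means)
sigma = np.sqrt(np.mean(per_fraction_sds ** 2))   # random error (RMS of SDs)
print(f"Sigma={Sigma:.2f} cm, sigma={sigma:.2f} cm, "
      f"margin={van_herk_margin(Sigma, sigma):.2f} cm")
```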

  7. Bayesian Statistics and Uncertainty Quantification for Safety Boundary Analysis in Complex Systems

    Science.gov (United States)

    He, Yuning; Davies, Misty Dawn

    2014-01-01

    The analysis of a safety-critical system often requires detailed knowledge of safe regions and their high-dimensional non-linear boundaries. We present a statistical approach to iteratively detect and characterize the boundaries, which are provided as parameterized shape candidates. Using methods from uncertainty quantification and active learning, we incrementally construct a statistical model from only a few simulation runs and obtain statistically sound estimates of the shape parameters for safety boundaries.

  8. Plasmonic nanosensors for simultaneous quantification of multiple protein-protein binding affinities.

    Science.gov (United States)

    Ahijado-Guzmán, Rubén; Prasad, Janak; Rosman, Christina; Henkel, Andreas; Tome, Lydia; Schneider, Dirk; Rivas, Germán; Sönnichsen, Carsten

    2014-10-08

    Most current techniques used for the quantification of protein-protein interactions require the analysis of one pair of binding partners at a time. Herein we present a label-free, simple, fast, and cost-effective route to characterize binding affinities between multiple macromolecular partners simultaneously, using optical dark-field spectroscopy and individual protein-functionalized gold nanorods as sensing elements. Our NanoSPR method could easily become a simple and standard tool in biological, biochemical, and medical laboratories.
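
    A generic way to turn a per-particle sensor response into a binding affinity is to fit a Langmuir-type isotherm and read off the dissociation constant. The sketch below does exactly that with invented data; it is not the authors' NanoSPR analysis pipeline.

```python
# Hedged sketch: extract a dissociation constant K_d by fitting a Langmuir-type
# binding isotherm to a sensor response (e.g. plasmon shift) versus analyte
# concentration. Data points are invented; this is not the authors' pipeline.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, shift_max, kd):
    """Equilibrium fraction bound times the maximum response."""
    return shift_max * c / (kd + c)

conc = np.array([1e-9, 3e-9, 1e-8, 3e-8, 1e-7, 3e-7])   # mol/L, invented
shift = np.array([0.4, 1.0, 2.1, 3.2, 4.0, 4.4])         # nm, invented

popt, pcov = curve_fit(langmuir, conc, shift, p0=[5.0, 1e-8])
shift_max, kd = popt
print(f"fitted K_d = {kd:.2e} M, maximum shift = {shift_max:.2f} nm")
```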

  9. BiofilmQuant: A Computer-Assisted Tool for Dental Biofilm Quantification

    OpenAIRE

    Mansoor, Awais; Patsekin, Valery; Scherl, Dale; Robinson, J. Paul; Rajwa, Bartlomiej

    2014-01-01

    Dental biofilm is the deposition of microbial material over a tooth substratum. Several methods have recently been reported in the literature for biofilm quantification; however, at best they provide a barely automated solution requiring significant input from the human expert. Conversely, state-of-the-art automatic biofilm methods fail to make their way into clinical practice because of the lack of an effective mechanism to incorporate human input to handle praxis or misclassified r...

  10. MPLS for metropolitan area networks

    CERN Document Server

    Tan, Nam-Kee

    2004-01-01

    METROPOLITAN AREA NETWORKS AND MPLS: Requirements of Metropolitan Area Network Services; Metropolitan Area Network Overview; The Bandwidth Demand; The Metro Service Provider's Business Approaches; The Emerging Metro Customer Expectations and Needs; Some Prevailing Metro Service Opportunities; Service Aspects and Requirements; Roles of MPLS in Metropolitan Area Networks; MPLS Primer; MPLS Applications. TRAFFIC ENGINEERING ASPECTS OF METROPOLITAN AREA NETWORKS: Traffic Engineering Concepts; Network Congestion; Hyper Aggregation Problem; Easing Congestion; Network Control; Tactical versus Strategic Traffic Engineering; IP/ATM Overl...

  11. Evaluation of Network Failure induced IPTV degradation in Metro Networks

    DEFF Research Database (Denmark)

    Wessing, Henrik; Berger, Michael Stübert; Yu, Hao

    2009-01-01

    In this paper, we evaluate future network services and classify them according to their network requirements. IPTV is used as a candidate service to evaluate the performance of Carrier Ethernet OAM update mechanisms and requirements. The latter is done through quality measurements using MDI … and subjective evaluations. It is concluded that an OAM interval close to 10 ms is a suitable choice for service providers.

  12. The network researchers' network

    DEFF Research Database (Denmark)

    Henneberg, Stephan C.; Jiang, Zhizhong; Naudé, Peter

    2009-01-01

    The Industrial Marketing and Purchasing (IMP) Group is a network of academic researchers working in the area of business-to-business marketing. The group meets every year to discuss and exchange ideas, with a conference having been held every year since 1984 (there was no meeting in 1987). In this paper, based upon the papers presented at the 22 conferences held to date, we undertake a Social Network Analysis in order to examine the degree of co-publishing that has taken place between this group of researchers. We identify the different components in this database, and examine the large main...

  13. Interpretation and comparative quantification of myocardial perfusion tomoscintigraphy performed with a gamma camera and a semiconductor camera

    Energy Technology Data Exchange (ETDEWEB)

    Merlin, C.; Gauthe, M.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F. [CLCC Jean-Perrin, Service de medecine nucleaire, 63 - Clermont-Ferrand (France); Motreff, P. [CHRU Gabriel Montpied, Service de Cardiologie, 63 - Clermont-Ferrand (France)

    2010-05-15

    By offering high-quality images, semiconductor cameras represent an undeniable technological advance. The interpretation of examinations, however, requires a learning phase. The optimization of quantification software should confirm the superiority of the D-SPECT for the measurement of kinetic parameters. (N.C.)

  14. Using Social Network Measures in Wildlife Disease Ecology, Epidemiology, and Management

    OpenAIRE

    Silk, Matthew J; Darren P Croft; Delahay, Richard J.; Hodgson, David J.; Boots, Mike; Weber, Nicola; McDonald, Robbie A.

    2017-01-01

    Abstract Contact networks, behavioral interactions, and shared use of space can all have important implications for the spread of disease in animals. Social networks enable the quantification of complex patterns of interactions; therefore, network analysis is becoming increasingly widespread in the study of infectious disease in animals, including wildlife. We present an introductory guide to using social-network-analytical approaches in wildlife disease ecology, epidemiology, and management....

  15. Protection of electricity distribution networks

    CERN Document Server

    Gers, Juan M

    2004-01-01

    Written by two practicing electrical engineers, this second edition of the bestselling Protection of Electricity Distribution Networks offers both practical and theoretical coverage of the technologies, from the classical electromechanical relays to the new numerical types, which protect equipment on networks and in electrical plants. A properly coordinated protection system is vital to ensure that an electricity distribution network can operate within preset requirements for safety for individual items of equipment, staff and public, and the network overall. Suitable and reliable equipment sh

  16. Aeroelastic Uncertainty Quantification Studies Using the S4T Wind Tunnel Model

    Science.gov (United States)

    Nikbay, Melike; Heeg, Jennifer

    2017-01-01

    This paper originates from the joint efforts of an aeroelastic study team in the Applied Vehicle Technology Panel of the NATO Science and Technology Organization, Task Group AVT-191, titled "Application of Sensitivity Analysis and Uncertainty Quantification to Military Vehicle Design." We present aeroelastic uncertainty quantification studies using the SemiSpan Supersonic Transport wind tunnel model at the NASA Langley Research Center. The aeroelastic study team decided to treat both structural and aerodynamic input parameters as uncertain and represent them as samples drawn from statistical distributions, propagating them through aeroelastic analysis frameworks. Uncertainty quantification processes require many function evaluations to assess the impact of variations in numerous parameters on the vehicle characteristics, rapidly increasing the computational time requirement relative to that required to assess a system deterministically. The increased computational time is particularly prohibitive if high-fidelity analyses are employed. As a remedy, the Istanbul Technical University team employed an Euler solver in an aeroelastic analysis framework, and implemented reduced order modeling with Polynomial Chaos Expansion and Proper Orthogonal Decomposition to perform the uncertainty propagation. The NASA team chose to reduce the prohibitive computational time by employing linear solution processes. The NASA team also focused on determining input sample distributions.

  17. Organization of complex networks

    Science.gov (United States)

    Kitsak, Maksim

    Many large complex systems can be successfully analyzed using the language of graphs and networks. Interactions between the objects in a network are treated as links connecting nodes. This approach to understanding the structure of networks is an important step toward understanding the way corresponding complex systems function. Using the tools of statistical physics, we analyze the structure of networks as they are found in complex systems such as the Internet, the World Wide Web, and numerous industrial and social networks. In the first chapter we apply the concept of self-similarity to the study of transport properties in complex networks. Self-similar or fractal networks, unlike non-fractal networks, exhibit similarity on a range of scales. We find that these fractal networks have transport properties that differ from those of non-fractal networks. In non-fractal networks, transport flows primarily through the hubs. In fractal networks, the self-similar structure requires any transport to also flow through nodes that have only a few connections. We also study, in models and in real networks, the crossover from fractal to non-fractal networks that occurs when a small number of random interactions are added by means of scaling techniques. In the second chapter we use k-core techniques to study dynamic processes in networks. The k-core of a network is the network's largest component that, within itself, exhibits all nodes with at least k connections. We use this k-core analysis to estimate the relative leadership positions of firms in the Life Science (LS) and Information and Communication Technology (ICT) sectors of industry. We study the differences in the k-core structure between the LS and the ICT sectors. We find that the lead segment (highest k-core) of the LS sector, unlike that of the ICT sector, is remarkably stable over time: once a particular firm enters the lead segment, it is likely to remain there for many years. In the third chapter we study how
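
    The k-core analysis mentioned above is straightforward to reproduce on any graph. The short sketch below uses networkx to compute core numbers and extract the innermost (highest-k) core of a toy network, as an analogue of the "lead segment" discussed in the chapter.

```python
# Quick sketch of k-core analysis with networkx: core numbers per node and the
# innermost (highest-k) core of a toy graph. The graph itself is arbitrary.
import networkx as nx

G = nx.barabasi_albert_graph(200, 3, seed=42)   # toy scale-free-like network

core_number = nx.core_number(G)                 # k-shell index of every node
k_max = max(core_number.values())
lead_segment = nx.k_core(G, k=k_max)            # analogue of the "lead segment"

print(f"largest core index: {k_max}, nodes in it: {lead_segment.number_of_nodes()}")
```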

  18. The application study on building materials with computer color quantification system

    Science.gov (United States)

    Li, Zhendong; Yu, Haiye; Li, Hongnan; Zhao, Hongxia

    2006-01-01

    A building's first impression comes from its exterior and decoration, so the quality of the decoration work occupies an important position in a building project. Many projects suffer quality problems because of colour differences between materials; these occur universally in ordinary projects and are often found in high-grade decoration. Grasping and controlling the colour variation of building materials, and quantifying it, is therefore of considerable importance. Based on colour theory, a computer vision system for colour quantification measurement is established, with standard illuminant A selected as the light source. To standardize colour evaluation, the mutual conversion between the RGB and XYZ colour spaces is studied and realized with a BP (back-propagation) neural network. A computer program based on colorimetry theory is written to build the software system and to perform quantitative colour appraisal over the whole colour gamut. The LCH model is used to quantify the colour of building materials, and the L*a*b* model is used to compare colour changes. If wooden flooring is selected and laid improperly during home fitting, a "flower face" (patchy) appearance easily results, and colour discrepancies also arise between laths cut from the same tree. The colour quantification system can provide a laying scheme; the colour difference problem in laying stone materials is also studied, and a solution scheme is given using the same system.
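
    For comparison with the BP-network conversion described above, the sketch below runs the textbook colorimetry pipeline instead: standard sRGB (D65) conversion to XYZ and L*a*b*, followed by a ΔE*ab colour difference between two material colours. The standard matrices and D65 white point are assumptions that differ from the paper's illuminant A setup, and the RGB values are invented.

```python
# Hedged illustration: standard sRGB (D65) conversion to XYZ and CIE L*a*b*,
# then a Delta-E*ab colour difference between two material colours. The paper
# instead trains a BP network and uses illuminant A; this is only the textbook
# colorimetry pipeline for comparison.
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])
WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

def srgb_to_lab(rgb8):
    rgb = np.asarray(rgb8, dtype=float) / 255.0
    rgb = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = M_SRGB_TO_XYZ @ rgb / WHITE_D65
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb1, rgb2):
    """CIE76 colour difference between two sRGB colours."""
    return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

# Two hypothetical wood-lath colours; a large Delta-E flags a visible mismatch.
print(f"Delta E = {delta_e([180, 140, 100], [168, 133, 92]):.2f}")
```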

  19. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (4) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92, and 0.95 and 0.97, for local and PPMI data respectively. Classification...
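
    The sketch below contrasts the two families of methods on synthetic data: a semi-quantification rule (putamen SBR below a mean-minus-2-SD normal limit) versus an SVM trained on the same SBR features, evaluated with stratified cross-validation. The simulated SBR distributions are assumptions and the code is not the study's pipeline.

```python
# Hedged sketch contrasting a semi-quantification rule (putamen SBR below a
# normal limit -> abnormal) with an SVM trained on SBR features, evaluated by
# stratified cross-validation. The data are synthetic, not PPMI or local data.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
labels = rng.integers(0, 2, size=n)                      # 0 = normal, 1 = PD-like
putamen = np.where(labels == 0, rng.normal(2.4, 0.3, n), rng.normal(1.4, 0.3, n))
caudate = np.where(labels == 0, rng.normal(3.0, 0.3, n), rng.normal(2.2, 0.4, n))
X = np.column_stack([putamen, caudate])

# Semi-quantification: normal limit = mean - 2 SD of the "normal" group.
limit = putamen[labels == 0].mean() - 2 * putamen[labels == 0].std()
semi_quant_acc = np.mean((putamen < limit).astype(int) == labels)

# Machine learning: SVM on the same two SBR features.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
svm_acc = cross_val_score(svm, X, labels, cv=cv).mean()

print(f"semi-quantification accuracy: {semi_quant_acc:.2f}, SVM accuracy: {svm_acc:.2f}")
```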

  20. Accurate GM atrophy quantification in MS using lesion-filling with co-registered 2D lesion masks.

    NARCIS (Netherlands)

    Popescu, V.; Ran, N.C.G.; Barkhof, F.; Chard, D.T.; Wheeler-Kingshott, C.A.M.; Vrenken, H.

    2014-01-01

    Background In multiple sclerosis (MS), brain atrophy quantification is affected by white matter lesions. LEAP and FSL lesion filling replace lesion voxels with white matter intensities; however, they require precise lesion identification on 3DT1-images. Aim To determine whether 2DT2 lesion masks

  1. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...

  2. Aerodynamic Modeling with Heterogeneous Data Assimilation and Uncertainty Quantification Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Clear Science Corp. proposes to develop an aerodynamic modeling tool that assimilates data from different sources and facilitates uncertainty quantification. The...

  3. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...

  4. Network cosmology.

    Science.gov (United States)

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology.

  5. Network Cosmology

    Science.gov (United States)

    Krioukov, Dmitri; Kitsak, Maksim; Sinkovits, Robert S.; Rideout, David; Meyer, David; Boguñá, Marián

    2012-01-01

    Prediction and control of the dynamics of complex networks is a central problem in network science. Structural and dynamical similarities of different real networks suggest that some universal laws might accurately describe the dynamics of these networks, albeit the nature and common origin of such laws remain elusive. Here we show that the causal network representing the large-scale structure of spacetime in our accelerating universe is a power-law graph with strong clustering, similar to many complex networks such as the Internet, social, or biological networks. We prove that this structural similarity is a consequence of the asymptotic equivalence between the large-scale growth dynamics of complex networks and causal networks. This equivalence suggests that unexpectedly similar laws govern the dynamics of complex networks and spacetime in the universe, with implications to network science and cosmology. PMID:23162688

  6. Evaluation of Perfusion Quantification Methods with Ultrasound Contrast Agents in a Machine-Perfused Pig Liver.

    Science.gov (United States)

    Averkiou, Michalakis; Keravnou, Christina P; Izamis, Maria Louisa; Leen, Edward

    2018-02-01

    To evaluate dynamic contrast-enhanced ultrasound (DCEUS) as a tool for measuring blood flow in the macro- and microcirculation of an ex-vivo machine-perfused pig liver and to confirm the ability of DCEUS to accurately detect induced flow rate changes so that it could then be used clinically for monitoring flow changes in liver tumors. Bolus injections of contrast agents in the hepatic artery (HA) and portal vein (PV) were administered to 3 machine-perfused pig livers. Flow changes were induced by the pump of the machine perfusion system. The induced flow rates were of clinical relevance (150-400 ml/min for HA and 400-1400 ml/min for PV). Quantification parameters from time-intensity curves [rise time (RT), mean transit time (MTT), area under the curve (AUC) and peak intensity (PI)] were extracted in order to evaluate whether the induced flow changes were reflected in these parameters. A linear relationship between the image intensity and the microbubble concentration was confirmed first, while time parameters (RT and MTT) were found to be independent of concentration. The induced flow changes, which propagated from the larger vessels to the parenchyma, were reflected in the quantification parameters. Specifically, RT, MTT and AUC correlated with flow rate changes. The machine-perfused pig liver is an excellent test bed for DCEUS quantification approaches for the study of the hepatic vascular networks. DCEUS quantification parameters (RT, MTT, and AUC) can measure relative flow changes of about 20 % and above in the liver vasculature. DCEUS quantification is a promising tool for real-time monitoring of the vascular network of tumors. © Georg Thieme Verlag KG Stuttgart · New York.
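
    The time-intensity curve parameters named above (RT, MTT, AUC, PI) can be computed from a bolus curve in a few lines. The sketch below uses a synthetic, noise-free curve and simple definitions (10%-to-90% rise time, intensity-weighted mean time); the definitions implemented in commercial DCE-US quantification software may differ.

```python
# Hedged sketch: common bolus time-intensity curve parameters on a synthetic
# curve. RT is taken as the 10%-to-90% rise time and MTT as the
# intensity-weighted mean time; real DCE-US software may use model fitting.
import numpy as np

def trapz(y, x):
    """Simple trapezoidal integration (avoids NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

t = np.linspace(0.0, 60.0, 601)                  # seconds
intensity = (t / 8.0) ** 2 * np.exp(-t / 8.0)    # synthetic bolus-like curve

peak_idx = int(np.argmax(intensity))
pi = float(intensity[peak_idx])                  # peak intensity (PI)
auc = trapz(intensity, t)                        # area under the curve (AUC)

rising = intensity[: peak_idx + 1]               # monotonic rising limb
t10 = t[np.searchsorted(rising, 0.1 * pi)]
t90 = t[np.searchsorted(rising, 0.9 * pi)]
rt = float(t90 - t10)                            # rise time (RT), 10% -> 90%

mtt = trapz(t * intensity, t) / auc              # intensity-weighted mean time (MTT)

print(f"PI={pi:.3f}, RT={rt:.1f} s, AUC={auc:.1f}, MTT={mtt:.1f} s")
```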

  7. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which provide potential reasons for poor genotyping results and may indicate methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving the precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Future Network Architectures

    DEFF Research Database (Denmark)

    Wessing, Henrik; Bozorgebrahimi, Kurosh; Belter, Bartosz

    2015-01-01

    This study identifies key requirements for NRENs towards future network architectures that become apparent as users become more mobile and have increased expectations in terms of availability of data. In addition, cost saving requirements call for federated use of, in particular, the optical...

  9. Surface Enhanced Raman Spectroscopy (SERS) methods for endpoint and real-time quantification of miRNA assays

    Science.gov (United States)

    Restaino, Stephen M.; White, Ian M.

    2017-03-01

    Surface Enhanced Raman spectroscopy (SERS) provides significant improvements over conventional methods for single and multianalyte quantification. Specifically, the spectroscopic fingerprint provided by Raman scattering allows for a direct multiplexing potential far beyond that of fluorescence and colorimetry. Additionally, SERS generates a comparatively low financial and spatial footprint compared with common fluorescence based systems. Despite the advantages of SERS, it has remained largely an academic pursuit. In the field of biosensing, techniques to apply SERS to molecular diagnostics are constantly under development but, most often, assay protocols are redesigned around the use of SERS as a quantification method and ultimately complicate existing protocols. Our group has sought to rethink common SERS methodologies in order to produce translational technologies capable of allowing SERS to compete in the evolving, yet often inflexible biosensing field. This work will discuss the development of two techniques for quantification of microRNA, a promising biomarker for homeostatic and disease conditions ranging from cancer to HIV. First, an inkjet-printed paper SERS sensor has been developed to allow on-demand production of a customizable and multiplexable single-step lateral flow assay for miRNA quantification. Second, as miRNA concentrations commonly exist in relatively low concentrations, amplification methods (e.g. PCR) are therefore required to facilitate quantification. This work presents a novel miRNA assay alongside a novel technique for quantification of nuclease driven nucleic acid amplification strategies that will allow SERS to be used directly with common amplification strategies for quantification of miRNA and other nucleic acid biomarkers.

  10. Adjoint-Based Uncertainty Quantification with MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)

    2011-09-01

    This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.

  11. Tutorial examples for uncertainty quantification methods.

    Energy Technology Data Exchange (ETDEWEB)

    De Bord, Sarah [Univ. of California, Davis, CA (United States)

    2015-08-01

    This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.

  12. Uncertainty quantification and stochastic modeling with Matlab

    CERN Document Server

    Souza de Cursi, Eduardo

    2015-01-01

    Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no

  13. The Modest, or Quantificational, Account of Truth

    Directory of Open Access Journals (Sweden)

    Wolfgang Künne

    2008-12-01

    Full Text Available Truth is a stable, epistemically unconstrained property of propositions, and the concept of truth admits of a non-reductive explanation: that, in a nutshell, is the view for which I argued in Conceptions of Truth. In this paper I try to explain that explanation in a more detailed and, hopefully, more perspicuous way than I did in Ch. 6.2 of the book and to defend its use of sentential quantification against some of the criticisms it has come in for.

  14. Computer-assisted quantification of CD3+ T cells in follicular lymphoma.

    Science.gov (United States)

    Abas, Fazly S; Shana'ah, Arwa; Christian, Beth; Hasserjian, Robert; Louissaint, Abner; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Niazi, Muhammad Khalid Khan; Lozanski, Gerard; Gurcan, Metin

    2017-06-01

    The advance of high resolution digital scans of pathology slides allowed development of computer-based image analysis algorithms that may help pathologists in IHC stain quantification. While very promising, these methods require further refinement before they are implemented in routine clinical setting. Particularly critical is to evaluate algorithm performance in a setting similar to current clinical practice. In this article, we present a pilot study that evaluates the use of a computerized cell quantification method in the clinical estimation of CD3 positive (CD3+) T cells in follicular lymphoma (FL). Our goal is to demonstrate the degree to which computerized quantification is comparable to the practice of estimation by a panel of expert pathologists. The computerized quantification method uses entropy based histogram thresholding to separate brown (CD3+) and blue (CD3-) regions after a color space transformation. A panel of four board-certified hematopathologists evaluated a database of 20 FL images using two different reading methods: visual estimation and manual marking of each CD3+ cell in the images. These image data and the readings provided a reference standard and the range of variability among readers. Sensitivity and specificity measures of the computer's segmentation of CD3+ and CD3- T cells are recorded. For all four pathologists, mean sensitivity and specificity measures are 90.97 and 88.38%, respectively. The computerized quantification method agrees more with the manual cell marking as compared to the visual estimations. Statistical comparison between the computerized quantification method and the pathologist readings demonstrated good agreement with correlation coefficient values of 0.81 and 0.96 in terms of Lin's concordance correlation and Spearman's correlation coefficient, respectively. These values are higher than most of those calculated among the pathologists. In the future, the computerized quantification method may be used to investigate
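
    The computerized method is described as an entropy-based histogram threshold after a colour-space transformation. The hedged sketch below illustrates the same general idea with off-the-shelf tools: skimage's HED colour deconvolution to isolate the DAB (brown) channel and Yen's automatic threshold as a stand-in for the paper's own transform and threshold. The input file name is hypothetical.

```python
# Hedged sketch of the general idea: a colour-space transform followed by an
# automatic histogram threshold to separate DAB-brown (CD3+) from
# haematoxylin-blue pixels. skimage's HED deconvolution and Yen's threshold
# stand in for the paper's own transform and entropy-based threshold.
import numpy as np
from skimage import io
from skimage.color import rgb2hed
from skimage.filters import threshold_yen

rgb = io.imread("follicular_lymphoma_cd3.png")[:, :, :3]   # hypothetical image
dab = rgb2hed(rgb)[:, :, 2]                    # DAB (brown) channel

mask = dab > threshold_yen(dab)                # automatic histogram threshold
cd3_fraction = 100.0 * mask.mean()
print(f"CD3+ (DAB-positive) area: {cd3_fraction:.1f}% of the image")
```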

  15. Network Medicine: A Network-based Approach to Human Diseases

    Science.gov (United States)

    Ghiassian, Susan Dina

    With the availability of large-scale data, it is now possible to systematically study the underlying interaction maps of many complex systems in multiple disciplines. Statistical physics has a long and successful history in modeling and characterizing systems with a large number of interacting individuals. Indeed, numerous approaches that were first developed in the context of statistical physics, such as the notion of random walks and diffusion processes, have been applied successfully to study and characterize complex systems in the context of network science. Based on these tools, network science has made important contributions to our understanding of many real-world, self-organizing systems, for example in computer science, sociology and economics. Biological systems are no exception. Indeed, recent studies reflect the necessity of applying statistical and network-based approaches in order to understand complex biological systems, such as cells. In these approaches, a cell is viewed as a complex network consisting of interactions among cellular components, such as genes and proteins. Given the cellular network as a platform, machinery, functionality and failure of a cell can be studied with network-based approaches, a field known as systems biology. Here, we apply network-based approaches to explore human diseases and their associated genes within the cellular network. This dissertation is divided in three parts: (i) A systematic analysis of the connectivity patterns among disease proteins within the cellular network. The quantification of these patterns inspires the design of an algorithm which predicts a disease-specific subnetwork containing yet unknown disease associated proteins. (ii) We apply the introduced algorithm to explore the common underlying mechanism of many complex diseases. We detect a subnetwork from which inflammatory processes initiate and result in many autoimmune diseases. (iii) The last chapter of this dissertation describes the

  16. Adaptive autonomous Communications Routing Optimizer for Network Efficiency Management Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Maximizing network efficiency for NASA's Space Networking resources is a large, complex, distributed problem, requiring substantial collaboration. We propose the...

  17. How women organize social networks different from men.

    Science.gov (United States)

    Szell, Michael; Thurner, Stefan

    2013-01-01

    Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online-game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that females and males manage their social networks in substantially different ways.

  18. Atmospheric inversion for cost effective quantification of city CO2 emissions

    Science.gov (United States)

    Wu, L.; Broquet, G.; Ciais, P.; Bellassen, V.; Vogel, F.; Chevallier, F.; Xueref-Remy, I.; Wang, Y.

    2015-11-01

    larger than the target of 5 %. By extending the network from 10 to 70 stations, the inversion can meet this requirement. As for major sectoral CO2 emissions, the uncertainties in the inverted emissions using 70 stations are reduced significantly, relative to those obtained using 10 stations, by 32 % for commercial and residential buildings, by 33 % for road transport and by 18 % for the production of energy by power plants, respectively. With 70 stations, the inversion uncertainties fall to a 15 % 2-sigma annual uncertainty for dispersed building emissions, and 18 % for emissions from road transport and energy production. The inversion performance could be further improved by optimal design of station locations and/or by assimilating additional atmospheric measurements of species that are co-emitted with CO2 by fossil fuel combustion processes with a specific signature from each sector, such as carbon monoxide (CO). Atmospheric inversions based on continuous CO2 measurements from a large number of cheap sensors can thus deliver a valuable quantification tool for the monitoring and/or verification of city CO2 emissions (baseline) and CO2 emission reductions (commitments).

  19. Differential network analysis with multiply imputed lipidomic data.

    Directory of Open Access Journals (Sweden)

    Maiju Kujala

    Full Text Available The importance of lipids for cell function and health has been widely recognized; e.g., a disorder in the lipid composition of cells has been related to atherosclerosis-caused cardiovascular disease (CVD). Lipidomics analyses are characterized by a large, though not huge, number of mutually correlated variables, and their associations to outcomes are potentially of a complex nature. Differential network analysis provides a formal statistical method capable of inferential analysis to examine differences in network structures of the lipids under two biological conditions. It also guides us to identify potential relationships requiring further biological investigation. We provide a recipe for conducting a permutation test on association scores resulting from partial least squares regression with multiply imputed lipidomic data from the LUdwigshafen RIsk and Cardiovascular Health (LURIC) study, particularly paying attention to the left-censored missing values typical for a wide range of data sets in life sciences. Left-censored missing values are low-level concentrations that are known to exist somewhere between zero and a lower limit of quantification. To make full use of the LURIC data with the missing values, we utilize state-of-the-art multiple imputation techniques and propose solutions to the challenges that incomplete data sets bring to differential network analysis. The customized network analysis helps us to understand the complexities of the underlying biological processes by identifying lipids and lipid classes that interact with each other, and by recognizing the most important differentially expressed lipids between two subgroups of coronary artery disease (CAD) patients: the patients that had a fatal CVD event and the ones who remained stable during two-year follow-up.
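
    A much simplified sketch of the permutation-test recipe: compute an association score between a lipid and the outcome, rebuild its null distribution by permuting the outcome, and pool the resulting p-values across imputed data sets. Here the score is a plain correlation and the pooling is a simple average, both stand-ins for the paper's PLS-based statistic and pooling rules; the "imputed" data are simulated.

```python
# Hedged, simplified sketch of a permutation test pooled over multiply imputed
# data sets: the association score here is a plain correlation rather than the
# paper's PLS-regression score, and the imputed data are simulated.
import numpy as np

rng = np.random.default_rng(1)

def score(x, y):
    """Association score between one lipid and the outcome (absolute r)."""
    return abs(np.corrcoef(x, y)[0, 1])

def permutation_pvalue(x, y, n_perm=2000):
    observed = score(x, y)
    null = np.array([score(x, rng.permutation(y)) for _ in range(n_perm)])
    return (1 + np.sum(null >= observed)) / (1 + n_perm)

# Five "imputed" versions of the same lipid (left-censored values filled in
# differently each time) and one binary outcome; p-values pooled by averaging.
n = 80
outcome = rng.integers(0, 2, size=n)
imputations = [rng.normal(loc=outcome * 0.8, scale=1.0, size=n) for _ in range(5)]

pvals = [permutation_pvalue(x, outcome) for x in imputations]
print(f"per-imputation p-values: {np.round(pvals, 3)}, pooled (mean): {np.mean(pvals):.3f}")
```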

  20. Quantification of prebiotics in commercial infant formulas.

    Science.gov (United States)

    Sabater, Carlos; Prodanov, Marin; Olano, Agustín; Corzo, Nieves; Montilla, Antonia

    2016-03-01

    Since breastfeeding is not always possible, infant formulas (IFs) are supplemented with prebiotic oligosaccharides, such as galactooligosaccharides (GOS) and/or fructooligosaccharides (FOS), to exert similar effects to those of breast milk. Nowadays, a great number of infant formulas enriched with prebiotics are available on the market; however, data about their composition are scarce. In this study, two chromatographic methods (GC-FID and HPLC-RID) were combined for the quantification of carbohydrates present in commercial infant formulas. According to the results obtained by GC-FID for products containing prebiotics, the content of FOS, GOS and GOS/FOS was in the ranges of 1.6-5.0, 1.7-3.2, and 0.08-0.25/2.3-3.8 g/100 g of product, respectively. HPLC-RID analysis allowed quantification of maltodextrins with degree of polymerization (DP) up to 19. The methodology proposed here may be used for routine quality control of infant formula and other food ingredients containing prebiotics. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Poisson Plus Quantification for Digital PCR Systems.

    Science.gov (United States)

    Majumdar, Nivedita; Banerjee, Swapnonil; Pallas, Michael; Wessel, Thomas; Hegerich, Patricia

    2017-08-29

    Digital PCR, a state-of-the-art nucleic acid quantification technique, works by spreading the target material across a large number of partitions. The average number of molecules per partition is estimated using Poisson statistics, and then converted into concentration by dividing by the partition volume. In this standard approach, identical partition sizing is assumed. Violations of this assumption result in underestimation of the target quantity when using Poisson modeling, especially at higher concentrations. The Poisson-Plus Model corrects for this underestimation, provided the statistics of the volume variation are well characterized. The volume variation was measured on the chip-array-based QuantStudio 3D Digital PCR System using the ROX fluorescence level as a proxy for the effective load volume per through-hole. Monte Carlo simulations demonstrate the efficacy of the proposed correction. Empirical measurement of the model parameters characterizing the effective load volume on QuantStudio 3D Digital PCR chips is presented. The model was used to analyze digital PCR experiments and showed improved accuracy in quantification. At higher concentrations, the modeling must take effective fill volume variation into account to produce accurate estimates. The extent of the difference between the standard and the new modeling is positively correlated with the extent of fill volume variation in the effective load of the reactions.
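
    A minimal Python sketch of the standard Poisson estimate described above (not the Poisson-Plus correction itself, which additionally requires the measured volume-variation statistics); the partition counts and through-hole volume are illustrative values, not data from the study.

      import math

      def dpcr_concentration(n_positive, n_total, partition_volume_nl):
          """Standard Poisson estimate of target concentration in digital PCR.

          Assumes identical partition volumes; violations of this assumption
          are what the Poisson-Plus model corrects for.
          """
          p = n_positive / n_total              # fraction of positive partitions
          lam = -math.log(1.0 - p)              # mean molecules per partition (Poisson)
          return lam / partition_volume_nl      # copies per nanolitre

      # Illustrative example: 12,000 of 20,000 through-holes positive, 0.8 nL each
      print(dpcr_concentration(12000, 20000, 0.8))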

  2. CT quantification of central airway in tracheobronchomalacia

    Energy Technology Data Exchange (ETDEWEB)

    Im, Won Hyeong; Jin, Gong Yong; Han, Young Min; Kim, Eun Young [Dept. of Radiology, Chonbuk National University Hospital, Jeonju (Korea, Republic of)

    2016-05-15

    To determine which factors help to diagnose tracheobronchomalacia (TBM) using CT quantification of the central airway. From April 2013 to July 2014, 19 patients (68.0 ± 15.0 years; 6 male, 13 female) were diagnosed with TBM on CT. As case-matched controls, 38 normal subjects (65.5 ± 21.5 years; 6 male, 13 female) were selected. All 57 subjects underwent CT at end-inspiration and end-expiration. Airway parameters of the trachea and both main bronchi were assessed using software (VIDA diagnostic). Airway parameters of TBM patients and normal subjects were compared using the Student t-test. In expiration, both wall perimeter and wall thickness in TBM patients were significantly smaller than in normal subjects (wall perimeter: trachea, 43.97 mm vs. 49.04 mm, p = 0.020; right main bronchus, 33.52 mm vs. 42.69 mm, p < 0.001; left main bronchus, 26.76 mm vs. 31.88 mm, p = 0.012; wall thickness: trachea, 1.89 mm vs. 2.22 mm, p = 0.017; right main bronchus, 1.64 mm vs. 1.83 mm, p = 0.021; left main bronchus, 1.61 mm vs. 1.75 mm, p = 0.016). Wall thinning and a decreased perimeter of the central airway at expiration, as quantified by CT, could serve as new diagnostic indicators of TBM.

  3. Quantification of complex modular architecture in plants.

    Science.gov (United States)

    Reeb, Catherine; Kaandorp, Jaap; Jansson, Fredrik; Puillandre, Nicolas; Dubuisson, Jean-Yves; Cornette, Raphaël; Jabbour, Florian; Coudert, Yoan; Patiño, Jairo; Flot, Jean-François; Vanderpoorten, Alain

    2018-02-22

    Morphometrics, the assignment of quantities to biological shapes, is a powerful tool to address taxonomic, evolutionary, functional and developmental questions. We propose a novel method for shape quantification of complex modular architecture in thalloid plants, whose extremely reduced morphologies, combined with the lack of a formal framework for thallus description, have long rendered taxonomic and evolutionary studies extremely challenging. Using graph theory, thalli are described as hierarchical series of nodes and edges, allowing for accurate, homologous and repeatable measurements of widths, lengths and angles. The computer program MorphoSnake was developed to extract the skeleton and contours of a thallus and automatically acquire, at each level of organization, width, length, angle and sinuosity measurements. Through the quantification of leaf architecture in Hymenophyllum ferns (Polypodiopsida) and a fully worked example of integrative taxonomy in the taxonomically challenging thalloid liverwort genus Riccardia, we show that MorphoSnake is applicable to all ramified plants. This new possibility of acquiring large numbers of quantitative traits in plants with complex modular architectures opens new perspectives of applications, from the development of rapid species identification tools to evolutionary analyses of adaptive plasticity. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
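
    A toy Python illustration (not MorphoSnake itself) of the graph-based description used above: nodes carry 2-D coordinates, edges are thallus segments, and segment lengths and the angle at a branching node follow from simple vector geometry; the node names and coordinates are invented.

      import math

      # Toy thallus graph: node -> (x, y) coordinates, plus edges between nodes
      nodes = {"base": (0.0, 0.0), "fork": (0.0, 5.0),
               "tip_left": (-3.0, 9.0), "tip_right": (2.5, 9.5)}
      edges = [("base", "fork"), ("fork", "tip_left"), ("fork", "tip_right")]

      def length(a, b):
          (xa, ya), (xb, yb) = nodes[a], nodes[b]
          return math.hypot(xb - xa, yb - ya)

      def branch_angle(center, a, b):
          """Angle (degrees) between the two branches leaving `center`."""
          xc, yc = nodes[center]
          va = (nodes[a][0] - xc, nodes[a][1] - yc)
          vb = (nodes[b][0] - xc, nodes[b][1] - yc)
          cosang = (va[0] * vb[0] + va[1] * vb[1]) / (math.hypot(*va) * math.hypot(*vb))
          return math.degrees(math.acos(cosang))

      for a, b in edges:
          print(f"{a}-{b}: length {length(a, b):.2f}")
      print("branching angle at fork:", round(branch_angle("fork", "tip_left", "tip_right"), 1), "deg")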

  4. Quantification of ontogenetic allometry in ammonoids.

    Science.gov (United States)

    Korn, Dieter

    2012-01-01

    Ammonoids are well-known objects used for studies on ontogeny and phylogeny, but a quantification of ontogenetic change has not yet been carried out. Their planispirally coiled conchs allow for a study of "longitudinal" ontogenetic data, that is data of ontogenetic trajectories that can be obtained from a single specimen. Therefore, they provide a good model for ontogenetic studies of geometry in other shelled organisms. Using modifications of three cardinal conch dimensions, computer simulations can model artificial conchs. The trajectories of ontogenetic allometry of these simulations can be analyzed in great detail in a theoretical morphospace. A method for the classification of conch ontogeny and quantification of the degree of allometry is proposed. Using high-precision cross-sections, the allometric conch growth of real ammonoids can be documented and compared. The members of the Ammonoidea show a wide variety of allometric growth, ranging from near isometry to monophasic, biphasic, or polyphasic allometry. Selected examples of Palaeozoic and Mesozoic ammonoids are shown with respect to their degree of change during ontogeny of the conch. © 2012 Wiley Periodicals, Inc.
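
    A minimal Python sketch of the kind of conch simulation described above: a logarithmic-spiral model in which the whorl expansion rate changes during ontogeny, producing biphasic allometric rather than isometric growth; the parameter names and values are illustrative, not those of the study.

      import math

      def conch_radius(theta, r0=1.0, expansion_juvenile=2.0, expansion_adult=1.6,
                       switch_revolutions=3.0):
          """Aperture radius after `theta` radians of coiling.

          Biphasic allometric model: the whorl expansion rate per revolution
          drops from a juvenile to an adult value after `switch_revolutions`.
          """
          revs = theta / (2.0 * math.pi)
          if revs <= switch_revolutions:
              return r0 * expansion_juvenile ** revs
          r_switch = r0 * expansion_juvenile ** switch_revolutions
          return r_switch * expansion_adult ** (revs - switch_revolutions)

      # Trace the ontogenetic trajectory over six whorls
      for rev in range(7):
          print(rev, round(conch_radius(rev * 2.0 * math.pi), 2))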

  5. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    Science.gov (United States)

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
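
    As an illustration of the quantitative metrics mentioned above, a minimal Python sketch of the body-weight-normalised standard uptake value follows; the input values are hypothetical, and the injected dose is assumed to be decay-corrected to the imaging time point.

      def suv_bw(tissue_activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
          """Body-weight-normalised standard uptake value (SUV).

          SUV = tissue activity concentration / (injected dose / body weight),
          assuming a tissue density of 1 g/mL so that kBq/mL and kBq/g match.
          """
          dose_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
          weight_g = body_weight_kg * 1000.0      # kg -> g
          return tissue_activity_kbq_per_ml / (dose_kbq / weight_g)

      # Hypothetical lesion: 12 kBq/mL uptake, 350 MBq injected, 75 kg patient
      print(round(suv_bw(12.0, 350.0, 75.0), 2))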

  6. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon

    2015-01-01

    human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification...

  7. Molecular quantification of genes encoding for green-fluorescent proteins

    DEFF Research Database (Denmark)

    Felske, A; Vandieken, V; Pauling, B V

    2003-01-01

    A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification is pro...... PCR strategy is a highly specific and sensitive way to monitor recombinant DNA in environments like the efflux of a biotechnological plant....

  8. Interconnected networks

    CERN Document Server

    2016-01-01

    This volume provides an introduction to and overview of the emerging field of interconnected networks which include multi layer or multiplex networks, as well as networks of networks. Such networks present structural and dynamical features quite different from those observed in isolated networks. The presence of links between different networks or layers of a network typically alters the way such interconnected networks behave – understanding the role of interconnecting links is therefore a crucial step towards a more accurate description of real-world systems. While examples of such dissimilar properties are becoming more abundant – for example regarding diffusion, robustness and competition – the root of such differences remains to be elucidated. Each chapter in this topical collection is self-contained and can be read on its own, thus making it also suitable as reference for experienced researchers wishing to focus on a particular topic.

  9. Self-organizing networks

    DEFF Research Database (Denmark)

    Marchetti, Nicola; Prasad, Neeli R.; Johansson, Johan

    2010-01-01

    In this paper, a general overview of Self-Organizing Networks (SON), and the rationale and state-of-the-art of wireless SON are first presented. The technical and business requirements are then briefly treated, and the research challenges within the field of SON are highlighted. Thereafter......, the relation between SON and Cognitive Networks (CN) is covered. At last, the application of Algorithmic Information Theory (AIT) as a possible theoretical tool to support SON in addressing the growing complexity of networks is discussed....

  10. Connected networks economy

    Energy Technology Data Exchange (ETDEWEB)

    Hamerak, K.

    1981-05-05

    The two foremost requirements for interconnected power systems are constancy of frequency and constancy of voltage. Because there is a rigid relation between the mains frequency and the rotational speed of the synchronous generators, frequency changes in the network lead to active-load alterations; control systems for power networks are therefore needed. In practice they are designed for automatic operation. Proportional controllers are the most useful for controlling turbine speed. Automatic control also has a beneficial influence on frequency stability. Peak loads in interconnected networks are covered by pumped-storage power plants.

  11. Evolving production network structures

    DEFF Research Database (Denmark)

    Grunow, Martin; Gunther, H.O.; Burdenik, H.

    2007-01-01

    When deciding about future production network configurations, the current structures have to be taken into account. Further, core issues such as the maturity of the products and the capacity requirements for test runs and ramp-ups must be incorporated. Our approach is based on optimization...... modelling and assigns products and capacity expansions to production sites under the above constraints. It also considers the production complexity at the individual sites and the flexibility of the network. Our implementation results for a large manufacturing network reveal substantial possible cost...... reductions compared to the traditional manual planning results of our industrial partner....

  12. Uncertainty Quantification of CFD Data Generated for a Model Scramjet Isolator Flowfield

    Science.gov (United States)

    Baurle, R. A.; Axdahl, E. L.

    2017-01-01

    Computational fluid dynamics is now considered to be an indispensable tool for the design and development of scramjet engine components. Unfortunately, the quantification of uncertainties is rarely addressed with anything other than sensitivity studies, so the degree of confidence associated with the numerical results remains exclusively with the subject matter expert that generated them. This practice must be replaced with a formal uncertainty quantification process for computational fluid dynamics to play an expanded role in the system design, development, and flight certification process. Given the limitations of current hypersonic ground test facilities, this expanded role is believed to be a requirement by some in the hypersonics community if scramjet engines are to be given serious consideration as a viable propulsion system. The present effort describes a simple, relatively low cost, nonintrusive approach to uncertainty quantification that includes the basic ingredients required to handle both aleatoric (random) and epistemic (lack of knowledge) sources of uncertainty. The nonintrusive nature of the approach allows the computational fluid dynamicist to perform the uncertainty quantification with the flow solver treated as a "black box". Moreover, a large fraction of the process can be automated, allowing the uncertainty assessment to be readily adapted into the engineering design and development workflow. In the present work, the approach is applied to a model scramjet isolator problem where the desire is to validate turbulence closure models in the presence of uncertainty. In this context, the relevant uncertainty sources are determined and accounted for to allow the analyst to delineate turbulence model-form errors from other sources of uncertainty associated with the simulation of the facility flow.
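
    A minimal Python sketch of the nonintrusive, black-box propagation idea under stated assumptions: aleatoric inflow variability is sampled in an inner loop, an epistemic (interval-valued) turbulence-model constant in an outer loop, and the flow solver is replaced by a stand-in function; all names, distributions and intervals are hypothetical, not values from the study.

      import random

      def flow_solver(mach_in, wall_temp, turb_const):
          """Stand-in for the black-box CFD solver; returns a mock quantity of interest."""
          return 0.8 * mach_in - 0.001 * wall_temp + 2.0 * turb_const

      def propagate(n_epistemic=20, n_aleatoric=200, seed=0):
          rng = random.Random(seed)
          envelopes = []
          for _ in range(n_epistemic):
              # Epistemic: model constant known only to lie within an interval
              turb_const = rng.uniform(0.07, 0.11)
              samples = sorted(
                  flow_solver(rng.gauss(2.5, 0.05),    # aleatoric inflow Mach number
                              rng.gauss(600.0, 20.0),  # aleatoric wall temperature
                              turb_const)
                  for _ in range(n_aleatoric))
              envelopes.append((samples[int(0.025 * n_aleatoric)],
                                samples[int(0.975 * n_aleatoric)]))
          # Outer envelope over epistemic realisations ("probability box" bounds)
          return min(lo for lo, _ in envelopes), max(hi for _, hi in envelopes)

      print(propagate())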

  13. Survey and Evaluate Uncertainty Quantification Methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Guang; Engel, David W.; Eslinger, Paul W.

    2012-02-01

    The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry and academic institutions that will develop and deploy state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately the widespread deployment to hundreds of power plants. The CCSI Toolset will provide end users in industry with a comprehensive, integrated suite of scientifically validated models with uncertainty quantification, optimization, risk analysis and decision making capabilities. The CCSI Toolset will incorporate commercial and open-source software currently in use by industry and will also develop new software tools as necessary to fill technology gaps identified during execution of the project. The CCSI Toolset will (1) enable promising concepts to be more quickly identified through rapid computational screening of devices and processes; (2) reduce the time to design and troubleshoot new devices and processes; (3) quantify the technical risk in taking technology from laboratory-scale to commercial-scale; and (4) stabilize deployment costs more quickly by replacing some of the physical operational tests with virtual power plant simulations. The goal of CCSI is to deliver a toolset that can simulate the scale-up of a broad set of new carbon capture technologies from laboratory scale to full commercial scale. To provide a framework around which the toolset can be developed and demonstrated, we will focus on three Industrial Challenge Problems (ICPs) related to carbon capture technologies relevant to U.S. pulverized coal (PC) power plants. Post combustion capture by solid sorbents is the technology focus of the initial ICP (referred to as ICP A). The goal of the uncertainty quantification (UQ) task (Task 6) is to provide a set of capabilities to the user community for the quantification of uncertainties associated with the carbon

  14. GNS3 network simulation guide

    CERN Document Server

    Welsh, Chris

    2013-01-01

    GNS3 Network Simulation Guide is an easy-to-follow yet comprehensive guide which is written in a tutorial format helping you grasp all the things you need for accomplishing your certification or simulation goal. If you are a networking professional who wants to learn how to simulate networks using GNS3, this book is ideal for you. The introductory examples within the book only require minimal networking knowledge, but as the book progresses onto more advanced topics, users will require knowledge of TCP/IP and routing.

  15. Network maintenance

    CERN Multimedia

    IT Department

    2009-01-01

    A site-wide network maintenance operation has been scheduled for Saturday 28 February. Most of the network devices of the General Purpose network will be upgraded to a newer software version, in order to improve our network monitoring capabilities. This will result in a series of short (2-5 minute) random interruptions everywhere on the CERN sites throughout the day. This upgrade will not affect the Computer Centre itself, Building 613, the Technical Network and the LHC experiments' dedicated networks at the pits. Should you need more details on this intervention, please contact Netops by phone 74927 or email mailto:Netops@cern.ch. IT/CS Group

  16. Network maintenance

    CERN Multimedia

    GS Department

    2009-01-01

    A site-wide network maintenance operation has been scheduled for Saturday 28 February. Most of the network devices of the general purpose network will be upgraded to a newer software version, in order to improve our network monitoring capabilities. This will result in a series of short (2-5 minute) random interruptions everywhere on the CERN sites throughout the day. This upgrade will not affect the Computer Centre itself, Building 613, the Technical Network and the LHC experiments' dedicated networks at the pits. For further details of this intervention, please contact Netops by phone 74927 or e-mail mailto:Netops@cern.ch. IT/CS Group

  17. Network Ambivalence

    Directory of Open Access Journals (Sweden)

    Patrick Jagoda

    2015-08-01

    Full Text Available The language of networks now describes everything from the Internet to the economy to terrorist organizations. In distinction to a common view of networks as a universal, originary, or necessary form that promises to explain everything from neural structures to online traffic, this essay emphasizes the contingency of the network imaginary. Network form, in its role as our current cultural dominant, makes scarcely imaginable the possibility of an alternative or an outside uninflected by networks. If so many things and relationships are figured as networks, however, then what is not a network? If a network points towards particular logics and qualities of relation in our historical present, what others might we envision in the future? In many ways, these questions are unanswerable from within the contemporary moment. Instead of seeking an avant-garde approach (to move beyond networks) or opting out of networks (in some cases, to recover elements of pre-networked existence), this essay proposes a third orientation: one of ambivalence that operates as a mode of extreme presence. I propose the concept of "network aesthetics," which can be tracked across artistic media and cultural forms, as a model, style, and pedagogy for approaching interconnection in the twenty-first century. The following essay is excerpted from Network Ambivalence (forthcoming from University of Chicago Press).

  18. Quantification of Structure from Medical Images

    DEFF Research Database (Denmark)

    Qazi, Arish Asif

    In this thesis, we present automated methods that quantify information from medical images; information that is intended to assist and enable clinicians gain a better understanding of the underlying pathology. The first part of the thesis presents methods that analyse the articular cartilage......, and information beyond that of traditional morphometric measures. The thesis also proposes a fully automatic and generic statistical framework for identifying biologically interpretable regions of difference (ROD) between two groups of biological objects, attributed by anatomical differences or changes relating...... to pathology, without a priori knowledge about the location, extent, or topology of the ROD. Based on quantifications from both morphometric and textural based imaging markers, our method has identified the most pathological regions in the articular cartilage. The remaining part of the thesis presents methods...

  19. Recurrence quantification analysis theory and best practices

    CERN Document Server

    Webber, Charles L., Jr.; Marwan, Norbert

    2015-01-01

    The analysis of recurrences in dynamical systems by using recurrence plots and their quantification is still an emerging field.  Over the past decades recurrence plots have proven to be valuable data visualization and analysis tools in the theoretical study of complex, time-varying dynamical systems as well as in various applications in biology, neuroscience, kinesiology, psychology, physiology, engineering, physics, geosciences, linguistics, finance, economics, and other disciplines.   This multi-authored book intends to comprehensively introduce and showcase recent advances as well as established best practices concerning both theoretical and practical aspects of recurrence plot based analysis.  Edited and authored by leading researcher in the field, the various chapters address an interdisciplinary readership, ranging from theoretical physicists to application-oriented scientists in all data-providing disciplines.

  20. Uncertainty quantification in wind farm flow models

    DEFF Research Database (Denmark)

    Murcia Leon, Juan Pablo

    uncertainties through a model chain are presented and applied to several wind energy related problems such as: annual energy production estimation, wind turbine power curve estimation, wake model calibration and validation, and estimation of lifetime equivalent fatigue loads on a wind turbine. Statistical...... the uncertainty in the lifetime performance of a wind turbine under realistic inflow conditions. Operational measurements of several large offshore wind farms are used to perform model calibration and validation of several stationary wake models. These results provide a guideline to identify the regions in which......This thesis formulates a framework to perform uncertainty quantification within wind energy. This framework has been applied to some of the most common models used to estimate the annual energy production in the planning stages of a wind energy project. Efficient methods to propagate input...

  1. Multispectral image analysis for algal biomass quantification.

    Science.gov (United States)

    Murphy, Thomas E; Macon, Keith; Berberoglu, Halil

    2013-01-01

    This article reports a novel multispectral image processing technique for rapid, noninvasive quantification of biomass concentration in attached and suspended algae cultures. Monitoring the biomass concentration is critical for efficient production of biofuel feedstocks, food supplements, and bioactive chemicals. Particularly, noninvasive and rapid detection techniques can significantly aid in providing delay-free process control feedback in large-scale cultivation platforms. In this technique, three-band spectral images of Anabaena variabilis cultures were acquired and separated into their red, green, and blue components. A correlation between the magnitude of the green component and the areal biomass concentration was generated. The correlation predicted the biomass concentrations of independently prepared attached and suspended cultures with errors of 7 and 15%, respectively, and the effects of varying lighting conditions and background color were investigated. This method can provide necessary feedback for dilution and harvesting strategies to maximize photosynthetic conversion efficiency in large-scale operation. © 2013 American Institute of Chemical Engineers.
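
    A minimal NumPy sketch of the green-channel calibration idea under stated assumptions: the RGB image is split into bands, the green component is averaged over the culture region, and a previously fitted linear correlation converts it to areal biomass; the calibration coefficients and the synthetic image are placeholders, not values from the study.

      import numpy as np

      def biomass_from_rgb(image, slope=-0.042, intercept=9.5):
          """Estimate areal biomass (g/m^2) from the mean green-channel intensity.

          `image` is an (H, W, 3) uint8 RGB array; `slope` and `intercept` are
          hypothetical coefficients from a calibration against known cultures
          (darker green, i.e. lower intensity, corresponds to denser biomass).
          """
          green = image[:, :, 1].astype(float)
          return slope * green.mean() + intercept

      # Synthetic 100x100 culture image with a mid-level green signal
      demo = np.zeros((100, 100, 3), dtype=np.uint8)
      demo[:, :, 1] = 120
      print(round(biomass_from_rgb(demo), 2), "g/m^2 (illustrative)")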

  2. Quantification Methods of Management Skills in Shipping

    Directory of Open Access Journals (Sweden)

    Riana Iren RADU

    2012-04-01

    Full Text Available Romania cannot overcome the financial crisis without business growth, without finding opportunities for economic development and without attracting investment into the country. Successful managers find ways to overcome situations of uncertainty. The purpose of this paper is to determine the managerial skills developed by the Romanian fluvial shipping company NAVROM (hereinafter CNFR NAVROM SA), compared with ten other major competitors in the same domain, using the financial information of these companies for the years 2005-2010. To carry out the work, quantification methods of managerial skills are applied to CNFR NAVROM SA Galati, Romania, for example the analysis of financial performance management based on profitability ratios, net profit margin, supplier management and turnover.

  3. Uncertainty quantification of acoustic emission filtering techniques

    Science.gov (United States)

    Zárate, Boris A.; Caicedo, Juan M.; Ziehl, Paul

    2012-04-01

    This paper compares six different filtering protocols used in Acoustic Emission (AE) monitoring of fatigue crack growth. The filtering protocols are combinations of three different filtering techniques, which are based on Swansong-like filters and load filters. The filters are compared deterministically and probabilistically. The deterministic comparison is based on the coefficient of determination of the resulting AE data, while the probabilistic comparison is based on the quantification of the uncertainty of the different filtering protocols. The uncertainty of the filtering protocols is quantified by calculating the entropy of the probability distribution of some AE and fracture mechanics parameters for the given filtering protocol. The methodology is useful in cases where several filtering protocols are available and there is no reason to choose one over the others. Acoustic Emission data from a compact tension specimen tested under cyclic load are used for the comparison.
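
    A minimal Python sketch of the entropy-based comparison described above: a histogram of an AE parameter retained by a given filtering protocol is turned into a probability distribution and its Shannon entropy computed, with lower entropy read as lower uncertainty; the amplitude values and bin count are illustrative.

      import math
      from collections import Counter

      def shannon_entropy(values, n_bins=10):
          """Shannon entropy (bits) of a histogram of `values`."""
          lo, hi = min(values), max(values)
          width = (hi - lo) / n_bins or 1.0
          counts = Counter(min(int((v - lo) / width), n_bins - 1) for v in values)
          total = len(values)
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      # Illustrative AE amplitudes (dB) retained by two hypothetical filtering protocols
      protocol_a = [42, 44, 45, 45, 46, 47, 48, 48, 49, 50, 51, 55]
      protocol_b = [40, 43, 47, 52, 58, 61, 66, 70, 74, 79, 85, 90]
      print("protocol A entropy:", round(shannon_entropy(protocol_a), 3))
      print("protocol B entropy:", round(shannon_entropy(protocol_b), 3))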

  4. Towards objective quantification of the Tinetti test.

    Science.gov (United States)

    Panella, Lorenzo; Lombardi, Remo; Buizza, Angelo; Gandolfi, Roberto; Pizzagalli, Paola

    2002-01-01

    The Tinetti test is a widespread test for assessing motor control in the elderly, which could also be usefully applied in neurology. At present it uses a qualitative measurement scale. As a first step towards its objective quantification, trunk inclination was measured during the test by two inclinometers and quantified by descriptive parameters. The 95th or 5th percentiles of parameter distributions in normal subjects (no.=150) were taken as limits of normality, and parameters computed on 130 institutionalised elderly people were compared to these limits, to test the parameters' discriminatory power. The distributions of many parameters were statistically different in normal subjects and patients. These results suggest that this approach is a promising tool for objective evaluation of the Tinetti test.

  5. Recurrence quantification analysis of global stock markets

    Science.gov (United States)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
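
    A minimal NumPy sketch of the recurrence-plot construction underlying RQA: time-delay embedding of a return series, a distance threshold defining recurrences, and the recurrence rate as the simplest RQA measure; the embedding parameters, threshold and synthetic return series are illustrative, and measures such as determinism additionally require analysis of diagonal line structures.

      import numpy as np

      def recurrence_rate(series, dim=3, delay=1, radius=0.1):
          """Recurrence rate of a time series after time-delay embedding."""
          x = np.asarray(series, dtype=float)
          n = len(x) - (dim - 1) * delay
          embedded = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
          # Pairwise Euclidean distances between embedded state vectors
          dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
          return (dists < radius).mean()

      # Illustrative input: daily log-returns of a synthetic index
      rng = np.random.default_rng(1)
      returns = rng.normal(0.0, 0.01, size=500)
      print(round(recurrence_rate(returns, radius=0.02), 3))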

  6. Security Shift in Future Network Architectures

    NARCIS (Netherlands)

    Hartog, T.; Schotanus, H.A.; Verkoelen, C.A.A.

    2010-01-01

    In current practice military communication infrastructures are deployed as stand-alone networked information systems. Network-Enabled Capabilities (NEC) and combined military operations lead to new requirements which current communication architectures cannot deliver. This paper informs IT

  7. Hindcasting of storm waves using neural networks

    Digital Repository Service at National Institute of Oceanography (India)

    Rao, S.; Mandal, S.

    of any exogenous input requirement makes the network attractive. A neural network is an information processing system modeled on the structure of the human brain. Its merit is the ability to deal with fuzzy information whose interrelation is ambiguous...

  8. A software tool for network intrusion detection

    CSIR Research Space (South Africa)

    Van der Walt, C

    2012-10-01

    Full Text Available This presentation illustrates how a recently developed software tool enables operators to easily monitor a network and detect intrusions without requiring expert knowledge of network intrusion detections....

  9. Uncertainty Quantification of Equilibrium Climate Sensitivity

    Science.gov (United States)

    Lucas, D. D.; Brandon, S. T.; Covey, C. C.; Domyancic, D. M.; Johannesson, G.; Klein, R.; Tannahill, J.; Zhang, Y.

    2011-12-01

    Significant uncertainties exist in the temperature response of the climate system to changes in the levels of atmospheric carbon dioxide. We report progress to quantify the uncertainties of equilibrium climate sensitivity using perturbed parameter ensembles of the Community Earth System Model (CESM). Through a strategic initiative at the Lawrence Livermore National Laboratory, we have been developing uncertainty quantification (UQ) methods and incorporating them into a software framework called the UQ Pipeline. We have applied this framework to generate a large number of ensemble simulations using Latin Hypercube and other schemes to sample up to three dozen uncertain parameters in the atmospheric (CAM) and sea ice (CICE) model components of CESM. The parameters sampled are related to many highly uncertain processes, including deep and shallow convection, boundary layer turbulence, cloud optical and microphysical properties, and sea ice albedo. An extensive ensemble database comprised of more than 46,000 simulated climate-model-years of recent climate conditions has been assembled. This database is being used to train surrogate models of CESM responses and to perform statistical calibrations of the CAM and CICE models given observational data constraints. The calibrated models serve as a basis for propagating uncertainties forward through climate change simulations using a slab ocean model configuration of CESM. This procedure is being used to quantify the probability density function of equilibrium climate sensitivity accounting for uncertainties in climate model processes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was funded by the Uncertainty Quantification Strategic Initiative Laboratory Directed Research and Development Project at LLNL under project tracking code 10-SI-013. (LLNL-ABS-491765)
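
    A minimal, self-contained Python sketch of Latin hypercube sampling as used for such perturbed-parameter ensembles: each parameter range is split into equiprobable strata, one value is drawn per stratum, and the strata are shuffled independently across parameters; the parameter names and ranges are hypothetical, not those of CAM or CICE.

      import random

      def latin_hypercube(param_ranges, n_samples, seed=0):
          """Return `n_samples` sample points, one per stratum for each parameter."""
          rng = random.Random(seed)
          columns = []
          for low, high in param_ranges.values():
              strata = list(range(n_samples))
              rng.shuffle(strata)                 # decorrelate strata across parameters
              width = (high - low) / n_samples
              columns.append([low + (s + rng.random()) * width for s in strata])
          names = list(param_ranges)
          return [dict(zip(names, point)) for point in zip(*columns)]

      # Hypothetical uncertain parameters and ranges
      ranges = {"deep_convection_timescale_s": (1800.0, 28800.0),
                "cloud_droplet_radius_um": (8.0, 14.0),
                "sea_ice_albedo": (0.68, 0.82)}
      for member in latin_hypercube(ranges, n_samples=5):
          print(member)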

  10. Convex geometry of quantum resource quantification

    Science.gov (United States)

    Regula, Bartosz

    2018-01-01

    We introduce a framework unifying the mathematical characterisation of different measures of general quantum resources and allowing for a systematic way to define a variety of faithful quantifiers for any given convex quantum resource theory. The approach allows us to describe many commonly used measures such as matrix norm-based quantifiers, robustness measures, convex roof-based measures, and witness-based quantifiers together in a common formalism based on the convex geometry of the underlying sets of resource-free states. We establish easily verifiable criteria for a measure to possess desirable properties such as faithfulness and strong monotonicity under relevant free operations, and show that many quantifiers obtained in this framework indeed satisfy them for any considered quantum resource. We derive various bounds and relations between the measures, generalising and providing significantly simplified proofs of results found in the resource theories of quantum entanglement and coherence. We also prove that the quantification of resources in this framework simplifies for pure states, allowing us to obtain more easily computable forms of the considered measures, and show that many of them are in fact equal on pure states. Further, we investigate the dual formulation of resource quantifiers, which provide a characterisation of the sets of resource witnesses. We present an explicit application of the results to the resource theories of multi-level coherence, entanglement of Schmidt number k, multipartite entanglement, as well as magic states, providing insight into the quantification of the four resources by establishing novel quantitative relations and introducing new quantifiers, such as a measure of entanglement of Schmidt number k which generalises the convex roof–extended negativity, a measure of k-coherence which generalises the \

  11. StakeMeter: Value-Based Stakeholder Identification and Quantification Framework for Value-Based Software Systems

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N. A.; Zaheer, Kashif Bin

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called ‘StakeMeter’. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error. PMID:25799490

  12. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Directory of Open Access Journals (Sweden)

    Muhammad Imran Babar

    Full Text Available Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  13. StakeMeter: value-based stakeholder identification and quantification framework for value-based software systems.

    Science.gov (United States)

    Babar, Muhammad Imran; Ghazali, Masitah; Jawawi, Dayang N A; Bin Zaheer, Kashif

    2015-01-01

    Value-based requirements engineering plays a vital role in the development of value-based software (VBS). Stakeholders are the key players in the requirements engineering process, and the selection of critical stakeholders for the VBS systems is highly desirable. Based on the stakeholder requirements, the innovative or value-based idea is realized. The quality of the VBS system is associated with the concrete set of valuable requirements, and the valuable requirements can only be obtained if all the relevant valuable stakeholders participate in the requirements elicitation phase. The existing value-based approaches focus on the design of the VBS systems. However, the focus on the valuable stakeholders and requirements is inadequate. The current stakeholder identification and quantification (SIQ) approaches are neither state-of-the-art nor systematic for the VBS systems. The existing approaches are time-consuming, complex and inconsistent which makes the initiation process difficult. Moreover, the main motivation of this research is that the existing SIQ approaches do not provide the low level implementation details for SIQ initiation and stakeholder metrics for quantification. Hence, keeping in view the existing SIQ problems, this research contributes in the form of a new SIQ framework called 'StakeMeter'. The StakeMeter framework is verified and validated through case studies. The proposed framework provides low-level implementation guidelines, attributes, metrics, quantification criteria and application procedure as compared to the other methods. The proposed framework solves the issues of stakeholder quantification or prioritization, higher time consumption, complexity, and process initiation. The framework helps in the selection of highly critical stakeholders for the VBS systems with less judgmental error.

  14. Network neuroscience.

    Science.gov (United States)

    Bassett, Danielle S; Sporns, Olaf

    2017-02-23

    Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system.

  15. Organizational Networks

    DEFF Research Database (Denmark)

    Grande, Bård; Sørensen, Ole Henning

    1998-01-01

    The paper focuses on the concept of organizational networks. Four different uses of the concept of organizational network are identified and critically discussed. Special focus is placed on how information and communication technologies as communication mediators and cognitive pictures influence...

  16. Network workshop

    DEFF Research Database (Denmark)

    Bruun, Jesper; Evans, Robert Harry

    2014-01-01

    This paper describes the background for, realisation of and author reflections on a network workshop held at ESERA2013. As a new research area in science education, networks offer a unique opportunity to visualise and find patterns and relationships in complicated social or academic network data...... research community. With this workshop, participants were offered a way into network science based on authentic educational research data. The workshop was constructed as an inquiry lesson with emphasis on user autonomy. Learning activities had participants choose to work with one of two cases of networks...... network methodology in one’s research might supersede the perceived benefits of doing so. As a response to that problem, we argue that workshops can act as a road towards meaningful engagement with networks and highlight that network methodology promises new ways of interpreting data to answer questions...

  17. How women organize social networks different from men

    CERN Document Server

    Szell, Michael

    2013-01-01

    Superpositions of social networks, such as communication, friendship, or trade networks, are called multiplex networks, forming the structural backbone of human societies. Novel datasets now allow quantification and exploration of multiplex networks. Here we study gender-specific differences of a multiplex network from a complete behavioral dataset of an online game society of about 300,000 players. On the individual level females perform better economically and are less risk-taking than males. Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females. On the network level females have more communication partners, who themselves are less connected than partners of males. We find a strong homophily effect for females and higher clustering coefficients of females in trade and attack networks. Cooperative links between males are under-represented, reflecting competition for resources among males. These results confirm quantitatively that fema...

  18. Software requirements

    CERN Document Server

    Wiegers, Karl E

    2003-01-01

    Without formal, verifiable software requirements-and an effective system for managing them-the programs that developers think they've agreed to build often will not be the same products their customers are expecting. In SOFTWARE REQUIREMENTS, Second Edition, requirements engineering authority Karl Wiegers amplifies the best practices presented in his original award-winning text?now a mainstay for anyone participating in the software development process. In this book, you'll discover effective techniques for managing the requirements engineering process all the way through the development cy

  19. Social Networks

    OpenAIRE

    Martí, Joan; Zenou, Yves

    2009-01-01

    We survey the literature on social networks by putting together the economics, sociological and physics/applied mathematics approaches, showing their similarities and differences. We expose, in particular, the two main ways of modeling network formation. While the physics/applied mathematics approach is capable of reproducing most observed networks, it does not explain why they emerge. On the contrary, the economics approach is very precise in explaining why networks emerge but does a poor jo...

  20. RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome

    Directory of Open Access Journals (Sweden)

    Dewey Colin N

    2011-08-01

    Full Text Available Abstract Background RNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments. Results We present RSEM, a user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene. Conclusions RSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost

  1. A User Driven Dynamic Circuit Network Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Guok, Chin; Robertson, David; Chaniotakis, Evangelos; Thompson, Mary; Johnston, William; Tierney, Brian

    2008-10-01

    The requirements for network predictability are becoming increasingly critical to the DoE science community where resources are widely distributed and collaborations are world-wide. To accommodate these emerging requirements, the Energy Sciences Network has established a Science Data Network to provide user driven guaranteed bandwidth allocations. In this paper we outline the design, implementation, and secure coordinated use of such a network, as well as some lessons learned.

  2. Network Security Validation Using Game Theory

    Science.gov (United States)

    Papadopoulou, Vicky; Gregoriades, Andreas

    Non-functional requirements (NFR) such as network security have recently gained widespread attention in distributed information systems. Despite their importance, however, there is no systematic approach to validating these requirements, given the complexity and uncertainty characterizing modern networks. Traditionally, network security requirements specification has been the result of a reactive process. This, however, limited the immunity of the distributed systems that depended on these networks. Security requirements specification needs a proactive approach. Network infrastructure is constantly under attack by hackers and malicious software that aim to break into computers. To combat these threats, network designers need sophisticated security validation techniques that will guarantee the minimum level of security for their future networks. This paper presents a game-theoretic approach to security requirements validation. An introduction to game theory is presented along with an example that demonstrates the application of the approach.
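
    A minimal Python sketch of the game-theoretic idea under stated assumptions: a two-player zero-sum game between an attacker choosing a target and a defender allocating monitoring, solved with the closed-form mixed-strategy solution for 2x2 games; the payoff entries are hypothetical expected losses, not values from the paper.

      def defender_mixed_strategy(a, b, c, d):
          """Defender's optimal mixed strategy and game value for a 2x2 zero-sum game.

          Loss matrix to the defender (rows = node defended, columns = node attacked):
              [[a, b],
               [c, d]]
          Assumes the game has no saddle point, so a mixed strategy is required.
          """
          p = (d - c) / (a - b - c + d)            # probability of defending node 1
          value = (a * d - b * c) / (a - b - c + d)
          return p, value

      # Hypothetical expected losses: defending the attacked node limits the damage
      p, v = defender_mixed_strategy(a=2.0, b=9.0, c=8.0, d=3.0)
      print(f"defend node 1 with probability {p:.2f}; expected loss {v:.2f}")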

  3. A Security Architecture for Health Information Networks

    OpenAIRE

    Kailar, Rajashekar

    2007-01-01

    Health information network security needs to balance exacting security controls with practicality, and ease of implementation in today’s healthcare enterprise. Recent work on ‘nationwide health information network’ architectures has sought to share highly confidential data over insecure networks such as the Internet. Using basic patterns of health network data flow and trust models to support secure communication between network nodes, we abstract network security requirements to a core set t...

  4. Methodological considerations in quantification of oncological FDG PET studies.

    Science.gov (United States)

    Vriens, Dennis; Visser, Eric P; de Geus-Oei, Lioe-Fee; Oyen, Wim J G

    2010-07-01

    This review aims to provide insight into the factors that influence quantification of glucose metabolism by FDG PET images in oncology as well as their influence on repeated measures studies (i.e. treatment response assessment), offering improved understanding both for clinical practice and research. Structural PubMed searches have been performed for the many factors affecting quantification of glucose metabolism by FDG PET. Review articles and references lists have been used to supplement the search findings. Biological factors such as fasting blood glucose level, FDG uptake period, FDG distribution and clearance, patient motion (breathing) and patient discomfort (stress) all influence quantification. Acquisition parameters should be adjusted to maximize the signal to noise ratio without exposing the patient to a higher than strictly necessary radiation dose. This is especially challenging in pharmacokinetic analysis, where the temporal resolution is of significant importance. The literature is reviewed on the influence of attenuation correction on parameters for glucose metabolism, the effect of motion, metal artefacts and contrast agents on quantification of CT attenuation-corrected images. Reconstruction settings (analytical versus iterative reconstruction, post-reconstruction filtering and image matrix size) all potentially influence quantification due to artefacts, noise levels and lesion size dependency. Many region of interest definitions are available, but increased complexity does not necessarily result in improved performance. Different methods for the quantification of the tissue of interest can introduce systematic and random inaccuracy. This review provides an up-to-date overview of the many factors that influence quantification of glucose metabolism by FDG PET.

  5. Network Coding

    Indian Academy of Sciences (India)

    Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming.
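
    A minimal Python sketch of the classic butterfly-network illustration of this observation: an intermediate node XORs two incoming packets instead of forwarding only one of them, and each sink recovers both packets from the coded packet plus the packet it receives directly; the packet contents are arbitrary example bytes.

      def xor_bytes(a: bytes, b: bytes) -> bytes:
          return bytes(x ^ y for x, y in zip(a, b))

      # Source packets destined for both sinks of the butterfly network
      p1 = b"ALPHA"
      p2 = b"OMEGA"

      coded = xor_bytes(p1, p2)   # bottleneck node sends p1 XOR p2 instead of forwarding one packet

      # Sink 1 also receives p1 directly; sink 2 also receives p2 directly
      recovered_at_sink1 = xor_bytes(coded, p1)   # yields p2
      recovered_at_sink2 = xor_bytes(coded, p2)   # yields p1
      assert recovered_at_sink1 == p2 and recovered_at_sink2 == p1
      print(recovered_at_sink1, recovered_at_sink2)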

  6. Energy requirements

    NARCIS (Netherlands)

    Hulzebos, Christian V.; Sauer, Pieter J. J.

    The determination of the appropriate energy and nutritional requirements of a newborn infant requires a clear goal of the energy and other compounds to be administered, valid methods to measure energy balance and body composition, and knowledge of the neonatal metabolic capacities. Providing an

  7. Motion corrected LV quantification based on 3D modelling for improved functional assessment in cardiac MRI.

    Science.gov (United States)

    Liew, Y M; McLaughlin, R A; Chan, B T; Abdul Aziz, Y F; Chee, K H; Ung, N M; Tan, L K; Lai, K W; Ng, S; Lim, E

    2015-04-07

    Cine MRI is a clinical reference standard for the quantitative assessment of cardiac function, but reproducibility is confounded by motion artefacts. We explore the feasibility of a motion corrected 3D left ventricle (LV) quantification method, incorporating multislice image registration into the 3D model reconstruction, to improve reproducibility of 3D LV functional quantification. Multi-breath-hold short-axis and radial long-axis images were acquired from 10 patients and 10 healthy subjects. The proposed framework reduced misalignment between slices to subpixel accuracy (2.88 to 1.21 mm), and improved interstudy reproducibility for 5 important clinical functional measures, i.e. end-diastolic volume, end-systolic volume, ejection fraction, myocardial mass and 3D-sphericity index, as reflected in a reduction in the sample size required to detect statistically significant cardiac changes: a reduction of 21-66%. Our investigation on the optimum registration parameters, including both cardiac time frames and number of long-axis (LA) slices, suggested that a single time frame is adequate for motion correction whereas integrating more LA slices can improve registration and model reconstruction accuracy for improved functional quantification especially on datasets with severe motion artefacts.
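
    For reference, the volumetric functional measures listed above follow directly from the reconstructed 3D volumes; a minimal Python sketch with hypothetical volumes is shown below (the 3D-sphericity index is omitted, as it additionally requires the reconstructed surface geometry).

      def lv_function(edv_ml, esv_ml, myocardial_volume_ml, density_g_per_ml=1.05):
          """Derive basic LV functional measures from end-diastolic/end-systolic volumes."""
          stroke_volume = edv_ml - esv_ml
          ejection_fraction = 100.0 * stroke_volume / edv_ml
          myocardial_mass = myocardial_volume_ml * density_g_per_ml   # typical myocardial density
          return stroke_volume, ejection_fraction, myocardial_mass

      # Hypothetical reconstruction: EDV 140 mL, ESV 60 mL, myocardial volume 110 mL
      sv, ef, mass = lv_function(140.0, 60.0, 110.0)
      print(f"SV {sv:.0f} mL, EF {ef:.1f} %, mass {mass:.0f} g")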

  8. Quantification of video-taped images in microcirculation research using inexpensive imaging software (Adobe Photoshop).

    Science.gov (United States)

    Brunner, J; Krummenauer, F; Lehr, H A

    2000-04-01

    Study end-points in microcirculation research are usually video-taped images rather than numeric computer print-outs. Analysis of these video-taped images for the quantification of microcirculatory parameters usually requires computer-based image analysis systems. Most software programs for image analysis are custom-made, expensive, and limited in their applicability to selected parameters and study end-points. We demonstrate herein that inexpensive, commercially available computer software (Adobe Photoshop), run on a Macintosh G3 computer with an inbuilt graphic capture board, provides versatile, easy-to-use tools for the quantification of digitized video images. Using images obtained by intravital fluorescence microscopy from the pre- and postischemic muscle microcirculation in the skinfold chamber model in hamsters, Photoshop allows simple and rapid quantification (i) of microvessel diameters, (ii) of the functional capillary density and (iii) of postischemic leakage of FITC-labeled high molecular weight dextran from postcapillary venules. We present evidence of the technical accuracy of the software tools and of a high degree of interobserver reliability. Inexpensive commercially available imaging programs (i.e., Adobe Photoshop) provide versatile tools for image analysis with a wide range of potential applications in microcirculation research.

  9. Study of the matrix effect on the PIXE quantification of active pharmaceutical ingredients in different formulations

    Science.gov (United States)

    Bejjani, Alice; Noun, Manale; Soueidan, Maher; Della-Negra, Serge; Abi-Fadel, Edmond; Roumie, Mohammad; Nsouli, Bilal

    2017-09-01

    While the Particle Induced X-ray Emission (PIXE) technique is accurate for quantifying Active Pharmaceutical Ingredients (APIs) via the analysis of their heteroatoms, each drug (formulation) may require a specific quantification procedure due to its distinct matrix composition. The commercial Fludinium® drug, which has two active ingredients, Clidinium Bromide (C22H26NO3Br) and Trifluoperazine Dihydrochloride (C21H24N3F3S.2HCl), was taken as a case study in this work. Different amounts of its APIs and its placebo were mixed to provide various formulations. The matrix effect on the quantification of the three heteroatoms (chlorine, sulfur and bromine) related to the above APIs in different formulations has been studied. In fact, chlorine, which is in its hydrochloride form in the API, was ultimately not considered for calculation due to its instability under the beam. The calculation of the bromine amount via its Kα or Lα rays was found to be independent of the matrix composition and was rapidly done by simple comparison to an external standard. However, the calculation of sulfur, via its Kα, was highly dependent on the matrix composition. Therefore, to achieve an accurate quantification, a more sophisticated calculation method was used by means of the GUPIX code.

  10. Protocol for Quantification of Defects in Natural Fibres for Composites

    Directory of Open Access Journals (Sweden)

    Ulrich Andreas Mortensen

    2014-01-01

    Natural bast-type plant fibres are attracting increasing interest for use in structural composite applications where high-quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu's method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres, giving mean defect contents of 6.9 and 3.9%, a difference which is tested to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu's method is found to be more appropriate than the alternative coefficient-of-variation method. The traditional way of defining defect size by area is compared to the definition of defect size by width, and it is shown that both definitions can be used to give unbiased findings for the comparison between fibre types. Finally, considerations are given with respect to true measures of defect content, the number of determinations, and the number of significant figures used for the descriptive statistics.
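
    As a concrete illustration of the thresholding step named in the protocol, the sketch below applies Otsu's method to a grey-scale fibre micrograph and reports a defect-area fraction. The file name and the assumption that defects appear darker than the surrounding fibre are illustrative; the published protocol combines this with seeded region growing, which is not reproduced here.

```python
# Hedged sketch: Otsu thresholding of a fibre micrograph and a defect-area fraction.
# "flax_fibre.png" and the dark-defect polarity are illustrative assumptions.
from skimage import io, filters, color

image = io.imread("flax_fibre.png")
if image.ndim == 3:                          # drop alpha channel if present, convert to grey
    image = color.rgb2gray(image[..., :3])

threshold = filters.threshold_otsu(image)    # Otsu's method, as named in the record
defects = image < threshold                  # assume defects are darker than the fibre
defect_content = 100.0 * defects.mean()      # defect content as % of imaged area
print(f"defect content: {defect_content:.1f} %")
```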

  11. Quantification of sugars in breakfast cereals using capillary electrophoresis.

    Science.gov (United States)

    Toutounji, Michelle R; Van Leeuwen, Matthew P; Oliver, James D; Shrestha, Ashok K; Castignolles, Patrice; Gaborieau, Marianne

    2015-05-18

    About 80% of the Australian population consumes breakfast cereal (BC) at least five days a week. With high prevalence rates of obesity and other diet-related diseases, improved methods for monitoring sugar levels in breakfast cereals would be useful in nutrition research. The heterogeneity of the complex matrix of BCs can make carbohydrate analysis challenging or necessitate tedious sample preparation, leading to potential sugar loss or starch degradation into sugars. A recently established, simple and robust free-solution capillary electrophoresis (CE) method was applied to 13 BCs (in Australia) and compared with several established methods for the quantification of carbohydrates. Carbohydrates identified in BCs by CE included sucrose, maltose, glucose and fructose. The CE method is simple, requiring no sample preparation or derivatization, and carbohydrates are detected by direct UV detection. CE was shown to be a more robust and accurate method for measuring carbohydrates than the Fehling method, the DNS (3,5-dinitrosalicylic acid) assay and HPLC (high-performance liquid chromatography). Copyright © 2015. Published by Elsevier Ltd.

  12. Forensic Uncertainty Quantification of Explosive Dispersal of Particles

    Science.gov (United States)

    Hughes, Kyle; Park, Chanyoung; Haftka, Raphael; Kim, Nam-Ho

    2017-06-01

    In addition to the numerical challenges of simulating the explosive dispersal of particles, validation of the simulation is often plagued by poor knowledge of the experimental conditions. The level of experimental detail required for validation is beyond what is usually included in the literature. This presentation proposes the use of forensic uncertainty quantification (UQ) to investigate validation-quality experiments and discover possible sources of uncertainty that may have been missed in the initial design of experiments or under-reported. The authors' experience has been that making an analogy to crime scene investigation when looking at validation experiments can yield valuable insights. One examines all the data and documentation provided by the validation experimentalists, corroborates evidence, and quantifies large sources of uncertainty a posteriori with empirical measurements. In addition, it is proposed that forensic UQ may benefit from an independent investigator to help remove possible implicit biases and increase the likelihood of discovering unrecognized uncertainty. Forensic UQ concepts will be discussed and then applied to a set of validation experiments performed at Eglin Air Force Base. This work was supported in part by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program.

  13. Microvascular quantification based on contour-scanning photoacoustic microscopy

    Science.gov (United States)

    Yeh, Chenghung; Soetikno, Brian; Hu, Song; Maslov, Konstantin I.; Wang, Lihong V.

    2014-09-01

    Accurate quantification of microvasculature remains of interest in fundamental pathophysiological studies and clinical trials. Current photoacoustic microscopy can noninvasively quantify properties of the microvasculature, including vessel density and diameter, with high spatial resolution. However, the depth range of focus (i.e., focal zone) of optical-resolution photoacoustic microscopy (OR-PAM) is often insufficient to encompass the depth variations of features of interest, such as blood vessels, due to uneven tissue surfaces. Thus, time-consuming image acquisitions at multiple focal planes are required to maintain the region of interest in the focal zone. We have developed continuous three-dimensional motorized contour-scanning OR-PAM, which enables real-time adjustment of the focal plane to track the vessels' profile. We have experimentally demonstrated that contour scanning improves the signal-to-noise ratio of conventional OR-PAM by as much as 41% and shortens the image acquisition time by a factor of 3.2. Moreover, contour-scanning OR-PAM more accurately quantifies vessel density and diameter, and has been applied to studying tumors with uneven surfaces.

  14. Visualization, quantification, and automation of gradient defined features

    Science.gov (United States)

    Fedenczuk, Tom

    New automation, visualization, and quantification methodologies are presented and applied to the rheology and morphology of serpentinite mud volcanoes in the Mariana Forearc. The methods can be used in submarine and subaerial settings to characterize surface features of mud or lava flows, mass-wasting deposits, slump scars, and fault scarps. Three sequential mudflows are characterized for Big Blue Seamount, for which we define surface morphology, distal edges of flows, corrected thickness, underlying slopes, flow directions, and volumes. Results indicate a significant increase (from the upper to the lowermost flow) in distal edge thickness (13 m, 30 m, 62 m), volume (6.4 x 10^7 m^3, 4.0 x 10^8 m^3, 6.7 x 10^9 m^3), and mean yield strength (36 kPa, 80 kPa, 170 kPa). We also calculate the volumes of five mud volcanoes using bathymetric and multichannel seismic data (where available). The application of these volume calculations to estimates of the amount of source protolith (peridotite) required to form the seamounts indicates that the serpentinite must derive from a continually renewed conduit.
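
    For flows like these, yield strength is often back-calculated from the distal-edge thickness with the relation tau_y ~ delta_rho * g * h * sin(theta), where h is the edge thickness, theta the underlying slope, and delta_rho the bulk-density contrast of the submerged flow. Whether this is exactly the rheological model used in the record is an assumption; the density contrast and slope in the sketch below are illustrative.

```python
# Hedged sketch: back-of-the-envelope yield strength from distal-edge thickness,
# tau_y ~ delta_rho * g * h * sin(slope). The density contrast and slope are
# illustrative assumptions, not values reported in the record.
import math

def yield_strength_kpa(thickness_m, slope_deg, delta_rho=800.0, g=9.81):
    """Approximate yield strength (kPa) of a submarine flow of thickness h on a slope."""
    return delta_rho * g * thickness_m * math.sin(math.radians(slope_deg)) / 1e3

for h in (13.0, 30.0, 62.0):   # distal-edge thicknesses quoted in the record
    print(f"h = {h:5.1f} m  ->  tau_y ~ {yield_strength_kpa(h, slope_deg=20.0):.0f} kPa")
```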

  15. Interactive image quantification tools in nuclear material forensics

    Energy Technology Data Exchange (ETDEWEB)

    Porter, Reid B [Los Alamos National Laboratory; Ruggiero, Christy [Los Alamos National Laboratory; Hush, Don [Los Alamos National Laboratory; Harvey, Neal [Los Alamos National Laboratory; Kelly, Pat [Los Alamos National Laboratory; Scoggins, Wayne [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory

    2011-01-03

    Morphological and microstructural features visible in microscopy images of nuclear materials can give information about the processing history of a nuclear material. Extraction of these attributes currently requires a subject matter expert in both microscopy and nuclear material production processes, and is a time-consuming, at least partially manual task, often involving multiple software applications. One of the primary goals of computer vision is to find ways to extract and encode domain knowledge associated with imagery so that parts of this process can be automated. In this paper we describe a user-in-the-loop approach to the problem which attempts both to improve the efficiency of domain experts during image quantification and to capture their domain knowledge over time. This is accomplished through a sophisticated user-monitoring system that accumulates user-computer interactions as users exploit their imagery. We provide a detailed discussion of the interactive feature extraction and segmentation tools we have developed and describe our initial results in exploiting the recorded user-computer interactions to improve user productivity over time.

  16. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimise the management of the Technical Network (TN), to facilitate understanding of the purpose of devices connected to the TN and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive e-mails from IT/CS asking them to add the corresponding information in the network database at "network.cern.ch". Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  17. Identification of leader and self-organizing communities in complex networks

    OpenAIRE

    Jingcheng Fu; Weixiong Zhang; Jianliang Wu

    2017-01-01

    Community or module structure is a natural property of complex networks. Leader communities and self-organizing communities have been introduced recently to characterize networks and understand how communities arise in complex networks. However, identification of leader and self-organizing communities is technically challenging since no adequate quantification has been developed to properly separate the two types of communities. We introduced a new measure, called ratio of node degree varianc...
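
    The record's measure is cut off ("ratio of node degree varianc..."), so only its clearly named ingredient is illustrated here: the variance of node degrees inside a candidate community, compared with the network-wide value. How the published measure actually normalises this variance is not recoverable from the record and is treated as an assumption below.

```python
# Hedged sketch: degree variance inside a community vs. the whole network, the
# ingredient named in the (truncated) record. The normalisation used by the published
# "ratio of node degree variance..." measure is an assumption, not taken from the paper.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                                 # illustrative benchmark network
community = [n for n, data in G.nodes(data=True) if data["club"] == "Mr. Hi"]

deg_community = np.array([G.degree(n) for n in community], dtype=float)
deg_network = np.array([d for _, d in G.degree()], dtype=float)

print("degree variance inside community:", deg_community.var())
print("degree variance, whole network:  ", deg_network.var())
print("illustrative ratio:              ", deg_community.var() / deg_network.var())
```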

  18. Statistical Uncertainty Quantification of Physical Models during Reflood of LBLOCA

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Deog Yeon; Seul, Kwang Won; Woo, Sweng Woong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2015-05-15

    The use of best-estimate (BE) computer codes in safety analysis for loss-of-coolant accidents (LOCA) is the major trend in many countries to reduce significant conservatism. A key feature of this BE evaluation is that the licensee must quantify the uncertainty of the calculations, so it is very important how the uncertainty distribution is determined before conducting the uncertainty evaluation. Uncertainty includes that of physical models and correlations, plant operational parameters, and so forth. The quantification process is often performed mainly by subjective expert judgment or obtained from reference documents of the computer code. In this respect, more mathematical methods are needed to determine the uncertainty ranges reasonably. The first uncertainty quantification is performed with various increments for two influential uncertainty parameters to obtain the calculated responses and their derivatives. A different data set with two influential uncertainty parameters for the FEBA tests is chosen by applying stricter criteria for selecting responses and their derivatives, which may be considered as the user's effect in the CIRCÉ applications. Finally, three influential uncertainty parameters are considered to study the effect of the number of uncertainty parameters, given the limitations of the CIRCÉ method. With the determined uncertainty ranges, uncertainty evaluations for the FEBA tests are performed to check whether the experimental responses, such as the cladding temperature or pressure drop, fall inside the limits of the calculated uncertainty bounds. A confirmation step will be performed to evaluate the quality of the information in the case of the different reflooding PERICLES experiments. The uncertainty ranges of physical models in the MARS-KS thermal-hydraulic code during reflooding were quantified by the CIRCÉ method using the FEBA experiment tests, instead of expert judgment. Also, through the uncertainty evaluation for the FEBA and PERICLES tests, it was confirmed

  19. 3C-digital PCR for quantification of chromatin interactions.

    Science.gov (United States)

    Du, Meijun; Wang, Liang

    2016-12-06

    Chromosome conformation capture (3C) is a powerful and widely used technique for detecting physical interactions between chromatin regions in vivo. The principle of 3C is to convert physical chromatin interactions into specific DNA ligation products, which are then detected by quantitative polymerase chain reaction (qPCR). However, 3C-qPCR assays are often complicated by the necessity of normalization controls to correct for amplification biases. In addition, qPCR is often limited to a certain cycle number, making it difficult to detect fragment ligations with low frequency. Recently, digital PCR (dPCR) technology has become available, which allows highly sensitive nucleic acid quantification. The main advantage of dPCR is its high precision in absolute nucleic acid quantification without the requirement of normalization controls. To demonstrate the utility of dPCR in quantifying chromatin interactions, we examined two prostate cancer risk loci at 8q24 and 2p11.2 for their interaction target genes MYC and CAPG in the LNCaP cell line. We designed anchor and testing primers at known regulatory element fragments and target gene regions, respectively. dPCR results showed that the interaction frequency between the regulatory element and the MYC gene promoter was 0.7 (95% CI 0.40-1.10) copies per 1000 genome copies while other regions showed relatively low ligation frequencies. The dPCR results also showed that the ligation frequencies between the regulatory element and two EcoRI fragments containing the CAPG gene promoter were 1.9 copies (95% CI 1.41-2.47) and 1.3 copies per 1000 genome copies (95% CI 0.76-1.92), respectively, while the interaction signals were reduced on either side of the promoter region of the CAPG gene. Additionally, we observed comparable results from 3C-dPCR and 3C-qPCR at 2p11.2 in another cell line (DU145). Compared to traditional 3C-qPCR, our results show that 3C-dPCR is much simpler and more sensitive in detecting weak chromatin interactions. It may eliminate
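
    The "copies per 1000 genome copies" figures follow from the standard Poisson treatment of digital-PCR partition counts. A minimal sketch is below; the partition counts are invented for illustration, and the exact pipeline of the paper (including its confidence-interval calculation) is not reproduced.

```python
# Hedged sketch of the generic digital-PCR calculation behind "copies per 1000 genome
# copies": Poisson correction of the positive-partition fraction for the ligation
# target and for a reference (genome) assay. Partition counts are illustrative only.
import math

def copies_per_partition(positive, total):
    """Mean copies per partition, lambda = -ln(fraction of negative partitions)."""
    return -math.log(1.0 - positive / total)

target_lam = copies_per_partition(positive=14, total=15000)    # 3C ligation product
genome_lam = copies_per_partition(positive=9500, total=15000)  # reference (genome) assay

interaction_per_1000 = 1000.0 * target_lam / genome_lam
print(f"{interaction_per_1000:.2f} ligation copies per 1000 genome copies")
```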

  20. Network science

    CERN Document Server

    Barabasi, Albert-Laszlo

    2016-01-01

    Networks are everywhere, from the Internet, to social networks, and the genetic networks that determine our biological existence. Illustrated throughout in full colour, this pioneering textbook, spanning a wide range of topics from physics to computer science, engineering, economics and the social sciences, introduces network science to an interdisciplinary audience. From the origins of the six degrees of separation to explaining why networks are robust to random failures, the author explores how viruses like Ebola and H1N1 spread, and why it is that our friends have more friends than we do. Using numerous real-world examples, this innovatively designed text includes clear delineation between undergraduate and graduate level material. The mathematical formulas and derivations are included within Advanced Topics sections, enabling use at a range of levels. Extensive online resources, including films and software for network analysis, make this a multifaceted companion for anyone with an interest in network sci...

  1. Exploiting network redundancy for low-cost neural network realizations.

    NARCIS (Netherlands)

    Keegstra, H; Jansen, WJ; Nijhuis, JAG; Spaanenburg, L; Stevens, H; Udding, JT

    1996-01-01

    A method is presented to optimize a trained neural network for physical realization styles. Target architectures are embedded microcontrollers or standard cell based ASIC designs. The approach exploits the redundancy in the network, required for successful training, to replace the synaptic weighting

  2. Decentralized Network-level Synchronization in Mobile Ad Hoc Networks

    NARCIS (Netherlands)

    Voulgaris, Spyros; Dobson, Matthew; van Steen, Martinus Richardus

    Energy is the scarcest resource in ad hoc wireless networks, particularly in wireless sensor networks requiring a long lifetime. Intermittently switching the radio on and off is widely adopted as the most effective way to keep energy consumption low. This, however, prevents the very goal of

  3. Vulnerability of network of networks

    Science.gov (United States)

    Havlin, S.; Kenett, D. Y.; Bashan, A.; Gao, J.; Stanley, H. E.

    2014-10-01

    Our dependence on networks - be they infrastructure, economic, social or others - leaves us prone to crises caused by the vulnerabilities of these networks. There is a great need to develop new methods to protect infrastructure networks and prevent cascades of failures (especially in cases of coupled networks). Terrorist attacks on transportation networks have traumatized modern societies. With a single blast, it has become possible to paralyze airline traffic, electric power supply, ground transportation or Internet communication. How, and at what cost, can one restructure a network so that it becomes more robust against malicious attacks? The gradual increase in attacks on the networks society depends on - Internet, mobile phone, transportation, air travel, banking, etc. - emphasizes the need to develop new strategies to protect and defend these crucial communication and infrastructure networks. One example is the threat of liquid explosives a few years ago, which completely shut down air travel for days and created extreme changes in regulations. Such threats and dangers warrant new tools and strategies to defend critical infrastructure. In this paper we review recent advances in the theoretical understanding of the vulnerabilities of interdependent networks with and without spatial embedding, attack strategies and their effect on such networks of networks, as well as recently developed strategies to optimize and repair failures caused by such attacks.

  4. Greenhouse Gas Emissions from Waste Management-Assessment of Quantification Methods.

    Science.gov (United States)

    Mohareb, Eugene A; MacLean, Heather L; Kennedy, Christopher A

    2011-05-01

    Of the many sources of urban greenhouse gas (GHG) emissions, solid waste is the only one for which management decisions are undertaken primarily by municipal governments themselves and is hence often the largest component of cities' corporate inventories. It is essential that decision-makers select an appropriate quantification methodology and have an appreciation of methodological strengths and shortcomings. This work compares four different waste emissions quantification methods, including Intergovernmental Panel on Climate Change (IPCC) 1996 guidelines, IPCC 2006 guidelines, U.S. Environmental Protection Agency (EPA) Waste Reduction Model (WARM), and the Federation of Canadian Municipalities- Partners for Climate Protection (FCM-PCP) quantification tool. Waste disposal data for the greater Toronto area (GTA) in 2005 are used for all methodologies; treatment options (including landfill, incineration, compost, and anaerobic digestion) are examined where available in methodologies. Landfill was shown to be the greatest source of GHG emissions, contributing more than three-quarters of total emissions associated with waste management. Results from the different landfill gas (LFG) quantification approaches ranged from an emissions source of 557 kt carbon dioxide equivalents (CO2e) (FCM-PCP) to a carbon sink of -53 kt CO2e (EPA WARM). Similar values were obtained between IPCC approaches. The IPCC 2006 method was found to be more appropriate for inventorying applications because it uses a waste-in-place (WIP) approach, rather than a methane commitment (MC) approach, despite perceived onerous data requirements for WIP. MC approaches were found to be useful from a planning standpoint; however, uncertainty associated with their projections of future parameter values limits their applicability for GHG inventorying. MC and WIP methods provided similar results in this case study; however, this is case specific because of similarity in assumptions of present and future
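
    The contrast drawn between methane-commitment (MC) and waste-in-place (WIP) accounting can be made concrete with a stripped-down first-order decay calculation in the spirit of the IPCC 2006 waste model. The parameter values below are illustrative defaults, not the Toronto-area inputs used in the study.

```python
# Hedged sketch contrasting the two accounting approaches named in the record for a
# single year's landfilled waste: a methane-commitment (MC) estimate assigns all future
# CH4 to the disposal year, while a waste-in-place (WIP, first-order decay) estimate
# counts only the CH4 generated in the inventory year. Parameters are illustrative.
import math

DOC, DOC_F, MCF, F = 0.18, 0.5, 1.0, 0.5   # degradable carbon fractions, CH4 fraction
k = 0.06                                   # first-order decay rate (1/yr)
waste_t = 1.0e6                            # tonnes landfilled in the deposit year

ddocm = waste_t * DOC * DOC_F * MCF        # decomposable carbon deposited
ch4_potential = ddocm * F * 16.0 / 12.0    # total CH4 ultimately generated (MC view)

years_since_deposit = 5
ch4_this_year = ch4_potential * (math.exp(-k * years_since_deposit)
                                 - math.exp(-k * (years_since_deposit + 1)))
print(f"MC estimate : {ch4_potential:,.0f} t CH4 committed to the deposit year")
print(f"WIP estimate: {ch4_this_year:,.0f} t CH4 generated in the inventory year")
```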

  6. Networked Microgrids Scoping Study

    Energy Technology Data Exchange (ETDEWEB)

    Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dobriansky, Larisa [General MicroGrids, San Diego, CA (United States); Glover, Steve [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Liu, Chen-Ching [Washington State Univ., Pullman, WA (United States); Looney, Patrick [Brookhaven National Lab. (BNL), Upton, NY (United States); Mashayekh, Salman [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Pratt, Annabelle [National Renewable Energy Lab. (NREL), Golden, CO (United States); Schneider, Kevin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Stadler, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Starke, Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Jianhui [Argonne National Lab. (ANL), Argonne, IL (United States); Yue, Meng [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2016-12-05

    Much like individual microgrids, the range of opportunities and potential architectures of networked microgrids is very diverse. The goals of this scoping study are to provide an early assessment of research and development needs by examining the benefits of, risks created by, and risks to networked microgrids. At this time there are very few, if any, examples of deployed microgrid networks. In addition, there are very few tools to simulate or otherwise analyze the behavior of networked microgrids. In this setting, it is very difficult to evaluate networked microgrids systematically or quantitatively. At this early stage, this study is relying on inputs, estimations, and literature reviews by subject matter experts who are engaged in individual microgrid research and development projects, i.e., the authors of this study. The initial step of the study gathered input about the potential opportunities provided by networked microgrids from these subject matter experts. These opportunities were divided between the subject matter experts for further review. Part 2 of this study is comprised of these reviews. Part 1 of this study is a summary of the benefits and risks identified in the reviews in Part 2 and a synthesis of the research needs required to enable networked microgrids.

  7. A quantum access network.

    Science.gov (United States)

    Fröhlich, Bernd; Dynes, James F; Lucamarini, Marco; Sharpe, Andrew W; Yuan, Zhiliang; Shields, Andrew J

    2013-09-05

    The theoretically proven security of quantum key distribution (QKD) could revolutionize the way in which information exchange is protected in the future. Several field tests of QKD have proven it to be a reliable technology for cryptographic key exchange and have demonstrated nodal networks of point-to-point links. However, until now no convincing answer has been given to the question of how to extend the scope of QKD beyond niche applications in dedicated high security networks. Here we introduce and experimentally demonstrate the concept of a 'quantum access network': based on simple and cost-effective telecommunication technologies, the scheme can greatly expand the number of users in quantum networks and therefore vastly broaden their appeal. We show that a high-speed single-photon detector positioned at a network node can be shared between up to 64 users for exchanging secret keys with the node, thereby significantly reducing the hardware requirements for each user added to the network. This point-to-multipoint architecture removes one of the main obstacles restricting the widespread application of QKD. It presents a viable method for realizing multi-user QKD networks with efficient use of resources, and brings QKD closer to becoming a widespread technology.

  8. Maintenance of family networks

    DEFF Research Database (Denmark)

    marsico, giuseppina; Chaudhary, N; Valsiner, Jaan

    2015-01-01

    Families are social units that expand in time (across generations) and space (as geographically distributed sub-structures of wider kinship networks). Understanding of intergenerational family relations thus requires conceptualization of communication processes that take place within a small collective of persons linked with one another by a flexible social network. Within such networks, Peripheral Communication Patterns set the stage for direct everyday life activities within the family context. Peripheral Communication Patterns are conditions where one family network member (A) communicates ... relatives, ancestors' spirits, etc.) in efforts that use Peripheral Communication Patterns creates a highly redundant social context for human development over the life course, which is the basis for family members' resilience during critical life events. Examples from the social contexts of Greenland, Italy ...

  9. Friendly network robotics

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This paper summarizes the research results on friendly network robotics in fiscal 1996. The research assumes an android robot as the ultimate robot and a future robot system utilizing computer network technology. A robot intended for human daily work activities in factories or under extreme environments is required to operate in ordinary human work environments, so a humanoid robot with size, shape and functions similar to a human being is desirable. Such a robot, having a head with two eyes, two ears and a mouth, can hold a conversation with humans, can walk on two legs under autonomous adaptive control, and has behavioral intelligence. Remote operation of such a robot is also possible through a high-speed computer network. As a key technology for using this robot in coexistence with humans, the establishment of human-coexistent robotics was studied. As network-based robotics, the use of robots connected to computer networks was also studied. In addition, the R-cube (R^3) plan (realtime remote control robot technology) was proposed. 82 refs., 86 figs., 12 tabs.

  10. Food Targeting: A Real-Time PCR Assay Targeting 16S rDNA for Direct Quantification of Alicyclobacillus spp. Spores after Aptamer-Based Enrichment.

    Science.gov (United States)

    Hünniger, Tim; Felbinger, Christine; Wessels, Hauke; Mast, Sophia; Hoffmann, Antonia; Schefer, Anna; Märtlbauer, Erwin; Paschke-Kratzin, Angelika; Fischer, Markus

    2015-05-06

    Spore-forming Alicyclobacillus spp. are able to form metabolites that, even in small amounts, induce an antiseptic or medicinal off-flavor in fruit juices. Microbial contamination can occur via endospores that survive the pasteurization process. The current detection method for Alicyclobacillus spp. can take up to 1 week because of microbiological enrichment. In a previous study, DNA aptamers were selected and characterized for an aptamer-driven rapid enrichment of Alicyclobacillus spp. spores from orange juice by magnetic separation. In the present work, a direct quantification assay for Alicyclobacillus spp. spores was developed to complete the two-step approach of enrichment and detection. After mechanical treatment of the spores, the isolated DNA was quantified in a real-time PCR assay targeting 16S rDNA. The assay was evaluated against the performance requirements of the European Network of Genetically Modified Organisms Laboratories (ENGL). Hence, the presented method is applicable for direct spore detection from orange juice in connection with an enrichment step.
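
    Read-out of such a real-time PCR assay typically relies on a standard curve relating the quantification cycle (Cq) to target copy number. The sketch below shows this generic calculation; the slope, intercept and Cq values are illustrative assumptions, not figures from the paper.

```python
# Hedged sketch of generic absolute quantification from a real-time PCR standard curve,
# the usual way an assay like the 16S rDNA assay in the record is read out.
# Slope, intercept and Cq values are illustrative, not taken from the paper.
import numpy as np

# Standard curve fitted to serial dilutions: Cq = slope * log10(copies) + intercept
slope, intercept = -3.32, 38.0
efficiency = 10 ** (-1.0 / slope) - 1          # ~1.0 corresponds to 100 % efficiency

def copies_from_cq(cq):
    return 10 ** ((cq - intercept) / slope)

sample_cq = np.array([24.8, 25.1, 24.9])       # technical replicates of a juice extract
print(f"amplification efficiency: {efficiency:.2f}")
print(f"estimated 16S rDNA copies: {copies_from_cq(sample_cq).mean():.2e}")
```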

  11. Fixed Access Network Sharing

    Science.gov (United States)

    Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio

    2015-12-01

    Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4k and 8k lines of video resolution. However, building fiber access networks is a costly endeavor. It requires significant capital in order to achieve any significant geographic coverage. Hence many companies are forming partnerships and joint ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build to cover a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bilateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.

  12. Protected Core Networking Concepts & Challenges

    Science.gov (United States)

    2010-11-01

    Ethernet frames over the un-trusted bearer network. This transformation from PDH/SDH services to Ethernet Frame based Services (VPWS, VPLS) requires ... transformation of legacy bearer network services (Vxx, PDH, SDH) to wide-area advanced Ethernet Services (VPWS and VPLS).

  13. Securing underwater wireless communication networks

    OpenAIRE

    Domingo Aladrén, Mari Carmen

    2011-01-01

    Underwater wireless communication networks are particularly vulnerable to malicious attacks due to the high bit error rates, large and variable propagation delays, and low bandwidth of acoustic channels. The unique characteristics of the underwater acoustic communication channel, and the differences between underwater sensor networks and their ground-based counterparts require the development of efficient and reliable security mechanisms. In this article, a compl...

  14. Synthesis and Review: Advancing agricultural greenhouse gas quantification

    Science.gov (United States)

    Olander, Lydia P.; Wollenberg, Eva; Tubiello, Francesco N.; Herold, Martin

    2014-07-01

    Reducing emissions of agricultural greenhouse gases (GHGs), such as methane and nitrous oxide, and sequestering carbon in the soil or in living biomass can help reduce the impact of agriculture on climate change while improving productivity and reducing resource use. There is an increasing demand for improved, low cost quantification of GHGs in agriculture, whether for national reporting to the United Nations Framework Convention on Climate Change (UNFCCC), underpinning and stimulating improved practices, establishing crediting mechanisms, or supporting green products. This ERL focus issue highlights GHG quantification to call attention to our existing knowledge and opportunities for further progress. In this article we synthesize the findings of 21 papers on the current state of global capability for agricultural GHG quantification and visions for its improvement. We conclude that strategic investment in quantification can lead to significant global improvement in agricultural GHG estimation in the near term.

  15. Quantification of Uncertainties in Integrated Spacecraft System Models Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort is to investigate a novel uncertainty quantification (UQ) approach based on non-intrusive polynomial chaos (NIPC) for computationally efficient...

  16. Quantification and Localization of Mast Cells in Periapical Lesions

    African Journals Online (AJOL)

    of the potential range of mast cell function and interactions ... [4] A great variety of bacterial antigens may stimulate host immune responses ... the thickness of the cyst capsule indicated that they were more prevalent just ...

  17. Extraction, quantification and degree of polymerization of yacon ...

    African Journals Online (AJOL)

    Extraction, quantification and degree of polymerization of yacon (Smallanthus sonchifolia) fructans. EWN da Fonseca Contado, E de Rezende Queiroz, DA Rocha, RM Fraguas, AA Simao, LNS Botelho, A de Fatima Abreu, MABCMP de Abreu ...

  18. Mandibular asymmetry: a three-dimensional quantification of bilateral condyles

    National Research Council Canada - National Science Library

    Lin, Han; Zhu, Ping; Lin, Yi; Wan, Shuangquan; Shu, Xin; Xu, Yue; Zheng, Youhua

    2013-01-01

    .... In this study, a three-dimensional (3-D) quantification of bilateral asymmetrical condyles was firstly conducted to identify the specific role of 3-D condylar configuration for mandibular asymmetry...

  19. Quantification of Uncertainties in Integrated Spacecraft System Models Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective for the Phase II effort will be to develop a comprehensive, efficient, and flexible uncertainty quantification (UQ) framework implemented within a...

  20. Uncertainty Quantification for Production Navier-Stokes Solvers Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The uncertainty quantification methods developed under this program are designed for use with current state-of-the-art flow solvers developed by and in use at NASA....

  1. Quantification of propionic acid from Scutellaria baicalensis roots

    Directory of Open Access Journals (Sweden)

    Eunjung Son

    2017-03-01

    Conclusion: This study is the first to report that propionic acid exists in S. baicalensis roots and also provides a useful ultra performance liquid chromatography analysis method for its quantification.

  2. (1)H-MRS processing parameters affect metabolite quantification

    DEFF Research Database (Denmark)

    Bhogal, Alex A; Schür, Remmelt R; Houtepen, Lotte C

    2017-01-01

    It is currently unknown to what extent variations in the analysis pipeline used to quantify (1)H-MRS data affect outcomes. The purpose of this study was to evaluate whether the quantification of identical (1)H-MRS scans across independent and experienced research groups would yield comparable results. We investigated the influence of model parameters and spectral quantification software on fitted metabolite concentration values. Sixty spectra in 30 individuals (repeated measures) were acquired using a 7-T MRI scanner. Data were processed by four independent research groups with the freedom to choose their own ... + NAAG/Cr + PCr and Glu/Cr + PCr, respectively. Metabolite quantification using identical (1)H-MRS data was influenced by processing parameters, basis sets and software choice. Locally preferred processing choices affected metabolite quantification, even when using identical software. Our results ...

  3. A quantification model for the structure of clay materials.

    Science.gov (United States)

    Tang, Liansheng; Sang, Haitao; Chen, Haokun; Sun, Yinlei; Zhang, Longjian

    2016-07-04

    In this paper, the quantification for clay structure is explicitly explained, and the approach and goals of quantification are also discussed. The authors consider that the purpose of the quantification for clay structure is to determine some parameters that can be used to quantitatively characterize the impact of clay structure on the macro-mechanical behaviour. According to the system theory and the law of energy conservation, a quantification model for the structure characteristics of clay materials is established and three quantitative parameters (i.e., deformation structure potential, strength structure potential and comprehensive structure potential) are proposed. And the corresponding tests are conducted. The experimental results show that these quantitative parameters can accurately reflect the influence of clay structure on the deformation behaviour, strength behaviour and the relative magnitude of structural influence on the above two quantitative parameters, respectively. These quantitative parameters have explicit mechanical meanings, and can be used to characterize the structural influences of clay on its mechanical behaviour.

  4. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Hansen, Jonas; Roetter, Daniel Enrique Lucani; Krigslund, Jeppe

    2015-01-01

    Software defined networking has garnered large attention due to its potential to virtualize services in the Internet, introducing flexibility in the buffering, scheduling, processing, and routing of data in network routers. SDN breaks the deadlock that has kept Internet network protocols stagnant for decades, while applications and physical links have evolved. This article advocates for the use of SDN to bring about 5G network services by incorporating network coding (NC) functionalities. The latter constitutes a major leap forward compared to the state-of-the-art store and forward Internet paradigm ... The inherent flexibility of both SDN and NC provides fertile ground to envision more efficient, robust, and secure networking designs, which may also incorporate content caching and storage, all of which are key challenges of the upcoming 5G networks. This article not only proposes the fundamentals ...

  5. Network Coded Software Defined Networking

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Hansen, Jonas; Roetter, Daniel Enrique Lucani

    2015-01-01

    Software Defined Networking (SDN) and Network Coding (NC) are two key concepts in networking that have garnered large attention in recent years. On the one hand, SDN's potential to virtualize services in the Internet allows a large flexibility not only for routing data, but also to manage ... This paper advocates for the use of SDN to bring about future Internet and 5G network services by incorporating network coding (NC) functionalities. The inherent flexibility of both SDN and NC provides a fertile ground to envision more efficient, robust, and secure networking designs, that may also ... incorporate content caching and storage, all of which are key challenges of the future Internet and the upcoming 5G networks. This paper proposes some of the keys behind this intersection and supports it with use cases as well as an implementation that integrated the Kodo library (NC) into OpenFlow (SDN

  6. Taiwan Automated Telescope Network

    Directory of Open Access Journals (Sweden)

    Dean-Yi Chou

    2010-01-01

    can be operated either interactively or fully automatically. In the interactive mode, it can be controlled through the Internet. In the fully automatic mode, the telescope operates with preset parameters without any human care, including taking dark frames and flat frames. The network can also be used for studies that require continuous observations for selected objects.

  7. Quantification of aortic regurgitation by magnetic resonance velocity mapping

    DEFF Research Database (Denmark)

    Søndergaard, Lise; Lindvig, K; Hildebrandt, P

    1993-01-01

    The use of magnetic resonance (MR) velocity mapping in the quantification of aortic valvular blood flow was examined in 10 patients with angiographically verified aortic regurgitation. MR velocity mapping succeeded in identifying and quantifying the regurgitation in all patients, and the regurgit...

  8. Quantification of propionic acid from Scutellaria baicalensis roots

    OpenAIRE

    Eunjung Son; Ho Kyoung Kim; Hyun Sik Kim; Mee Ree Kim; Dong-Seon Kim

    2017-01-01

    Background: Propionic acid is a widely used preservative and has been mainly formed by artificial synthesis or fermentation. In the case of natural products, the presence of propionic acid is viewed as a sign that an additive has been introduced for antimicrobial effects. Methods: In this work, the propionic acid that occurs in Scutellaria baicalensis roots was studied. A quantification method was developed, validated, and showed good linearity, low limit of detection, and limit of quantif...

  9. Unified broadcast in sensor networks

    DEFF Research Database (Denmark)

    Hansen, Morten Tranberg; Jurdak, Raja; Kusy, Branislav

    2011-01-01

    Complex sensor network applications include multiple services such as collection, dissemination, time synchronization, and failure detection protocols. Many of these protocols require local state maintenance through periodic broadcasts which leads to high control overhead. Recent attempts...

  10. Network virtualization as enabler for cloud networking

    OpenAIRE

    Turull, Daniel

    2016-01-01

    The Internet has exponentially grown and now it is part of our everyday life. Internet services and applications rely on back-end servers that are deployed on local servers and data centers. With the growing use of data centers and cloud computing, the locations of these servers have been externalized and centralized, taking advantage of economies of scale. However, some applications need to define complex network topologies and require more than simple connectivity to the remote sites. Ther...

  11. Cross recurrence quantification for cover song identification

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Joan; Serra, Xavier; Andrzejak, Ralph G [Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 08018 Barcelona (Spain)], E-mail: joan.serraj@upf.edu

    2009-09-15

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Roessler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.
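
    The object underlying the method is the cross recurrence plot between the state-space embeddings of two descriptor time series. The sketch below builds a basic CRP from two synthetic series; the embedding parameters and threshold are illustrative, and the published measure that traces curved, disrupted paths through the plot is not reproduced.

```python
# Hedged sketch of a cross recurrence plot (CRP): R[i, j] = 1 when the time-delay
# embedded states of the two series are closer than a threshold. The series below are
# synthetic stand-ins for the musical descriptor time series used in the record.
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D series into state-space vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def cross_recurrence(x, y, dim=3, tau=2, eps=0.5):
    X, Y = embed(x, dim, tau), embed(y, dim, tau)
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return (dists < eps).astype(int)

rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 1000)
song_a = np.sin(t) + 0.1 * rng.standard_normal(t.size)          # "original" rendition
song_b = np.sin(1.05 * t) + 0.1 * rng.standard_normal(t.size)   # slightly shifted "cover"

crp = cross_recurrence(song_a, song_b)
print(crp.shape, crp.mean())   # size and recurrence rate of the plot
```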

  12. Quantification of variability in trichome patterns

    Directory of Open Access Journals (Sweden)

    Bettina eGreese

    2014-11-01

    While pattern formation is studied in various areas of biology, little is known about the intrinsic noise leading to variations between individual realizations of the pattern. One prominent example of de novo pattern formation in plants is the patterning of trichomes on Arabidopsis leaves, which involves genetic regulation and cell-to-cell communication. These processes are potentially variable due to, e.g., the abundance of cell components or environmental conditions. To deepen the understanding of the regulatory processes underlying pattern formation, it is crucial to quantitatively analyze the variability in naturally occurring patterns. Here, we review recent approaches towards the characterization of noise in trichome initiation. We present methods for the quantification of spatial patterns, which are the basis for data-driven mathematical modeling and enable the analysis of noise from different sources. Besides the insight gained on trichome formation, the examination of observed trichome patterns also shows that highly regulated biological processes can be substantially affected by variability.

  13. Characterization and quantification of biochar alkalinity.

    Science.gov (United States)

    Fidel, Rivka B; Laird, David A; Thompson, Michael L; Lawrinenko, Michael

    2017-01-01

    Lack of knowledge regarding the nature of biochar alkalis has hindered understanding of pH-sensitive biochar-soil interactions. Here we investigate the nature of biochar alkalinity and present a cohesive suite of methods for its quantification. Biochars produced from cellulose, corn stover and wood feedstocks had significant low-pKa organic structural (0.03-0.34 meq g^-1), other organic (0-0.92 meq g^-1), carbonate (0.02-1.5 meq g^-1), and other inorganic (0-0.26 meq g^-1) alkalinities. All four categories of biochar alkalinity contributed to total biochar alkalinity and are therefore relevant to pH-sensitive soil processes. Total biochar alkalinity was strongly correlated with base cation concentration, but biochar alkalinity was not a simple function of elemental composition, soluble ash, fixed carbon, or volatile matter content. More research is needed to characterize soluble biochar alkalis other than carbonates and to establish predictive relationships among biochar production parameters and the composition of biochar alkalis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Uncertainty quantification in flood risk assessment

    Science.gov (United States)

    Blöschl, Günter; Hall, Julia; Kiss, Andrea; Parajka, Juraj; Perdigão, Rui A. P.; Rogger, Magdalena; Salinas, José Luis; Viglione, Alberto

    2017-04-01

    Uncertainty is inherent to flood risk assessments because of the complexity of the human-water system, which is characterised by nonlinearities and interdependencies, because of limited knowledge about system properties and because of cognitive biases in human perception and decision-making. On top of the uncertainty associated with the assessment of the existing risk to extreme events, additional uncertainty arises because of temporal changes in the system due to climate change, modifications of the environment, population growth and the associated increase in assets. Novel risk assessment concepts are needed that take into account all these sources of uncertainty. They should be based on the understanding of how flood extremes are generated and how they change over time. They should also account for the dynamics of risk perception of decision makers and population in the floodplains. In this talk we discuss these novel risk assessment concepts through examples from Flood Frequency Hydrology, Socio-Hydrology and Predictions Under Change. We believe that uncertainty quantification in flood risk assessment should lead to a robust approach of integrated flood risk management aiming at enhancing resilience rather than searching for optimal defense strategies.

  15. Quantification of biological aging in young adults.

    Science.gov (United States)

    Belsky, Daniel W; Caspi, Avshalom; Houts, Renate; Cohen, Harvey J; Corcoran, David L; Danese, Andrea; Harrington, HonaLee; Israel, Salomon; Levine, Morgan E; Schaefer, Jonathan D; Sugden, Karen; Williams, Ben; Yashin, Anatoli I; Poulton, Richie; Moffitt, Terrie E

    2015-07-28

    Antiaging therapies show promise in model organism research. Translation to humans is needed to address the challenges of an aging global population. Interventions to slow human aging will need to be applied to still-young individuals. However, most human aging research examines older adults, many with chronic disease. As a result, little is known about aging in young humans. We studied aging in 954 young humans, the Dunedin Study birth cohort, tracking multiple biomarkers across three time points spanning their third and fourth decades of life. We developed and validated two methods by which aging can be measured in young adults, one cross-sectional and one longitudinal. Our longitudinal measure allows quantification of the pace of coordinated physiological deterioration across multiple organ systems (e.g., pulmonary, periodontal, cardiovascular, renal, hepatic, and immune function). We applied these methods to assess biological aging in young humans who had not yet developed age-related diseases. Young individuals of the same chronological age varied in their "biological aging" (declining integrity of multiple organ systems). Already, before midlife, individuals who were aging more rapidly were less physically able, showed cognitive decline and brain aging, self-reported worse health, and looked older. Measured biological aging in young adults can be used to identify causes of aging and evaluate rejuvenation therapies.

  16. Legionella spp. isolation and quantification from greywater.

    Science.gov (United States)

    Rodríguez-Martínez, Sara; Blanky, Marina; Friedler, Eran; Halpern, Malka

    2015-01-01

    Legionella, an opportunistic human pathogen whose natural environment is water, is transmitted to humans through inhalation of contaminated aerosols. Legionella has been isolated from a high diversity of water types. Due to its importance as a pathogen, two ISO protocols have been developed for its monitoring. However, these two protocols are not suitable for analyzing Legionella in greywater (GW). GW is domestic wastewater excluding the inputs from toilets and kitchen. It can serve as an alternative water source, mainly for toilet flushing and garden irrigation, both producing aerosols that can pose a risk of Legionella infection. Hence, before reuse, GW has to be treated and its quality needs to be monitored. The difficulty of Legionella isolation from GW lies in the very high load of contaminant bacteria. Here we describe a modification of ISO protocol 11731:1998 that enables the isolation and quantification of Legionella from GW samples. The following modifications were made: • To enable isolation of Legionella from greywater, a pre-filtration step that removes coarse matter is recommended. • Legionella can be isolated after a combined acid-thermic treatment that eliminates the high load of contaminant bacteria in the sample.

  17. Cross recurrence quantification for cover song identification

    Science.gov (United States)

    Serrà, Joan; Serra, Xavier; Andrzejak, Ralph G.

    2009-09-01

    There is growing evidence that nonlinear time series analysis techniques can be used to successfully characterize, classify, or process signals derived from real-world dynamics even though these are not necessarily deterministic and stationary. In the present study, we proceed in this direction by addressing an important problem our modern society is facing, the automatic classification of digital information. In particular, we address the automatic identification of cover songs, i.e. alternative renditions of a previously recorded musical piece. For this purpose, we here propose a recurrence quantification analysis measure that allows the tracking of potentially curved and disrupted traces in cross recurrence plots (CRPs). We apply this measure to CRPs constructed from the state space representation of musical descriptor time series extracted from the raw audio signal. We show that our method identifies cover songs with a higher accuracy as compared to previously published techniques. Beyond the particular application proposed here, we discuss how our approach can be useful for the characterization of a variety of signals from different scientific disciplines. We study coupled Rössler dynamics with stochastically modulated mean frequencies as one concrete example to illustrate this point.

  18. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost that quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right-hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling in massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance of 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
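
    The core idea described here, estimating the diagonal of an inverse covariance matrix without a factorization, can be sketched with Rademacher probe vectors and an iterative solver. The matrix below is a small synthetic stand-in, and the paper's mixed-precision iterative refinement and BLAS-3 blocking are not reproduced.

```python
# Hedged sketch: stochastic estimation of diag(A^{-1}) with Rademacher probes and an
# iterative solver instead of a matrix factorization. The SPD matrix is synthetic.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(1)
n = 200
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                      # synthetic SPD "covariance" matrix
A_op = LinearOperator((n, n), matvec=lambda v: A @ v)

num, den = np.zeros(n), np.zeros(n)
for _ in range(100):                             # number of probes controls accuracy
    v = rng.choice([-1.0, 1.0], size=n)          # Rademacher probe vector
    x, info = cg(A_op, v)                        # solve A x = v iteratively
    num += v * x
    den += v * v

diag_estimate = num / den
exact = np.diag(np.linalg.inv(A))                # reference, only feasible for small n
print("max abs error:", np.max(np.abs(diag_estimate - exact)))
```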

  19. On uncertainty quantification in hydrogeology and hydrogeophysics

    Science.gov (United States)

    Linde, Niklas; Ginsbourger, David; Irving, James; Nobile, Fabio; Doucet, Arnaud

    2017-12-01

    Recent advances in sensor technologies, field methodologies, numerical modeling, and inversion approaches have contributed to unprecedented imaging of hydrogeological properties and detailed predictions at multiple temporal and spatial scales. Nevertheless, imaging results and predictions will always remain imprecise, which calls for appropriate uncertainty quantification (UQ). In this paper, we outline selected methodological developments together with pioneering UQ applications in hydrogeology and hydrogeophysics. The applied mathematics and statistics literature is not easy to penetrate and this review aims at helping hydrogeologists and hydrogeophysicists to identify suitable approaches for UQ that can be applied and further developed to their specific needs. To bypass the tremendous computational costs associated with forward UQ based on full-physics simulations, we discuss proxy-modeling strategies and multi-resolution (Multi-level Monte Carlo) methods. We consider Bayesian inversion for non-linear and non-Gaussian state-space problems and discuss how Sequential Monte Carlo may become a practical alternative. We also describe strategies to account for forward modeling errors in Bayesian inversion. Finally, we consider hydrogeophysical inversion, where petrophysical uncertainty is often ignored leading to overconfident parameter estimation. The high parameter and data dimensions encountered in hydrogeological and geophysical problems make UQ a complicated and important challenge that has only been partially addressed to date.

  20. Cloud Radio Access Network architecture. Towards 5G mobile networks

    DEFF Research Database (Denmark)

    Checko, Aleksandra

    Cloud Radio Access Network (C-RAN) is a novel mobile network architecture which can address a number of challenges that mobile operators face while trying to support ever-growing end-users’ needs towards 5th generation of mobile networks (5G). The main idea behind C-RAN is to split the base......, and for the analyzed scenario it can assure synchronization on the nanosecond level, fulfilling mobile network requirements. Furthermore, mechanisms to lower delay and jitter have been identified, namely: source scheduling and preemption. An innovative source scheduling scheme which can minimize jitter has been...

  1. Telecommunication Networks

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Balachandran, Kartheepan; Hald, Sara Ligaard

    2014-01-01

    In this chapter, we look into the role of telecommunication networks and their capability of supporting critical infrastructure systems and applications. The focus is on smart grids as the key driving example, bearing in mind that other such systems do exist, e.g., water management, traffic control......, etc. First, the role of basic communication is examined with a focus on critical infrastructures. We look at heterogenic networks and standards for smart grids, to give some insight into what has been done to ensure inter-operability in this direction. We then go to the physical network, and look...... at the deployment of the physical layout of the communication network and the related costs. This is an important aspect as one option to use existing networks is to deploy dedicated networks. Following this, we look at some generic models that describe reliability for accessing dynamic information. This part...

  2. Networked Identities

    DEFF Research Database (Denmark)

    Ryberg, Thomas; Larsen, Malene Charlotte

    2008-01-01

    In this article we take up a critique of the concept of Communities of Practice (CoP) voiced by several authors, who suggest that networks may provide a better metaphor to understand social forms of organisation and learning. Through a discussion of the notion of networked learning and the critique...... of CoPs we shall argue that the metaphor or theory of networked learning is itself confronted with some central tensions and challenges that need to be addressed. We then explore these theoretical and analytic challenges to the network metaphor, through an analysis of a Danish social networking site. We...... argue that understanding meaning-making and ‘networked identities’ may be relevant analytic entry points in navigating the challenges....

  3. In-situ hybridization based quantification of hTR: a possible biomarker in malignant melanoma

    DEFF Research Database (Denmark)

    Vagner, Josephine; Steiniche, Torben; Stougaard, Magnus

    2015-01-01

    Aims Telomerase is reactivated in most cancers and there is accumulating evidence for it being a driver event in malignant melanoma (MM). Thus, our aim was to evaluate if in situ hybridisation (ISH)-based quantification of the telomerase RNA (hTR) could be used to distinguish MM from nevi...... thickness suggesting that hTR might be a valuable biomarker in MM. Furthermore, as ISH-based detection requires presence of both hTR and the reverse transcriptase (hTERT) it might be an indicator of active telomerase and thus have future relevance as a predictive biomarker for anti-telomerase treatment....

  4. qPCR as a powerful tool for microbial food spoilage quantification: Significance for food quality

    OpenAIRE

    Martínez Álvarez, Noelia; Martín, M. Cruz; Herrero, Ana; Fernández García, María; Álvarez González, Miguel Ángel; Ladero Losada, Víctor Manuel

    2011-01-01

    The use of real time quantitative PCR (qPCR) has recently been extended to food science. The literature has mainly focused on its use in ensuring food safety. However, it offers a number of advantages with respect to the quantification of non-pathogenic food spoilage microorganisms. Indeed, qPCR may have a promising future in improving the quality of food products. The present review examines the use of qPCR in this area, the basis of the technique, the requirements that must be met for optim...

  5. Power plant intake quantification of wheat straw composition for 2nd generation bioethanol optimization

    DEFF Research Database (Denmark)

    Lomborg, Carina J.; Thomsen, Mette Hedegaard; Jensen, Erik Steen

    2010-01-01

    Optimization of 2nd generation bioethanol production from wheat straw requires comprehensive knowledge of plant intake feedstock composition. Near Infrared Spectroscopy is evaluated as a potential method for instantaneous quantification of the salient fermentation wheat straw components: cellulose...... (glucan), hemicelluloses (xylan, arabinan), and lignin. Aiming at chemometric multivariate calibration, 44 pre-selected samples were subjected to spectroscopy and reference analysis. For glucan and xylan prediction accuracies (slope: 0.89, 0.94) and precisions (r2: 0.87) were obtained, corresponding...
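
    The record describes chemometric multivariate calibration of NIR spectra against reference constituents. Below is a hedged sketch of such a calibration using partial least squares regression with cross-validated slope and r2; the spectra and reference values are synthetic placeholders, and the choice of scikit-learn's PLSRegression with 5 components is an assumption, not the study's actual model.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.random((44, 700))                        # 44 synthetic spectra x 700 wavelengths
y = X[:, 100] * 30 + rng.normal(0, 0.5, 44)      # synthetic reference values (e.g. % glucan)

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()   # cross-validated predictions

slope = np.polyfit(y, y_cv, 1)[0]
r2 = np.corrcoef(y, y_cv)[0, 1] ** 2
print(f"slope={slope:.2f}, r2={r2:.2f}")
```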

  6. Wireless Networks

    OpenAIRE

    Samaka, Mohammed; Khan, Khaled M.D.

    2007-01-01

    Wireless communication is the fastest-growing field in the telecommunication industry. Wireless networks have grown significantly as an important segment of the communications industry. They have become popular networks with the potential to provide high-speed, high-quality information exchange between two or more portable devices without any wire or conductors. Wireless networks can simply be characterized as the technology that provides seamless access to information, anywhere, anyplace, an...

  7. Entrepreneurial network

    OpenAIRE

    Thoma, Antonela; Nguyen, Lien; Kupsyte, Valdone

    2014-01-01

    Network has become more and more indispensable in the entrepreneurial world. Especially in startup businesses, network is crucial for new entrepreneurs. This project looks at how entrepreneurs in different sectors use network to become successful. We chose to work with three entrepreneurs from three companies that have been operational for a few years and conducted face to face interviews with them. Through the data from the interviews, we analyzed firstly what type of entrepreneurs they are,...

  8. Klasifikasi Paket Jaringan Berbasis Analisis Statistik dan Neural Network

    Directory of Open Access Journals (Sweden)

    Harsono Harsono

    2018-01-01

    Distributed Denial-of-Service (DDoS) is a network attack technique that has increased every year in both intensity and volume. DDoS attacks remain one of the world's major Internet threats and a central problem of cyber security. The research in this paper aims to establish a new approach to network packet classification, which can serve as a basis for developing a framework for Distributed Denial-of-Service (DDoS) attack detection systems. The proposed approach combines statistical quantification of packet data with neural network methods. Based on the tests, the average classification accuracy of the neural network on network data packets is 92.99%.
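
    A hedged sketch of the kind of pipeline the abstract describes: simple statistical features computed per flow, fed to a small neural network classifier. The feature set, the synthetic data, and the scikit-learn MLP architecture are illustrative assumptions, not the paper's dataset or exact method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic flows: [packet rate, mean packet size, source-IP entropy, SYN ratio]
normal = rng.normal([100, 800, 4.0, 0.1], [30, 200, 0.5, 0.05], size=(500, 4))
attack = rng.normal([5000, 90, 1.0, 0.9], [1000, 30, 0.3, 0.05], size=(500, 4))
X = np.vstack([normal, attack])
y = np.array([0] * 500 + [1] * 500)              # 0 = benign, 1 = DDoS

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.3f}")
```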

  9. Reconstruction of stochastic temporal networks through diffusive arrival times

    Science.gov (United States)

    Li, Xun; Li, Xiang

    2017-01-01

    Temporal networks have opened a new dimension in the definition and quantification of complex interacting systems. Our ability to identify and reproduce time-resolved interaction patterns is, however, limited by the restricted access to empirical individual-level data. Here we propose an inverse modelling method based on first-arrival observations of the diffusion process taking place on temporal networks. We describe an efficient coordinate-ascent implementation for inferring stochastic temporal networks that builds in particular but not exclusively on the null model assumption of mutually independent interaction sequences at the dyadic level. The results of benchmark tests applied to both synthesized and empirical network data sets confirm the validity of our algorithm, showing the feasibility of statistically accurate inference of temporal networks only from moderate-sized samples of diffusion cascades. Our approach provides an effective and flexible scheme for the temporally augmented inverse problems of network reconstruction and has potential in a broad variety of applications. PMID:28604687

  10. Monitoring and control requirement definition study for Dispersed Storage and Generation (DSG). Volume 4, appendix C: Identification from utility visits of present and future approaches to integration of DSG into distribution networks

    Science.gov (United States)

    1980-01-01

    Visits to four utilities concerned with the use of DSG power sources on their distribution networks yielded useful impressions of present and future approaches to the integration of DSGs into electrical distribution network. Different approaches to future utility systems with DSG are beginning to take shape. The new DSG sources will be in decentralized locations with some measure of centralized control. The utilities have yet to establish firmly the communication and control means or their organization. For the present, the means for integrating the DSGs and their associated monitoring and control equipment into a unified system have not been decided.

  11. Network security

    CERN Document Server

    Perez, André

    2014-01-01

    This book introduces the security mechanisms deployed in Ethernet, Wireless-Fidelity (Wi-Fi), Internet Protocol (IP) and MultiProtocol Label Switching (MPLS) networks. These mechanisms are grouped throughout the book according to the following four functions: data protection, access control, network isolation, and data monitoring. Data protection is supplied by data confidentiality and integrity control services. Access control is provided by a third-party authentication service. Network isolation is supplied by the Virtual Private Network (VPN) service. Data monitoring consists of applying

  12. Networking Japan

    DEFF Research Database (Denmark)

    Hansen, Annette Skovsted

    HIDA). Many of these alumni have and will in the future exchange ideas and keep contact not only to Japan, but also to fellow alumni around the globe and, thereby, practice south-south exchanges, which are made possible and traceable by their established alumni network and the World Network of Friends...... (WNF). Through the alumni network, Japan continues to infuse ideas to participants and alumni, who interpret and disseminate these ideas through alumni society networks and activities, but their discussions nationally and regionally also get reported back to Japan and affect future policies...

  13. Technical Network

    CERN Multimedia

    2007-01-01

    In order to optimize the management of the Technical Network (TN), to ease the understanding and purpose of devices connected to the TN, and to improve security incident handling, the Technical Network Administrators and the CNIC WG have asked IT/CS to verify the "description" and "tag" fields of devices connected to the TN. Therefore, persons responsible for systems connected to the TN will receive email notifications from IT/CS asking them to add the corresponding information in the network database. Thank you very much for your cooperation. The Technical Network Administrators & the CNIC WG

  14. Chip-Oriented Fluorimeter Design and Detection System Development for DNA Quantification in Nano-Liter Volumes

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2009-12-01

    The chip-based polymerase chain reaction (PCR) system has been developed in recent years to achieve DNA quantification. Using a microstructure and miniature chip, the volume consumption for a PCR can be reduced to a nano-liter. With high speed cycling and a low reaction volume, the time consumption of one PCR cycle performed on a chip can be reduced. However, most of the presented prototypes employ commercial fluorimeters which are not optimized for fluorescence detection of such a small quantity sample. This limits the performance of DNA quantification, in particular causing low experimental reproducibility. This study discusses the concept of a chip-oriented fluorimeter design. Using the analytical model, the current study analyzes the sensitivity and dynamic range of the fluorimeter to fit the requirements for detecting fluorescence in nano-liter volumes. Through the optimized processes, a real-time PCR on a chip system with only one nano-liter volume test sample is as sensitive as the commercial real-time PCR machine using the sample with twenty micro-liter volumes. The signal to noise (S/N) ratio of a chip system for DNA quantification with hepatitis B virus (HBV) plasmid samples is 3 dB higher. DNA quantification by the miniature chip shows higher reproducibility compared to the commercial machine with respect to samples of initial concentrations from 10^3 to 10^5 copies per reaction.

  15. Chip-oriented fluorimeter design and detection system development for DNA quantification in nano-liter volumes.

    Science.gov (United States)

    Lee, Da-Sheng; Chen, Ming-Hui

    2010-01-01

    The chip-based polymerase chain reaction (PCR) system has been developed in recent years to achieve DNA quantification. Using a microstructure and miniature chip, the volume consumption for a PCR can be reduced to a nano-liter. With high speed cycling and a low reaction volume, the time consumption of one PCR cycle performed on a chip can be reduced. However, most of the presented prototypes employ commercial fluorimeters which are not optimized for fluorescence detection of such a small quantity sample. This limits the performance of DNA quantification, in particular causing low experimental reproducibility. This study discusses the concept of a chip-oriented fluorimeter design. Using the analytical model, the current study analyzes the sensitivity and dynamic range of the fluorimeter to fit the requirements for detecting fluorescence in nano-liter volumes. Through the optimized processes, a real-time PCR on a chip system with only one nano-liter volume test sample is as sensitive as the commercial real-time PCR machine using the sample with twenty micro-liter volumes. The signal to noise (S/N) ratio of a chip system for DNA quantification with hepatitis B virus (HBV) plasmid samples is 3 dB higher. DNA quantification by the miniature chip shows higher reproducibility compared to the commercial machine with respect to samples of initial concentrations from 10^3 to 10^5 copies per reaction.

  16. Pure hydroxyapatite phantoms for the calibration of in vivo X-ray fluorescence systems of bone lead and strontium quantification.

    Science.gov (United States)

    Da Silva, Eric; Kirkham, Brian; Heyd, Darrick V; Pejović-Milić, Ana

    2013-10-01

    Plaster of Paris [poP, CaSO4·½H2O] is the standard phantom material used for the calibration of in vivo X-ray fluorescence (IVXRF)-based systems of bone metal quantification (i.e. bone strontium and lead). Calibration of IVXRF systems of bone metal quantification employs a coherent normalization procedure, which requires the application of a coherent correction factor (CCF) to the data, calculated as the ratio of the relativistic form factors of the phantom material and bone mineral. Various issues have been raised as to the suitability of poP for the calibration of IVXRF systems of bone metal quantification, including its chemical purity and its chemical difference from bone mineral (a calcium phosphate). This work describes the preparation of a chemically pure hydroxyapatite phantom material, of known composition and stoichiometry, proposed for calibrating IVXRF systems of bone strontium and lead quantification as a replacement for poP. The issue of contamination by the analyte was resolved by preparing pure Ca(OH)2 by hydroxide precipitation, which was found to bring strontium and lead levels down sufficiently. The crystal structure of the material was found to be similar to that of the bone mineral component of NIST SRM 1486 (bone meal), as determined by powder X-ray diffraction.

  17. Atomic force microscopy applied to the quantification of nano-precipitates in thermo-mechanically treated microalloyed steels

    Energy Technology Data Exchange (ETDEWEB)

    Renteria-Borja, Luciano [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Hurtado-Delgado, Eduardo, E-mail: hurtado@itmorelia.edu.mx [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Garnica-Gonzalez, Pedro [Instituto Tecnologico de Morelia, Av. Tecnologico No. 1500, Lomas de Santiaguito, 58120 Morelia (Mexico); Dominguez-Lopez, Ivan; Garcia-Garcia, Adrian Luis [Centro de Investigacion en Ciencia Aplicada y Tecnologia Avanzada-IPN Unidad Queretaro, Cerro Blanco No. 141, Colinas del Cimatario, 76090 Queretaro (Mexico)

    2012-07-15

    Quantification of nanometer-size precipitates in microalloyed steels has been traditionally performed using transmission electron microscopy (TEM), in spite of its complicated sample preparation procedures, prone to preparation errors and sample perturbation. In contrast to TEM procedures, atomic force microscopy (AFM) is performed on the as-prepared specimen, with sample preparation requirements similar to those for optical microscopy (OM), rendering three-dimensional representations of the sample surface with vertical resolution of a fraction of a nanometer. In AFM, contrast mechanisms are directly related to surface properties such as topography, adhesion, and stiffness, among others. Chemical etching was performed using 0.5% nital, at time intervals between 4 and 20 s, in 4 s steps, until reaching the desired surface finish. For the present application, an average surface-roughness peak-height below 200 nm was sought. Quantification results of nanometric precipitates were obtained from the statistical analysis of AFM images of the microstructure developed by microalloyed Nb and V-Mo steels. Topography and phase contrast AFM images were used for quantification. The results obtained using AFM are consistent with similar TEM reports. - Highlights: • We quantified nanometric precipitates in Nb and V-Mo microalloyed steels using AFM. • Microstructures of the thermo-mechanically treated microalloyed steels were used. • Topography and phase contrast AFM images were used for quantification. • AFM results are comparable with traditionally obtained TEM measurements.

  18. Artificial neural networks based controller for glucose monitoring during clamp test.

    Directory of Open Access Journals (Sweden)

    Merav Catalogna

    Insulin resistance (IR) is one of the most widespread health problems in modern times. The gold standard for quantification of IR is the hyperinsulinemic-euglycemic glucose clamp technique. During the test, a regulated glucose infusion is delivered intravenously to maintain a constant blood glucose concentration. Current control algorithms for regulating this glucose infusion are based on feedback control. These models require frequent blood sampling and can only partly capture the complexity associated with regulation of glucose. Here we present an improved clamp control algorithm which is motivated by the stochastic nature of glucose kinetics, while using the minimal number of blood samples required for evaluation of IR. A glucose pump control algorithm based on an artificial neural network model was developed. The system was trained with a database collected from 62 rat model experiments, using back-propagation Levenberg-Marquardt optimization. A genetic algorithm was used to optimize network topology and learning features. The predictive value of the proposed algorithm during the temporal period of interest was significantly improved relative to feedback control applied at an equivalently low sampling interval. Robustness-to-noise analysis demonstrates the applicability of the algorithm in realistic situations.
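
    A hedged sketch of a neural-network-based pump controller of the kind described above: recent glucose readings and the previous infusion rate are mapped to the next infusion rate. The paper uses Levenberg-Marquardt back-propagation and a genetic algorithm for topology search; this sketch substitutes scikit-learn's LBFGS-trained MLP, a fixed topology, and a toy training rule, all of which are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training log: columns = [G(t-2), G(t-1), G(t), previous infusion rate]
rng = np.random.default_rng(0)
X = rng.normal([5.5, 5.5, 5.5, 10.0], [0.8, 0.8, 0.8, 3.0], size=(1000, 4))
target_glucose = 5.5
# Toy "expert" rule used only to generate training targets: raise infusion when glucose drops
y = X[:, 3] + 4.0 * (target_glucose - X[:, 2])

model = MLPRegressor(hidden_layer_sizes=(10, 10), solver="lbfgs",
                     max_iter=5000, random_state=0)
model.fit(X, y)

recent = np.array([[5.9, 5.4, 5.1, 12.0]])   # latest glucose samples + current pump rate
print(f"suggested infusion rate: {model.predict(recent)[0]:.1f} (arbitrary units)")
```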

  19. A computer-aided detection system for rheumatoid arthritis MRI data interpretation and quantification of synovial activity

    DEFF Research Database (Denmark)

    Kubassove, Olga; Boesen, Mikael; Cimmino, Marco A

    2009-01-01

    RATIONALE AND OBJECTIVE: Disease assessment and follow-up of rheumatoid arthritis (RA) patients require objective evaluation and quantification. Magnetic resonance imaging (MRI) has a large potential to supplement such information for the clinician; however, time spent on data reading and interpretation slows down development in this area. Existing scoring systems, especially of synovitis, are too rigid and insensitive to measure early treatment response and quantify inflammation. This study tested a novel automated computer system, Dynamika-RA, which incorporates efficient data processing and analysis techniques, for analysis of dynamic MRI data acquired from patients with RA...

  20. The dimensionality of ecological networks

    DEFF Research Database (Denmark)

    Eklöf, Anna; Jacob, Ute; Kopp, Jason

    2013-01-01

    How many dimensions (trait-axes) are required to predict whether two species interact? This unanswered question originated with the idea of ecological niches, and yet bears relevance today for understanding what determines network structure. Here, we analyse a set of 200 ecological networks...... the most to explaining network structure. We show that accounting for a few traits dramatically improves our understanding of the structure of ecological networks. Matching traits for resources and consumers, for example, fruit size and bill gape, are the most successful combinations. These results link...... ecologically important species attributes to large-scale community structure....

  1. Y-Source Impedance Network

    DEFF Research Database (Denmark)

    Siwakoti, Yam Prasad; Loh, Poh Chiang; Blaabjerg, Frede

    2014-01-01

    This letter introduces a new versatile Y-shaped impedance network for realizing converters that demand a very high-voltage gain, while using a small duty ratio. To achieve that, the proposed network uses a tightly coupled transformer with three windings, whose obtained gain is presently not matched...... by existing networks operated at the same duty ratio. The proposed impedance network also has more degrees of freedom for varying its gain, and hence, more design freedom for meeting requirements demanded from it. This capability has been demonstrated by mathematical derivation, and proven in experiment...

  2. Reproducing and Extending Real Testbed Evaluation of GeoNetworking Implementation in Simulated Networks

    OpenAIRE

    Tao, Ye; Tsukada, Manabu; LI, Xin; Kakiuchi, Masatoshi; Esaki, Hiroshi

    2016-01-01

    Vehicular Ad-hoc Network (VANET) is a type of Mobile Ad-hoc Network (MANET) which is specialized for vehicle communication. GeoNetworking is a new standardized network layer protocol for VANET which employs geolocation-based routing. However, conducting large-scale experiments with GeoNetworking software is extremely difficult, since it requires many extra resources such as vehicles, staff, places, terrain, etc. In this paper, we propose a method to reproduce realistic res...

  3. An Inter-Networking Mechanism with Stepwise Synchronization for Wireless Sensor Networks

    OpenAIRE

    Masayuki Murata; Naoki Wakamiya; Hiroshi Yamamoto

    2011-01-01

    To realize the ambient information society, multiple wireless networks deployed in the region and devices carried by users are required to cooperate with each other. Since duty cycles and operational frequencies are different among networks, we need a mechanism to allow networks to efficiently exchange messages. For this purpose, we propose a novel inter-networking mechanism where two networks are synchronized with each other in a moderate manner, which we call stepwise synchronization. With ...

  4. Network Slicing Based 5G and Future Mobile Networks: Mobility, Resource Management, and Challenges

    OpenAIRE

    Zhang, H.; Liu, N.; Chu, X; Long, K.; Aghvami, A.; Leung, V. C. M.

    2017-01-01

    The fifth-generation (5G) networks are expected to be able to satisfy users' different quality-of-service (QoS) requirements. Network slicing is a promising technology for 5G networks to provide services tailored for users' specific QoS demands. Driven by the increased massive wireless data traffic from different application scenarios, efficient resource allocation schemes should be exploited to improve the flexibility of network resource allocation and capacity of 5G networks based on networ...

  5. Research of ad hoc network based on SINCGARS network

    Science.gov (United States)

    Nie, Hao; Cai, Xiaoxia; Chen, Hong; Chen, Jian; Weng, Pengfei

    2016-03-01

    With the rapid progress of science and technology, society has entered the era of networked information technology. Only through the comprehensive use of electronic warfare and network warfare means can forces maximize their access to information and maintain information superiority. Designing and building a tactical ad hoc network (tactical internet) around the specific combat mission, operational requirements, and actual military conditions will greatly improve the operational efficiency of army command. By studying the U.S. military SINCGARS network, we explore its routing protocols and mobility model to provide a reference for the research of our own army networks.

  6. DEVELOPMENT OF A SECOND TYPE ELECTRODE BASED ON THE SILVER/SILVER IBUPROFENATE PAIR FOR IBUPROFEN QUANTIFICATION IN PHARMACEUTICAL SAMPLES

    OpenAIRE

    Selene I. Rivera-Hernández; Giaan A. Álvarez-Romero; Silvia Corona-Avendaño; M. Elena Páez-Hernández; Carlos A Galán-Vidal; Mario Romero-Romo; María Teresa Ramírez-Silva

    2017-01-01

    Ibuprofen is a widely used pharmaceutical because of its therapeutic properties; it is considered a safe medicament and thus does not require a medical prescription to be sold. However, in order to ensure consumers' health it is indispensable that the pharmaceutical industry relies on analytical methods for its quantification. Potentiometry has proven to be a successful technique using electrodes of the second kind, which, in agreement with Nernst's equation, can detect anion activity. On consideration o...
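
    For context, the Nernstian response invoked above for an anion-sensitive electrode of the second kind can be written in the usual textbook form below. This is an illustrative relation, not taken from the paper; in particular, writing E^{0'} as a formal potential that lumps the standard potential and the solubility-product term is an assumption.

```latex
% Potential of a second-kind Ag/Ag(ibuprofenate) electrode responding to the
% ibuprofenate anion activity a_{Ibu^-}; E^{0'} is a lumped formal potential.
E = E^{0'} - \frac{RT}{F}\ln a_{\mathrm{Ibu^-}}
  \;\approx\; E^{0'} - 0.0592\,\log_{10} a_{\mathrm{Ibu^-}} \quad \text{(at 25 °C)}
```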

  7. Online fuzzy voltage collapse risk quantification

    Energy Technology Data Exchange (ETDEWEB)

    Berizzi, A.; Bovo, C.; Delfanti, M.; Merlo, M. [Politecnico di Milano, 20133 Milano (Italy); Cirio, D. [CESI Ricerca (Italy); Pozzi, M. [CESI (Italy)

    2009-05-15

    Many voltage stability indicators have been proposed in the past for the voltage collapse assessment. Almost all of them are determined through quite complex analytical tools; therefore, it is difficult for system operators to give them a physical meaning. In order to perform a simple and reliable evaluation of the security margins, it is necessary to make a synthesis of the information given by the various indices. The present work proposes an Artificial Intelligence-based tool for the evaluation of the voltage security. In particular, a Fuzzy Inference Engine is developed and optimized by two different approaches (Neural Networks and Genetic Algorithms). Starting from the state estimation, a given set of mathematical indices is computed to represent a snapshot of the current electric system operating point. The numerical values are then translated into a set of symbolic and linguistic quantities that are manipulated through a set of logical connectives and Inference Methods provided by the mathematical logic. As a result, the Fuzzy Logic gives a MW measure of the distance from the collapse limit, a metric usually appreciated by system operators. The Fuzzy System has been built and optimized by using, as a test system, a detailed model of the EHV Italian transmission network connected to an equivalent of the UCTE network (about 1700 buses). (author)

  8. Quantification and Propagation of Nuclear Data Uncertainties

    Science.gov (United States)

    Rising, Michael E.

    The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and its impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Using principal component analysis on the PFNS covariance matrices results in needing only 2-3 principal components to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
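
    A hedged sketch of the principal component step mentioned above: eigendecomposition of a covariance matrix and a count of how many components retain most of the variance. The synthetic covariance matrix and the 99% retention threshold are assumptions standing in for the ENDF/B-VII.1 PFNS covariances.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic low-rank covariance over 60 outgoing-energy groups (illustrative only)
L = rng.standard_normal((60, 3)) @ np.diag([1.0, 0.3, 0.05])
cov = L @ L.T + 1e-6 * np.eye(60)

eigvals = np.linalg.eigh(cov)[0][::-1]                  # eigenvalues, descending
explained = np.cumsum(eigvals) / np.sum(eigvals)        # cumulative variance fraction
n_keep = int(np.searchsorted(explained, 0.99) + 1)
print(f"components needed for 99% of variance: {n_keep}")
```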

  9. Quantification of isotopic turnover in agricultural systems

    Science.gov (United States)

    Braun, A.; Auerswald, K.; Schnyder, H.

    2012-04-01

    The isotopic turnover, which is a proxy for the metabolic rate, is gaining scientific importance. It is quantified for an increasing range of organisms, from microorganisms over plants to animals including agricultural livestock. Additionally, the isotopic turnover is analyzed on different scales, from organs to organisms to ecosystems and even to the biosphere. In particular, the quantification of the isotopic turnover of specific tissues within the same organism, e.g. organs like liver and muscle and products like milk and faeces, has brought new insights to improve understanding of nutrient cycles and fluxes, respectively. Thus, the knowledge of isotopic turnover is important in many areas, including physiology, e.g. milk synthesis, ecology, e.g. soil retention time of water, and medical science, e.g. cancer diagnosis. So far, the isotopic turnover is quantified by applying time, cost and expertise intensive tracer experiments. Usually, this comprises two isotopic equilibration periods. A first equilibration period with a constant isotopic input signal is followed by a second equilibration period with a distinct constant isotopic input signal. This yields a smooth signal change from the first to the second signal in the object under consideration. This approach reveals at least three major problems. (i) The input signals must be controlled isotopically, which is almost impossible in many realistic cases like free ranging animals. (ii) Both equilibration periods may be very long, especially when the turnover rate of the object under consideration is very slow, which aggravates the first problem. (iii) The detection of small or slow pools is improved by large isotopic signal changes, but large isotopic changes also involve a considerable change in the input material; e.g. animal studies are usually carried out as diet-switch experiments, where the diet is switched between C3 and C4 plants, since C3 and C4 plants differ strongly in their isotopic signal. The

  10. Governance of occasional multi-sector networks

    NARCIS (Netherlands)

    Treurniet, W.; Logtenberg, R.; Groenewegen, P.

    2014-01-01

    Large-scale safety and security incidents typically require the coordinated effort of multiple organisations. A networked organisation is generally seen as the most appropriate structure for coordination within safety and security collaborations. Such networks generally are mixed-sector networks in

  11. Hello! Kids Network around the World.

    Science.gov (United States)

    Lynes, Kristine

    1996-01-01

    Describes Kids Network, an educational network available from the National Geographic Society that allows students in grades four through six to become part of research teams that include students from around the world. Computer hardware requirements and a list of Kids Network research questions are listed in a sidebar. (JMV)

  12. Quantification of water in hydrous ringwoodite

    Directory of Open Access Journals (Sweden)

    Sylvia-Monique eThomas

    2015-01-01

    Ringwoodite, γ-(Mg,Fe)2SiO4, in the lower 150 km of Earth's mantle transition zone (410-660 km depth) can incorporate up to 1.5-2 wt% H2O as hydroxyl defects. We present a mineral-specific IR calibration for the absolute water content in hydrous ringwoodite by combining results from Raman spectroscopy, secondary ion mass spectrometry (SIMS) and proton-proton (pp) scattering on a suite of synthetic Mg- and Fe-bearing hydrous ringwoodites. H2O concentrations in the crystals studied here range from 0.46 to 1.7 wt% H2O (absolute methods), with the maximum H2O in the same sample giving 2.5 wt% by SIMS calibration. Anchoring our spectroscopic results to absolute H-atom concentrations from pp-scattering measurements, we report frequency-dependent integrated IR-absorption coefficients for water in ringwoodite ranging from 78180 to 158880 L mol^-1 cm^-2, depending upon the frequency of the OH absorption. We further report a linear wavenumber IR calibration for H2O quantification in hydrous ringwoodite across the Mg2SiO4-Fe2SiO4 solid solution, which will lead to more accurate estimations of the water content in both laboratory-grown and naturally occurring ringwoodites. Re-evaluation of the IR spectrum for a natural hydrous ringwoodite inclusion in diamond from the study of Pearson et al. (2014) indicates the crystal contains 1.43 ± 0.27 wt% H2O, thus confirming near-maximum amounts of H2O for this sample from the transition zone.

  13. Quantification of nanowire uptake by live cells

    KAUST Repository

    Margineanu, Michael B.

    2015-05-01

    Nanostructures fabricated by different methods have become increasingly important for various applications at the cellular level. In order to understand how these nanostructures “behave” and for studying their internalization kinetics, several attempts have been made at tagging and investigating their interaction with living cells. In this study, magnetic iron nanowires with an iron oxide layer are coated with (3-Aminopropyl)triethoxysilane (APTES), and subsequently labeled with a fluorogenic pH-dependent dye pHrodo™ Red, covalently bound to the aminosilane surface. Time-lapse live imaging of human colon carcinoma HCT 116 cells interacting with the labeled iron nanowires is performed for 24 hours. As the pHrodo™ Red conjugated nanowires are non-fluorescent outside the cells but fluoresce brightly inside, internalized nanowires are distinguished from non-internalized ones and their behavior inside the cells can be tracked for the respective time length. A machine learning-based computational framework dedicated to automatic analysis of live cell imaging data, Cell Cognition, is adapted and used to classify cells with internalized and non-internalized nanowires and subsequently determine the uptake percentage by cells at different time points. An uptake of 85 % by HCT 116 cells is observed after 24 hours incubation at NW-to-cell ratios of 200. While the approach of using pHrodo™ Red for internalization studies is not novel in the literature, this study reports for the first time the utilization of a machine-learning based time-resolved automatic analysis pipeline for quantification of nanowire uptake by cells. This pipeline has also been used for comparison studies with nickel nanowires coated with APTES and labeled with pHrodo™ Red, and another cell line derived from the cervix carcinoma, HeLa. It has thus the potential to be used for studying the interaction of different types of nanostructures with potentially any live cell types.

  14. Fluorometric quantification of natural inorganic polyphosphate.

    Science.gov (United States)

    Diaz, Julia M; Ingall, Ellery D

    2010-06-15

    Polyphosphate, a linear polymer of orthophosphate, is abundant in the environment and a key component in wastewater treatment and many bioremediation processes. Despite the broad relevance of polyphosphate, current methods to quantify it possess significant disadvantages. Here, we describe a new approach for the direct quantification of inorganic polyphosphate in complex natural samples. The protocol relies on the interaction between the fluorochrome 4',6-diamidino-2-phenylindole (DAPI) and dissolved polyphosphate. With the DAPI-based approach we describe, polyphosphate can be quantified at concentrations ranging from 0.5-3 microM P in a neutral-buffered freshwater matrix with an accuracy of +/-0.03 microM P. The patterns of polyphosphate concentration versus fluorescence yielded by standards exhibit no chain length dependence across polyphosphates ranging from 15-130 phosphorus units in size. Shorter length polyphosphate molecules (e.g., polyphosphate of three and five phosphorus units in length) contribute little to no signal in this approach, as these molecules react only slightly or not at all with DAPI in the concentration range tested. The presence of salt suppresses fluorescence from intermediate polyphosphate chain lengths (e.g., 15 phosphorus units) at polyphosphate concentrations ranging from 0.5-3 microM P. For longer chain lengths (e.g., 45-130 phosphorus units), this salt interference is not evident at conductivities up to approximately 10mS/cm. Our results indicate that standard polyphosphates should be stored frozen for no longer than 10-15 days to avoid inconsistent results associated with standard degradation. We have applied the fluorometric protocol to the analysis of five well-characterized natural samples to demonstrate the use of the method.
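
    A hedged sketch of the routine calibration arithmetic implied by the protocol: fit a line to DAPI fluorescence measured on polyphosphate standards within the reported working range, then invert it for unknowns. The standard concentrations and fluorescence readings below are illustrative numbers, not the paper's data.

```python
import numpy as np

standards_uM_P = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # polyP standards (uM P)
fluorescence = np.array([120, 238, 355, 480, 596, 710])      # arbitrary fluorescence units

slope, intercept = np.polyfit(standards_uM_P, fluorescence, 1)

def polyp_concentration(signal):
    """Invert the calibration line to estimate polyP concentration in uM P."""
    return (signal - intercept) / slope

print(f"{polyp_concentration(430):.2f} uM P")
```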

  15. Optical Network Testbeds Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Joe Mambretti

    2007-06-01

    This is the summary report of the third annual Optical Networking Testbed Workshop (ONT3), which brought together leading members of the international advanced research community to address major challenges in creating next generation communication services and technologies. Networking research and development (R&D) communities throughout the world continue to discover new methods and technologies that are enabling breakthroughs in advanced communications. These discoveries are keystones for building the foundation of the future economy, which requires the sophisticated management of extremely large quantities of digital information through high performance communications. This innovation is made possible by basic research and experiments within laboratories and on specialized testbeds. Initial network research and development initiatives are driven by diverse motives, including attempts to solve existing complex problems, the desire to create powerful new technologies that do not exist using traditional methods, and the need to create tools to address specific challenges, including those mandated by large scale science or government agency mission agendas. Many new discoveries related to communications technologies transition to wide-spread deployment through standards organizations and commercialization. These transition paths allow for new communications capabilities that drive many sectors of the digital economy. In the last few years, networking R&D has increasingly focused on advancing multiple new capabilities enabled by next generation optical networking. Both US Federal networking R&D and other national R&D initiatives, such as those organized by the National Institute of Information and Communications Technology (NICT) of Japan, are creating optical networking technologies that allow for new, powerful communication services. Among the most promising services are those based on new types of multi-service or hybrid networks, which use new optical networking

  16. Heterodox networks

    DEFF Research Database (Denmark)

    Lala, Purnima; Kumar, Ambuj

    2016-01-01

    It is imperative for the service providers to bring innovation in the network design to meet the exponential growth of mobile subscribers for multi-technology future wireless networks. As a matter of research, studies on providing services to moving subscriber groups aka ‘Place Time Capacity (PTC...

  17. Sensor networks

    NARCIS (Netherlands)

    Chatterjea, Supriyo; Thurston, J.; Kininmonth, S.; Havinga, Paul J.M.

    2006-01-01

    This article describes the details of a sensor network that is currently being deployed at the Great Barrier Reef in Australia. The sensor network allows scientists to retrieve sensor data that has a high spatial and temporal resolution. We give an overview of the energy-efficient data aggregation

  18. Network Protocols

    NARCIS (Netherlands)

    Tanenbaum, A.S.

    1981-01-01

    During the last ten years, many computer networks have been designed, implemented, and put into service in the United States, Canada, Europe, Japan, and elsewhere. From the experience obtained with these networks, certain key design principles have begun to emerge, principles that can be used to

  19. Probabilistic Networks

    DEFF Research Database (Denmark)

    Jensen, Finn Verner; Lauritzen, Steffen Lilholt

    2001-01-01

    This article describes the basic ideas and algorithms behind specification and inference in probabilistic networks based on directed acyclic graphs, undirected graphs, and chain graphs.

  20. Organizational Networks

    DEFF Research Database (Denmark)

    Sørensen, Ole Henning; Grande, Bård

    1996-01-01

    The paper focuses on the concept of organizational networks. Four different uses of the concept are identified and critically discussed.

  1. Affective Networks

    OpenAIRE

    Jodi Dean

    2010-01-01

    This article sets out the idea of affective networks as a constitutive feature of communicative capitalism. It explores the circulation of intensities in contemporary information and communication networks, arguing that this circulation should be theorized in terms of the psychoanalytic notion of the drive. The article includes critical engagements with theorists such as Guy Debord, Jacques Lacan, Tiziana Terranova, and Slavoj Zizek.

  2. Nano electrode arrays for in-situ identification and quantification of chemicals in water.

    Energy Technology Data Exchange (ETDEWEB)

    Gurule, Natalia J.; Kelly, Michael James; Brevnov, Dmitri A. (University of New Mexico, Albuquerque, NM); Ashby, Carol Iris Hill; Pfeifer, Kent Bryant; Yelton, William Graham

    2004-12-01

    Work on nano electrode arrays for in-situ identification and quantification of chemicals in water progressed in four major directions. (1) We developed and engineered three nanoelectrode array designs which operate in a portable field mode or as a distributed sensor network for water systems. (2) To replace the fragile glass electrochemical cells used in the lab, we designed and engineered field-ready sampling heads that combine the nanoelectrode arrays with a high-speed potentiostat. (3) To utilize these arrays in a portable system we designed and engineered a lightweight high-speed potentiostat with pulse widths from 2 psec to 100 msec or greater. (4) Finally, we developed the parameters for an analytical method in low-conductivity solutions for Pb(II) detection, with initial studies of As(III) and As(V) analysis in natural water sources.

  3. Modeling and Uncertainty Quantification of Vapor Sorption and Diffusion in Heterogeneous Polymers

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Yunwei [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Harley, Stephen J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glascoe, Elizabeth A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-13

    A high-fidelity model of kinetic and equilibrium sorption and diffusion is developed and exercised. The gas-diffusion model is coupled with a triple-sorption mechanism: Henry’s law absorption, Langmuir adsorption, and pooling or clustering of molecules at higher partial pressures. Sorption experiments are conducted and span a range of relative humidities (0-95 %) and temperatures (30-60 °C). Kinetic and equilibrium sorption properties and effective diffusivity are determined by minimizing the absolute difference between measured and modeled uptakes. Uncertainty quantification and sensitivity analysis methods are described and exercised herein to demonstrate the capability of this modeling approach. Water uptake in silica-filled and unfilled poly(dimethylsiloxane) networks is investigated; however, the model is versatile enough to be used with a wide range of materials and vapors.
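
    A hedged sketch of a triple-mode sorption isotherm of the kind described (Henry's law + Langmuir + a pooling/clustering term), fit to uptake data by least squares. The power-law form of the pooling term, the parameter names, and the synthetic data are assumptions; the report additionally couples the isotherm to kinetics and diffusion, which is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def triple_sorption(p, kH, CL, b, kp, n):
    """Equilibrium uptake vs. water activity p in [0, 1)."""
    henry = kH * p                           # Henry's law absorption
    langmuir = CL * b * p / (1.0 + b * p)    # Langmuir adsorption on fixed sites
    pooling = kp * p**n                      # simple clustering/pooling term
    return henry + langmuir + pooling

# Synthetic sorption data (uptake vs. water activity), illustrative only
a = np.linspace(0.05, 0.95, 15)
uptake = triple_sorption(a, 2.0, 1.5, 8.0, 3.0, 4.0) \
         + np.random.default_rng(0).normal(0, 0.05, a.size)

popt, _ = curve_fit(triple_sorption, a, uptake,
                    p0=[1.0, 1.0, 5.0, 1.0, 3.0], maxfev=10000)
print(popt)   # recovered [kH, CL, b, kp, n]
```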

  4. A machine learning approach for efficient uncertainty quantification using multiscale methods

    Science.gov (United States)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
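
    A hedged sketch of the data-driven idea described above: a small neural network is fitted to map local permeability patches to coarse-scale basis-function values, so new realizations can reuse the network instead of solving each local problem. The mock data generation (standing in for the exact local solves), the patch size, and the network architecture are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, patch, n_nodes = 2000, 5 * 5, 25    # 5x5 permeability patch -> basis on 25 nodes

X = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, patch))   # permeability patches
W = rng.standard_normal((patch, n_nodes)) / patch
Y = np.tanh(np.log(X) @ W)      # mock stand-in for the local-problem solutions

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X[:1500], Y[:1500])                   # train on "solved" realizations
print(f"held-out R^2: {model.score(X[1500:], Y[1500:]):.3f}")   # reuse on new realizations
```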

  5. Ct shift: A novel and accurate real-time PCR quantification model for direct comparison of different nucleic acid sequences and its application for transposon quantifications.

    Science.gov (United States)

    Kolacsek, Orsolya; Pergel, Enikő; Varga, Nóra; Apáti, Ágota; Orbán, Tamás I

    2017-01-20

    There are numerous applications of quantitative PCR for both diagnostic and basic research. As in many other techniques the basis of quantification is that comparisons are made between different (unknown and known or reference) specimens of the same entity. When the aim is to compare real quantities of different species in samples, one cannot escape their separate precise absolute quantification. We have established a simple and reliable method for this purpose (Ct shift method) which combines the absolute and the relative approach. It requires a plasmid standard containing both sequences of amplicons to be compared (e.g. the target of interest and the endogenous control). It can serve as a reference sample with equal copies of templates for both targets. Using the ΔΔCt formula we can quantify the exact ratio of the two templates in each unknown sample. The Ct shift method has been successfully applied for transposon gene copy measurements, as well as for comparison of different mRNAs in cDNA samples. This study provides the proof of concept and introduces some potential applications of the method; the absolute nature of results even without the need for real reference samples can contribute to the universality of the method and comparability of different studies. Copyright © 2016 Elsevier B.V. All rights reserved.
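
    A hedged sketch of the quantification arithmetic described above: the reference plasmid carrying both amplicons in equal copy number fixes the expected Ct offset between the two assays, and the deviation of an unknown sample from that offset (a ΔΔCt) gives the copy ratio of the two templates. The assumption of ~100% PCR efficiency (base 2) and the example Ct values are illustrative, not the authors' implementation.

```python
def template_ratio(ct_target_ref, ct_control_ref, ct_target_sample, ct_control_sample):
    """Copy ratio of target vs. control template from the Ct shift."""
    delta_ct_ref = ct_target_ref - ct_control_ref        # Ct shift on the equal-copy plasmid
    delta_ct_sample = ct_target_sample - ct_control_sample
    ddct = delta_ct_sample - delta_ct_ref
    return 2.0 ** (-ddct)                                 # target copies per control copy

# Example: the target assay runs 1.6 cycles later than expected from the reference
# plasmid, i.e. roughly 3 times fewer target copies than control copies.
print(round(template_ratio(21.0, 20.0, 24.6, 22.0), 2))
```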

  6. Methodological considerations in quantification of oncological FDG PET studies

    Energy Technology Data Exchange (ETDEWEB)

    Vriens, Dennis; Visser, Eric P.; Geus-Oei, Lioe-Fee de; Oyen, Wim J.G. [Radboud University Nijmegen Medical Centre, Department of Nuclear Medicine, Nijmegen (Netherlands)

    2010-07-15

    This review aims to provide insight into the factors that influence quantification of glucose metabolism by FDG PET images in oncology as well as their influence on repeated measures studies (i.e. treatment response assessment), offering improved understanding both for clinical practice and research. Structured PubMed searches have been performed for the many factors affecting quantification of glucose metabolism by FDG PET. Review articles and reference lists have been used to supplement the search findings. Biological factors such as fasting blood glucose level, FDG uptake period, FDG distribution and clearance, patient motion (breathing) and patient discomfort (stress) all influence quantification. Acquisition parameters should be adjusted to maximize the signal to noise ratio without exposing the patient to a higher than strictly necessary radiation dose. This is especially challenging in pharmacokinetic analysis, where the temporal resolution is of significant importance. The literature is reviewed on the influence of attenuation correction on parameters for glucose metabolism, and the effect of motion, metal artefacts and contrast agents on quantification of CT attenuation-corrected images. Reconstruction settings (analytical versus iterative reconstruction, post-reconstruction filtering and image matrix size) all potentially influence quantification due to artefacts, noise levels and lesion size dependency. Many region of interest definitions are available, but increased complexity does not necessarily result in improved performance. Different methods for the quantification of the tissue of interest can introduce systematic and random inaccuracy. This review provides an up-to-date overview of the many factors that influence quantification of glucose metabolism by FDG PET. (orig.)

  7. Statistical quantification of methylation levels by next-generation sequencing.

    Directory of Open Access Journals (Sweden)

    Guodong Wu

    Recently, next-generation sequencing-based technologies have enabled DNA methylation profiling at high resolution and low cost. Methyl-Seq and Reduced Representation Bisulfite Sequencing (RRBS) are two such technologies that interrogate methylation levels at CpG sites throughout the entire human genome. With rapid reduction of sequencing costs, these technologies will enable epigenotyping of large cohorts for phenotypic association studies. Existing quantification methods for sequencing-based methylation profiling are simplistic and do not deal with the noise due to the random sampling nature of sequencing and various experimental artifacts. Therefore, there is a need to investigate the statistical issues related to the quantification of methylation levels for these emerging technologies, with the goal of developing an accurate quantification method. In this paper, we propose two methods for Methyl-Seq quantification. The first method, the Maximum Likelihood estimate, is both conceptually intuitive and computationally simple. However, this estimate is biased at extreme methylation levels and does not provide variance estimation. The second method, based on a Bayesian hierarchical model, allows variance estimation of methylation levels, and provides a flexible framework to adjust technical bias in the sequencing process. We compare the previously proposed binary method, the Maximum Likelihood (ML) method, and the Bayesian method. In both simulation and real data analysis of Methyl-Seq data, the Bayesian method offers the most accurate quantification. The ML method is slightly less accurate than the Bayesian method. But both our proposed methods outperform the original binary method in Methyl-Seq. In addition, we applied these quantification methods to simulation data and show that, with sequencing depth above 40-300 per cleavage site (which varies with different tissue samples), Methyl-Seq offers quantification consistency comparable to microarrays.
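
    A hedged sketch of the simplest estimators discussed above: the maximum likelihood methylation level at a site is the fraction of methylated reads, and a Beta prior yields a basic Bayesian alternative with a variance estimate. This toy version with a flat Beta(1,1) prior is an assumption and does not reproduce the paper's hierarchical model or its bias adjustments.

```python
import numpy as np

def ml_methylation(methylated, total):
    """Binomial maximum likelihood estimate (biased at extreme levels when coverage is low)."""
    return methylated / total

def bayes_methylation(methylated, total, alpha=1.0, beta=1.0):
    """Posterior mean and standard deviation under a Beta(alpha, beta) prior."""
    a, b = alpha + methylated, beta + (total - methylated)
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, np.sqrt(var)

print(ml_methylation(18, 20))        # 0.9
print(bayes_methylation(18, 20))     # ~ (0.86, 0.07)
```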

  8. Network chemistry, network toxicology, network informatics, and network behavioristics: A scientific outline

    OpenAIRE

    WenJun Zhang

    2016-01-01

    In present study, I proposed some new sciences: network chemistry, network toxicology, network informatics, and network behavioristics. The aims, scope and scientific foundation of these sciences are outlined.

  9. Automated quantification of epicardial adipose tissue using CT angiography: evaluation of a prototype software

    Energy Technology Data Exchange (ETDEWEB)

    Spearman, James V.; Silverman, Justin R.; Krazinski, Aleksander W.; Costello, Philip [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Meinel, Felix G.; Geyer, Lucas L. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Ludwig-Maximilians-University Hospital, Institute for Clinical Radiology, Munich (Germany); Schoepf, U.J. [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Apfaltrer, Paul [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University Medical Center Mannheim, Medical Faculty Mannheim - Heidelberg University, Institute of Clinical Radiology and Nuclear Medicine, Mannheim (Germany); Canstein, Christian [Siemens Medical Solutions USA, Inc., Malvern, PA (United States); De Cecco, Carlo Nicola [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome ' ' Sapienza' ' - Polo Pontino, Department of Radiological Sciences, Oncology and Pathology, Latina (Italy)

    2014-02-15

    This study evaluated the performance of a novel automated software tool for epicardial fat volume (EFV) quantification compared to a standard manual technique at coronary CT angiography (cCTA). cCTA data sets of 70 patients (58.6 ± 12.9 years, 33 men) were retrospectively analysed using two different post-processing software applications. Observer 1 performed a manual single-plane pericardial border definition and EFV_M segmentation (manual approach). Two observers used a software program with fully automated 3D pericardial border definition and EFV_A calculation (automated approach). EFV and the time required for measuring EFV (including software processing time and manual optimization time) were recorded for each method. Intraobserver and interobserver reliability were assessed for the prototype software measurements. The t test, Spearman's rho, and Bland-Altman plots were used for statistical analysis. The final EFV_A (with manual border optimization) was strongly correlated with the manual axial segmentation measurement (60.9 ± 33.2 mL vs. 65.8 ± 37.0 mL, rho = 0.970, P < 0.001). A mean of 3.9 ± 1.9 manual border edits was performed to optimize the automated process. The software prototype required significantly less time to perform the measurements (135.6 ± 24.6 s vs. 314.3 ± 76.3 s, P < 0.001) and showed high reliability (ICC > 0.9). Automated EFV_A quantification is an accurate and time-saving method for quantification of EFV compared to established manual axial segmentation methods. (orig.)
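
    A minimal sketch of the agreement statistics this record reports (Spearman's rho and Bland-Altman limits of agreement), using small made-up EFV arrays purely for illustration; the real analysis used 70 paired measurements.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical paired measurements (mL): automated EFV_A vs. manual EFV_M.
    efv_auto = np.array([55.0, 62.3, 48.1, 90.4, 33.7])
    efv_manual = np.array([58.2, 66.0, 50.9, 97.1, 36.5])

    # Spearman's rho, as used in the study to correlate the two methods.
    rho, p_value = stats.spearmanr(efv_auto, efv_manual)

    # Bland-Altman statistics: bias and 95% limits of agreement.
    diff = efv_auto - efv_manual
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

    print(f"rho={rho:.3f} (p={p_value:.3f}), bias={bias:.1f} mL, LoA={loa}")
    ```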

  10. Network Affordances

    DEFF Research Database (Denmark)

    Samson, Audrey; Soon, Winnie

    2015-01-01

    This paper examines the notion of network affordance within the context of network art. Building on Gibson's theory (Gibson, 1979) we understand affordance as the perceived and actual parameters of a thing. We expand on Gaver's affordance of predictability (Gaver, 1996) to include ecological...... and computational parameters of unpredictability. We illustrate the notion of unpredictability by considering four specific works that were included in a network art exhibition, SPEED SHOW [2.0] Hong Kong. The paper discusses how the artworks are contingent upon the parametric relations (Parisi, 2013......), of the network. We introduce network affordance as a dynamic framework that could articulate the experienced tension arising from the (visible) symbolic representation of computational processes and its hidden occurrences. We base our proposal on the experience of both organising the SPEED SHOW and participating...

  11. Cross-layer design in optical networks

    CERN Document Server

    Brandt-Pearce, Maïté; Demeester, Piet; Saradhi, Chava

    2013-01-01

    Optical networks have become an integral part of the communications infrastructure needed to support society’s demand for high-speed connectivity.  Cross-Layer Design in Optical Networks addresses topics in optical network design and analysis with a focus on physical-layer impairment awareness and network layer service requirements, essential for the implementation and management of robust scalable networks.  The cross-layer treatment includes bottom-up impacts of the physical and lambda layers, such as dispersion, noise, nonlinearity, crosstalk, dense wavelength packing, and wavelength line rates, as well as top-down approaches to handle physical-layer impairments and service requirements.

  12. Robustness of networks against cascading failures

    Science.gov (United States)

    Dou, Bing-Lin; Wang, Xue-Guang; Zhang, Shi-Yong

    2010-06-01

    Inspired by related work, this paper proposes a non-linear load-capacity model against cascading failures that is more suitable for real networks. The simulation was executed on the B-A scale-free network, the E-R random network, the Internet AS-level network, and the power grid of the western United States. The results show that the model is feasible and effective. By studying the relationship between network cost and robustness, we find that the model defends against cascading failures more effectively and requires a lower investment cost when higher robustness is required.
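
    The record does not give the exact form of the paper's non-linear load-capacity rule, so the sketch below uses a generic variant (load from betweenness centrality, capacity C_i = L_i + beta * L_i^alpha) only to illustrate how such a cascade simulation is typically run; the published model may differ.

    ```python
    import networkx as nx

    def cascade_survivors(G, alpha=0.8, beta=0.5, seed_node=0):
        """Generic non-linear load-capacity cascade sketch (not the paper's exact model):
        removing seed_node redistributes load; any node whose new load exceeds its
        capacity fails in turn, until no further overloads occur."""
        load = nx.betweenness_centrality(G)
        capacity = {n: load[n] + beta * load[n] ** alpha for n in G}
        failed = {seed_node}
        while True:
            H = G.copy()
            H.remove_nodes_from(failed)
            if H.number_of_nodes() == 0:
                break
            new_load = nx.betweenness_centrality(H)
            newly_failed = {n for n in H if new_load[n] > capacity[n]}
            if not newly_failed:
                break
            failed |= newly_failed
        return G.number_of_nodes() - len(failed)

    G = nx.barabasi_albert_graph(200, 2, seed=42)   # B-A scale-free test network
    print("surviving nodes:", cascade_survivors(G))
    ```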

  13. Calibration of a Sensor Array (an Electronic Tongue) for Identification and Quantification of Odorants from Livestock Buildings

    Directory of Open Access Journals (Sweden)

    Jens Jørgen Lønsmann Iversen

    2007-01-01

    Full Text Available This contribution serves a dual purpose. The first purpose was to investigate the possibility of using a sensor array (an electronic tongue) for on-line identification and quantification of key odorants representing a variety of chemical groups at two different acidities, pH 6 and 8. The second purpose was to simplify the electronic tongue by decreasing the number of electrodes from 14, which was the number of electrodes in the prototype. Different electrodes were used for identification and quantification of different key odorants. A total of eight electrodes were sufficient for identification and quantification in micromolar concentrations of the key odorants n-butyrate, ammonium and phenolate in test mixtures also containing iso-valerate, skatole and p-cresolate. The limited number of electrodes decreased the standard deviation and the relative standard deviation of triplicate measurements in comparison with the array comprising 14 electrodes. The electronic tongue was calibrated using 4 different test mixtures, each comprising 50 different combinations of key odorants in triplicate, a total of 600 measurements. Back-propagation artificial neural networks, partial least squares and principal component analysis were used in the data analysis. The results indicate that the electronic tongue has promising potential as an on-line sensor for odorants absorbed in the bioscrubber used in livestock buildings.

  14. Calibration of a Sensor Array (an Electronic Tongue) for Identification and Quantification of Odorants from Livestock Buildings

    Science.gov (United States)

    Abu-Khalaf, Nawaf; Iversen, Jens Jørgen Lønsmann

    2007-01-01

    This contribution serves a dual purpose. The first purpose was to investigate the possibility of using a sensor array (an electronic tongue) for on-line identification and quantification of key odorants representing a variety of chemical groups at two different acidities, pH 6 and 8. The second purpose was to simplify the electronic tongue by decreasing the number of electrodes from 14, which was the number of electrodes in the prototype. Different electrodes were used for identification and quantification of different key odorants. A total of eight electrodes were sufficient for identification and quantification in micromolar concentrations of the key odorants n-butyrate, ammonium and phenolate in test mixtures also containing iso-valerate, skatole and p-cresolate. The limited number of electrodes decreased the standard deviation and the relative standard deviation of triplicate measurements in comparison with the array comprising 14 electrodes. The electronic tongue was calibrated using 4 different test mixtures, each comprising 50 different combinations of key odorants in triplicate, a total of 600 measurements. Back-propagation artificial neural networks, partial least squares and principal component analysis were used in the data analysis. The results indicate that the electronic tongue has promising potential as an online sensor for odorants absorbed in the bioscrubber used in livestock buildings.
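
    Records 13 and 14 describe calibrating electrode responses against known odorant concentrations with chemometric models such as partial least squares. The sketch below shows that general calibration idea with synthetic data; the array sizes, number of latent components, and data values are assumptions, not the study's actual calibration set.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    # Hypothetical calibration data: rows = test mixtures, columns = 8 electrode
    # responses (X) and micromolar concentrations of 3 key odorants (Y).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(150, 8))                      # electrode potentials (arbitrary units)
    true_B = rng.normal(size=(8, 3))
    Y = X @ true_B + 0.1 * rng.normal(size=(150, 3))   # n-butyrate, ammonium, phenolate

    pls = PLSRegression(n_components=4)                # latent dimension is a guess
    pls.fit(X, Y)
    print("calibration R^2:", pls.score(X, Y))
    ```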

  15. A Multidomain Survivable Virtual Network Mapping Algorithm

    Directory of Open Access Journals (Sweden)

    Xiancui Xiao

    2017-01-01

    Full Text Available Although existing networks are increasingly deployed in multidomain environments, most existing research focuses on single-domain networks, and there is no appropriate solution for the multidomain virtual network mapping problem. In fact, most studies assume that the underlying network can operate without any interruption. However, physical networks cannot guarantee uninterrupted provision of network services owing to external factors, and traditional single-domain networks have difficulty meeting user needs, especially the high security requirements of network transmission. In order to solve these problems, this paper proposes a survivable virtual network mapping algorithm (IntD-GRC-SVNE) that implements multidomain mapping in network virtualization. IntD-GRC-SVNE maps virtual communication networks onto different domain networks and provides backup resources for virtual links, which improves the survivability of the special networks. Simulation results show that IntD-GRC-SVNE not only improves the survivability of the multidomain communication network but also renders the network load more balanced and greatly improves the network acceptance rate owing to its employment of GRC (global resource capacity).
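
    In the virtual-network-embedding literature, a global-resource-capacity (GRC) ranking typically mixes a node's own resources with bandwidth-weighted scores of its neighbours in a PageRank-like iteration. The sketch below shows only that general idea, with made-up CPU and bandwidth figures; it is not the specific IntD-GRC-SVNE algorithm, whose formulation the record does not give.

    ```python
    import numpy as np

    def grc_ranking(cpu, bandwidth, d=0.85, tol=1e-6):
        """Hedged sketch of a GRC-style node ranking: score = (1-d)*normalized CPU
        + d * bandwidth-weighted neighbour scores, iterated to a fixed point."""
        cpu = np.asarray(cpu, dtype=float)
        c = cpu / cpu.sum()
        W = np.asarray(bandwidth, dtype=float)
        col_sums = W.sum(axis=0)
        col_sums[col_sums == 0] = 1.0
        M = W / col_sums                       # column-normalized bandwidth matrix
        r = np.full(len(cpu), 1.0 / len(cpu))
        while True:
            r_next = (1 - d) * c + d * M @ r
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    cpu = [10, 20, 5, 15]                                       # hypothetical node CPU
    bw = [[0, 5, 0, 2], [5, 0, 3, 0], [0, 3, 0, 4], [2, 0, 4, 0]]  # hypothetical link bandwidth
    print(grc_ranking(cpu, bw))    # higher score = preferred mapping target
    ```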

  16. OPTIMAL NETWORK TOPOLOGY DESIGN

    Science.gov (United States)

    Yuen, J. H.

    1994-01-01

    This program was developed as part of a research study on the topology design and performance analysis for the Space Station Information System (SSIS) network. It uses an efficient algorithm to generate candidate network designs (consisting of subsets of the set of all network components) in increasing order of their total costs, and checks each design to see if it forms an acceptable network. This technique gives the true cost-optimal network, and is particularly useful when the network has many constraints and not too many components. It is intended that this new design technique consider all important performance measures explicitly and take into account the constraints due to various technical feasibilities. In the current program, technical constraints are taken care of by the user properly forming the starting set of candidate components (e.g. nonfeasible links are not included). As subsets are generated, they are tested to see if they form an acceptable network by checking that all requirements are satisfied. Thus the first acceptable subset encountered gives the cost-optimal topology satisfying all given constraints. The user must sort the set of "feasible" link elements in increasing order of their costs. The program prompts the user for the following information for each link: 1) cost, 2) connectivity (number of stations connected by the link), and 3) the stations connected by that link. Unless instructed to stop, the program generates all possible acceptable networks in increasing order of their total costs. The program is written only to generate topologies that are simply connected. Tests on reliability, delay, and other performance measures are discussed in the documentation, but have not been incorporated into the program. This program is written in PASCAL for interactive execution and has been implemented on an IBM PC series computer operating under PC DOS. The disk contains source code only. This program was developed in 1985.
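
    The enumeration idea described above can be illustrated with a short brute-force sketch. This is an illustration only, not a translation of the original PASCAL program (which generates candidates incrementally rather than materializing and sorting them all): links are sorted by cost, candidate subsets are examined in order of increasing total cost, and the first subset that connects all stations is the cost-optimal simply-connected topology.

    ```python
    from itertools import combinations

    def cheapest_connected_topology(stations, links):
        """links: list of (cost, station_a, station_b). Returns the lowest-cost
        subset of links that connects all stations, examined in increasing total cost."""
        links = sorted(links)                        # sorted by cost, mirroring the program's input requirement
        candidates = []
        for k in range(1, len(links) + 1):
            for subset in combinations(links, k):
                candidates.append((sum(c for c, _, _ in subset), subset))
        candidates.sort(key=lambda x: x[0])          # increasing total cost
        for total_cost, subset in candidates:
            parent = {s: s for s in stations}        # union-find connectivity check
            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x
            for _, a, b in subset:
                parent[find(a)] = find(b)
            if len({find(s) for s in stations}) == 1:
                return total_cost, subset            # first acceptable subset is cost-optimal
        return None

    links = [(1, "A", "B"), (2, "B", "C"), (4, "A", "C"), (3, "C", "D")]
    print(cheapest_connected_topology(["A", "B", "C", "D"], links))
    ```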

  17. Quantification of submarine/intertidal groundwater discharge and nutrient loading from a lowland karst catchment

    Science.gov (United States)

    McCormack, T.; Gill, L. W.; Naughton, O.; Johnston, P. M.

    2014-11-01

    Submarine groundwater discharge (SGD) is now recognised to be a process of significant importance to coastal systems and is of increasing interest within oceanographic and hydrologic research communities. However, due to the inherent difficulty of measuring SGD accurately, its quantification at any particular location is a relatively slow process, often involving multiple labour-intensive methods. In this paper, the SGD occurring at Kinvara Bay, the outlet of a lowland karst catchment in Western Ireland, is estimated using a hydrological model of the karst aquifer and then further verified by means of a relatively simple salinity survey. Discharge at Kinvara predominantly occurs via two springs: Kinvara West (KW), which serves as the outlet of a major, primarily allogenically fed karst conduit network, and Kinvara East (KE), which discharges water from more diffuse/autogenic sources. Discharge from these springs occurs intertidally and, as such, their flow rates cannot be measured using traditional methods. Using the hydrological model, flow rates from KW were found to vary between 5 and 16 m3/s with a mean value of 8.7 m3/s. Through hydrochemical analysis, this estimated discharge was found to be supplemented by an additional 14-18% via sources not accounted for by the model. Mean discharge at KE was also estimated as approximately 2 m3/s; thus the total mean discharge from both Kinvara springs was determined to be 11.9-12.3 m3/s. Overall, the range of discharge was found to be lower than previous studies have estimated (as these studies had no means of quantifying attenuation within the conduit network). Combining this discharge with nutrient concentrations from the springs, the nutrient loading from the springs into the bay was estimated as 1230 kg/day N and 24.3 kg/day P. This research illustrates the benefits of a numerical modelling approach to the quantification of SGD when used in the appropriate hydrological scenario.
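
    As a quick check of the loading figures, a daily nutrient load is discharge times concentration with a unit conversion (mg/L equals g/m3, so Q x C gives g/s, times 86,400 s/day, divided by 1,000 g/kg). The concentrations in the sketch below are back-calculated, illustrative values only; they are not reported in this record.

    ```python
    def daily_load_kg(discharge_m3_per_s: float, concentration_mg_per_l: float) -> float:
        """Nutrient load (kg/day): Q (m3/s) * C (mg/L = g/m3) gives g/s;
        multiply by 86,400 s/day and divide by 1,000 g/kg."""
        return discharge_m3_per_s * concentration_mg_per_l * 86_400 / 1_000

    # Illustrative concentrations of roughly 1.2 mg/L N and 0.023 mg/L P at ~12 m3/s
    # reproduce loads of the same order as those reported above.
    print(daily_load_kg(12.0, 1.2))    # about 1244 kg/day N
    print(daily_load_kg(12.0, 0.023))  # about 24 kg/day P
    ```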

  18. Software Defined Networking Demands on Software Technologies

    DEFF Research Database (Denmark)

    Galinac Grbac, T.; Caba, Cosmin Marius; Soler, José

    2015-01-01

    Software Defined Networking (SDN) is a networking approach based on a centralized control plane architecture with standardised interfaces between control and data planes. SDN enables fast configuration and reconfiguration of the network to enhance resource utilization and service performance. This new approach enables a more dynamic and flexible network, which may adapt to user needs and application requirements. To this end, systemized solutions must be implemented in network software, aiming to provide secure network services that meet the required service performance levels. In this paper, we review this new approach to networking from an architectural point of view, and identify and discuss some critical quality issues that require new developments in software technologies. These issues are discussed along with use case scenarios, with the aim of identifying challenges...

  19. Quantification of Forces During a Neurosurgical Procedure: A Pilot Study.

    Science.gov (United States)

    Gan, Liu Shi; Zareinia, Kourosh; Lama, Sanju; Maddahi, Yaser; Yang, Fang Wei; Sutherland, Garnette R

    2015-08-01

    Knowledge of tool-tissue interaction is mostly taught and learned in a qualitative manner because a means to quantify the technical aspects of neurosurgery is currently lacking. Neurosurgeons typically require years of hands-on experience, together with much initial trial and error, to master the optimal forces needed during the performance of neurosurgical tasks. The aim of this pilot study was to develop a novel force-sensing bipolar forceps for neurosurgery and obtain preliminary data on specific tasks performed on cadaveric brains. A novel force-sensing bipolar forceps capable of measuring coagulation and dissection forces was designed and developed by installing strain gauges along the length of the forceps prongs. The forceps was used in 3 cadaveric brain experiments, and the forces applied by an experienced neurosurgeon for 10 surgical tasks across the 3 experiments were quantified. Maximal peak (effective) forces of 1.35 N and 1.16 N were observed for dissection (opening) and coagulation (closing) tasks, respectively. More than 70% of the forces applied during the neurosurgical tasks were less than 0.3 N. Mean peak forces ranged between 0.10 N and 0.41 N for coagulation of scalp vessels and pia-arachnoid, respectively, and varied from 0.16 N for dissection of a small cortical vessel to 0.65 N for dissection of the optic chiasm. The force-sensing bipolar forceps successfully measured and recorded real-time tool-tissue interaction throughout the 3 experiments. This pilot study serves as a first step toward quantification of tool-tissue interaction forces in neurosurgery for training and improvement of instrument handling skills. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Echocardiographic chamber quantification in a healthy Dutch population.

    Science.gov (United States)

    van Grootel, R W J; Menting, M E; McGhie, J; Roos-Hesselink, J W; van den Bosch, A E

    2017-12-01

    For accurate interpretation of echocardiographic measurements, normative data are required, which are provided by guidelines. The hypothesis of this study was that these reference values cannot be extrapolated to the Dutch population, since higher values are often found in Dutch clinical practice, and these may be physiological rather than pathological. This study therefore aimed to 1) obtain and propose normative values for cardiac chamber quantification in a healthy Dutch population and 2) determine the influence of baseline characteristics on these measurements. Prospectively recruited healthy subjects, aged 20-72 years (at least 28 subjects per age decade, equally distributed for gender), underwent physical examination and 2D and 3D echocardiography. Both ventricles and both atria were assessed and volumes were calculated. 147 subjects were included (age 44 ± 14 years, 50% female). Overall, feasibility was good for both linear and volumetric measurements. Linear and volumetric parameters were consistently higher than current guidelines recommend, whereas functional parameters were in line with the guidelines; this was more pronounced in the older population. 3D volumes were higher than 2D volumes. Gender dependency was seen in all body surface area (BSA)-indexed volumes, and ejection fractions decreased with increasing age. This study provides 2D and 3D echocardiographic reference ranges for both ventricles and atria derived from a healthy Dutch population. BSA-indexed volumes are gender-dependent, age did not influence ventricular volumes, and a rise in blood pressure was independently associated with increased right ventricular volumes. The higher volumes found may reflect the fact that the Dutch population is the tallest in the world.
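
    For illustration of the BSA indexing used in this record, the sketch below computes a body surface area and an indexed chamber volume. The Du Bois formula and the example numbers are assumptions for illustration; the record does not state which BSA formula the study used.

    ```python
    def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
        """Body surface area (m^2) via the Du Bois formula -- one common choice;
        the formula actually used in the study is not stated in the record."""
        return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

    def indexed_volume(volume_ml: float, bsa_m2: float) -> float:
        """Chamber volume indexed to body surface area (mL/m^2)."""
        return volume_ml / bsa_m2

    bsa = bsa_du_bois(183, 80)   # a tall subject, in line with the Dutch-population remark
    print(f"BSA = {bsa:.2f} m^2, indexed volume = {indexed_volume(150, bsa):.1f} mL/m^2")
    ```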