WorldWideScience

Sample records for source characterization protocol

  1. Surface characterization protocol for precision aspheric optics

    Science.gov (United States)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In Advanced Optical Instrumentation, aspherics provide an effective performance alternative. Aspheric design, fabrication, and surface metrology are complementary, iterative processes in precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers, through linear surface scans, to analyze their form, figure, and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors (generated during aspheric surface fabrication). This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles, with the aim of identifying sources of error and optimizing the metrology process. Sources of error during profilometry may include profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (made by Taylor Hobson) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization, using a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps in establishing an optimal aspheric surface production methodology.

  2. Temporary Operational Protocol for making safe and managing Orphaned or Seized Radioactive Sources

    International Nuclear Information System (INIS)

    2013-01-01

    This protocol outlines the arrangements to manage the safe interim storage of an orphaned radioactive source, or of a source identified for seizure, pending its ultimate disposal. Such sources may be sources found outside of regulatory control, detected at a frontier, or seized in the public interest. This includes a radioactive source arising from a CBRN (chemical, biological, radiological, nuclear) incident, following neutralisation of any associated dispersal device and confirmation of the suspect object as radioactive. The arrangements in this protocol are meant to be consistent with, and used in conjunction with, the relevant protocols to the Major Emergency Framework Document, and may be revisited as necessary as those protocols are further developed.

  3. Development of Characterization Protocol for Mixed Liquid Radioactive Waste Classification

    International Nuclear Information System (INIS)

    Norasalwa Zakaria; Syed Asraf Wafa; Wo, Y.M.; Sarimah Mahat; Mohamad Annuar Assadat Husain

    2017-01-01

    Mixed organic liquid waste generated from health-care and research activities, containing tritium, carbon-14, and other radionuclides, poses specific challenges in its management. Often, this waste becomes legacy waste in many nuclear facilities and is considered 'problematic' waste. One of the most important recommendations made by the IAEA is to perform multistage processes aimed at declassification of the waste. At this moment, approximately 3000 bottles of mixed liquid waste, with an estimated volume of 6000 litres, are stored at the National Radioactive Waste Management Centre, Malaysia, and some have been stored for more than 25 years. The aim of this study is to develop a characterization protocol for the reclassification of these wastes. The characterization protocol entails waste identification, waste screening and segregation, and radionuclide profiling using analytical procedures involving gross alpha/beta counting and gamma spectrometry. The results obtained from the characterization protocol are used to establish criteria for speedy classification of the waste. (author)

  4. Development of characterization protocol for mixed liquid radioactive waste classification

    Energy Technology Data Exchange (ETDEWEB)

    Zakaria, Norasalwa, E-mail: norasalwa@nuclearmalaysia.gov.my [Waste Technology Development Centre, Malaysian Nuclear Agency, 43000 Kajang, Selangor (Malaysia); Wafa, Syed Asraf [Radioisotop Technology and Innovation, Malaysian Nuclear Agency, 43000 Kajang, Selangor (Malaysia); Wo, Yii Mei [Radiochemistry and Environment, Malaysian Nuclear Agency, 43000 Kajang, Selangor (Malaysia); Mahat, Sarimah [Material Technology Group, Malaysian Nuclear Agency, 43000 Kajang, Selangor (Malaysia)

    2015-04-29

    Mixed liquid organic waste generated from health-care and research activities, containing tritium, carbon-14, and other radionuclides, poses specific challenges in its management. Often, these wastes become legacy waste in many nuclear facilities and are considered 'problematic' waste. One of the most important recommendations made by the IAEA is to perform multistage processes aimed at declassification of the waste. At this moment, approximately 3000 bottles of mixed liquid waste, with an estimated volume of 6000 litres, are stored at the National Radioactive Waste Management Centre, Malaysia, and some have been stored for more than 25 years. The aim of this study is to develop a characterization protocol for the reclassification of these wastes. The characterization protocol entails waste identification, waste screening and segregation, and radionuclide profiling using various analytical procedures including gross alpha/gross beta counting, gamma spectrometry, and liquid scintillation counting (LSC). The results obtained from the characterization protocol are used to establish criteria for speedy classification of the waste.

  5. Protocol of source shielding maintenance in a level measurement systems

    International Nuclear Information System (INIS)

    Gonzales, E.; Figueroa, J.

    1996-01-01

    Maintenance of the source shielding and locking system is not performed in many Venezuelan enterprises that employ radioactive level gauges on large containers. The lack of maintenance and long-lasting environmental action have impaired many devices, and their parts give rise to economic and radiological protection problems. In order to help solve these problems, and principally to reduce unjustified doses to workers, the IVIC Health Physics Service worked out a protocol to perform, in a safe way, the maintenance of the source shielding and its locking system. This protocol is presented in this paper. (authors)

  6. Applications of the Italian protocol for the calibration of brachytherapy sources

    International Nuclear Information System (INIS)

    Piermattei, A.; Azario, L.

    1997-01-01

    The Associazione Italiana di Fisica Biomedica (AIFB; Italian Association of Biomedical Physics) has adopted the Italian protocol for the calibration of brachytherapy sources. The AIFB protocol allows measurements of the reference air kerma rate, (dK/dt)_r, within 1.7% (1σ). To measure (dK/dt)_r the AIFB protocol has identified a direct and an indirect procedure. The direct procedure is based on the use of spherical or cylindrical ionization chambers as local reference dosimeters positioned along the transverse bisector axis of the source. Once the source is specified by a (dK/dt)_r value, this can be used to calibrate a field instrument, such as a well-type ionization chamber, for further source calibrations by means of an indirect procedure. This paper reports the results obtained by the Physics Laboratory of the Universita Cattolica del S Cuore (PL-UCSC), in terms of (dK/dt)_r calibration of five types of source (169Yb, 192Ir and 137Cs). The role of the (dK/dt)_r determination for a brachytherapy source has been underlined when a new source such as the 169Yb seed model X1267 has been proposed for clinical use. The (dK/dt)_r values for 137Cs spherical sources differed by 5% from the vendor's mean value. The five types of source calibrated in terms of (dK/dt)_r were used to obtain the source-specific calibration factor, N_K,r, of an HDR-1000 well-type ionization chamber. (author)
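
    The indirect procedure lends itself to a short numerical illustration. The sketch below derives a well-chamber calibration factor as the ratio of a known reference air kerma rate to the corrected chamber reading; the function name, correction-factor names, and all numbers are illustrative assumptions, not the AIFB protocol's actual notation or data:

    ```python
    def well_chamber_factor(k_ref: float, reading: float,
                            k_tp: float = 1.0, k_sat: float = 1.0) -> float:
        """N = (dK/dt)_r / M, with M the fully corrected chamber reading.
        k_ref in uGy/h at the reference geometry; reading in pA."""
        corrected = reading * k_tp * k_sat  # temperature/pressure and saturation corrections
        return k_ref / corrected

    # Illustrative numbers only: a source certified at 40800 uGy/h read as 102 pA.
    print(f"N = {well_chamber_factor(40800.0, 102.0, k_tp=1.012):.1f} uGy/h per pA")
    ```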

  7. SAVAH: Source Address Validation with Host Identity Protocol

    Science.gov (United States)

    Kuptsov, Dmitriy; Gurtov, Andrei

    Explosive growth of the Internet and the lack of mechanisms to validate the authenticity of a packet's source have produced serious security and accounting issues. In this paper, we propose validating source addresses in a LAN using the Host Identity Protocol (HIP) deployed at a first-hop router. Compared to alternative solutions such as CGA, our approach is suitable for both IPv4 and IPv6. We have implemented SAVAH in Wi-Fi access points and evaluated its overhead for clients and the first-hop router.

  8. A Source Anonymity-Based Lightweight Secure AODV Protocol for Fog-Based MANET.

    Science.gov (United States)

    Fang, Weidong; Zhang, Wuxiong; Xiao, Jinchao; Yang, Yang; Chen, Wei

    2017-06-17

    Fog-based MANET (mobile ad hoc network) is a novel paradigm that combines the advantages of mobility and fog computing. As a traditional routing protocol, the ad hoc on-demand distance vector (AODV) routing protocol has been widely applied in fog-based MANETs. Currently, improving transmission performance and enhancing security are the two major directions in AODV research; however, research that jointly considers energy efficiency and security is seldom reported. In this paper, we propose a source anonymity-based lightweight secure AODV (SAL-SAODV) routing protocol to meet the above requirements. In the SAL-SAODV protocol, source-anonymous and secure transmission schemes are proposed and applied. The scheme involves three parts: a source anonymity algorithm is employed to keep the source node from being tracked and located; an improved secure scheme based on the CRC-4 polynomial is applied to substitute for the RSA digital signature of SAODV and to guarantee data integrity, while reducing computation and energy consumption; and a random delayed transmitting scheme (RDTM) is implemented to separate the check code from the transmitted data and achieve tamper-proofing. The simulation results show that the comprehensive performance of the proposed SAL-SAODV is a trade-off among transmission performance, energy efficiency, and security, and is better than that of AODV and SAODV.
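
    The integrity check at the heart of SAL-SAODV is built on a CRC-4 polynomial. As a rough illustration of how cheap such a check is compared to an RSA signature, here is a minimal MSB-first CRC-4 over the generator x^4 + x + 1 (polynomial 0x3); this is a generic textbook variant, not necessarily the exact scheme or bit ordering used in the paper:

    ```python
    def crc4(data: bytes, poly: int = 0x3, init: int = 0x0) -> int:
        """MSB-first CRC-4 with generator x^4 + x + 1 (the x^4 term is implicit)."""
        crc = init
        for byte in data:
            for i in range(7, -1, -1):
                feedback = ((crc >> 3) & 1) ^ ((byte >> i) & 1)
                crc = (crc << 1) & 0xF  # keep the register at 4 bits
                if feedback:
                    crc ^= poly
        return crc

    # A receiver recomputes the checksum and compares it with the transmitted one.
    packet = b"route-reply-payload"
    print(hex(crc4(packet)))  # 4-bit check value in 0x0..0xf
    ```

    A 4-bit check value offers only error detection, not cryptographic authentication, which is why the paper pairs it with source anonymity and randomly delayed transmission.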

  9. Protocol for VOC-Arid ID remediation performance characterization

    International Nuclear Information System (INIS)

    Tegner, B.J.; Hassig, N.L.; Last, G.V.

    1994-09-01

    The Volatile Organic Compound-Arid Integrated Demonstration (VOC-Arid ID) is a technology development program sponsored by the US Department of Energy's Office of Technology Development that is targeted to acquire, develop, demonstrate, and deploy new technologies for the remediation of VOC contaminants in the soils and groundwaters of arid DOE sites. Technologies cannot be adequately evaluated unless sufficient site characterization and technology performance data have been collected and analyzed. The responsibility for identifying these data needs has been placed largely on the Principal Investigators (PIs) developing the remediation technology, who usually are not experts in site characterization or in the identification of appropriate sampling, analysis, and monitoring techniques to support field testing. This document provides a protocol for planning the collection of data before, during, and after a test of a new technology. This generic protocol provides the PIs and project managers with a set of steps to follow. The protocol is based on a data collection planning process called the Data Quality Objectives (DQO) process, which was originally developed by the US Environmental Protection Agency and has been expanded by DOE to support site cleanup decisions. The DQO process focuses on the quality and quantity of data required to make decisions. Stakeholders in the decisions must negotiate such key inputs to the process as the decision rules that will be used and the acceptable probabilities of making decision errors.
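
    The last step of the DQO process, trading decision-error probabilities against sampling effort, can be made concrete with the standard sample-size formula for a one-sample test of a mean. The sketch below is a simplified illustration under assumed inputs (the standard deviation sigma, the width delta of the gray region, and the false-positive/false-negative rates alpha and beta); the EPA guidance adds further correction terms:

    ```python
    import math
    from statistics import NormalDist

    def dqo_sample_size(sigma: float, delta: float, alpha: float, beta: float) -> int:
        """n >= ((z_{1-alpha} + z_{1-beta}) * sigma / delta)^2 for a one-sample mean test."""
        z_alpha = NormalDist().inv_cdf(1 - alpha)
        z_beta = NormalDist().inv_cdf(1 - beta)
        return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

    # Example: sd of 2 ppm, 1 ppm between action level and alternative mean,
    # 5% false-positive and 20% false-negative error rates -> 25 samples.
    print(dqo_sample_size(sigma=2.0, delta=1.0, alpha=0.05, beta=0.20))
    ```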

  10. XPS Protocol for the Characterization of Pristine and Functionalized Single Wall Carbon Nanotubes

    Science.gov (United States)

    Sosa, E. D.; Allada, R.; Huffman, C. B.; Arepalli, S.

    2009-01-01

    Recent interest in developing new applications for carbon nanotubes (CNTs) has fueled the need for accurate macroscopic and nanoscopic techniques to characterize and understand their chemistry. X-ray photoelectron spectroscopy (XPS) has proved to be a useful analytical tool for nanoscale surface characterization of materials, including carbon nanotubes. Recent nanotechnology research at NASA Johnson Space Center (NASA-JSC) helped to establish a characterization protocol for quality assessment of single wall carbon nanotubes (SWCNTs). Here, we review some of the major factors of the XPS technique that can influence the quality of analytical data, suggest methods to maximize the quality of data obtained by XPS, and describe the development of a protocol for XPS characterization as a complementary technique for analyzing the purity and surface characteristics of SWCNTs. The XPS protocol is then applied to a number of experiments, including impurity analysis and the study of chemical modifications of SWCNTs.

  11. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4, Volume IV: Inherent Optical Properties: Instruments, Characterizations, Field Measurements and Data Analysis Protocols

    Science.gov (United States)

    Mueller, J. L.; Fargion, G. S.; McClain, C. R. (Editor); Pegau, S.; Zanefeld, J. R. V.; Mitchell, B. G.; Kahru, M.; Wieland, J.; Stramska, M.

    2003-01-01

    This document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. The document is organized into six separate volumes as Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 4. Volume I: Introduction, Background, and Conventions; Volume II: Instrument Specifications, Characterization and Calibration; Volume III: Radiometric Measurements and Data Analysis Methods; Volume IV: Inherent Optical Properties: Instruments, Characterization, Field Measurements and Data Analysis Protocols; Volume V: Biogeochemical and Bio-Optical Measurements and Data Analysis Methods; Volume VI: Special Topics in Ocean Optics Protocols and Appendices. The earlier version, Ocean Optics Protocols for Satellite Ocean Color Sensor Validation, Revision 3, is entirely superseded by the six volumes of Revision 4 listed above.

  12. MANET Performance for Source and Destination Moving Scenarios Considering OLSR and AODV protocols

    Directory of Open Access Journals (Sweden)

    Elis Kulla

    2010-01-01

    Recently, great interest has been shown in the potential usage and applications of MANETs in several fields, such as military activities, rescue operations, and time-critical applications. In this work, we implement and analyse a MANET testbed considering the AODV and OLSR protocols for wireless multi-hop networking. We investigate the effect of mobility and topology changes in the MANET and evaluate the performance of the network through experiments in a real environment. The performance assessment of our testbed considers throughput, number of dropped packets, and delay. We designed four scenarios: Static, Source Moving, Destination Moving, and Source-Destination Moving. From our experimental results, we concluded that when the communicating nodes are moving and the routes change quickly, OLSR (a proactive protocol) performs better than AODV, which is a reactive protocol.
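
    The three metrics used in this testbed are straightforward to compute from a packet trace. The sketch below assumes a hypothetical trace of (send_time, receive_time-or-None) pairs; it is not the authors' analysis code:

    ```python
    def evaluate_trace(trace):
        """trace: list of (t_sent, t_received or None) tuples, times in seconds."""
        delays = [t_rx - t_tx for t_tx, t_rx in trace if t_rx is not None]
        dropped = len(trace) - len(delays)
        pdr = len(delays) / len(trace)  # packet delivery ratio
        avg_delay = sum(delays) / len(delays) if delays else float("nan")
        return pdr, dropped, avg_delay

    sample = [(0.00, 0.02), (0.10, None), (0.20, 0.25), (0.30, 0.33)]
    print(evaluate_trace(sample))  # (0.75, 1, ~0.033 s)
    ```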

  13. Characterization of HDR Ir-192 source for 3D planning system

    International Nuclear Information System (INIS)

    Fonseca, Gabriel P.; Yoriyaz, Helio; Antunes, Paula C.G.; Siqueira, Paulo T.D.; Rubo, Rodrigo; Ferreira, Louise A.

    2011-01-01

    Brachytherapy involves the surgical or intracavitary insertion of radioactive sources for the treatment of diseases such as lung, gynecologic, or prostate cancer. This technique has a great ability to administer high doses to the tumor, with preservation of adjacent normal tissue equal to or better than external beam radiation therapy. Several innovations have been incorporated into this treatment technique, such as 3D treatment planning systems and computer-guided sources. Despite these scientific advances, there are no protocols that relate dose to tumor volume or organs, beyond the A point established by ICRU 38 and used to prescribe dose in treatment planning systems. Several international studies, such as the multicentre EMBRACE study, have been trying to correlate dose and volume using 3D planning systems and medical images, such as those obtained by CT or MRI, to establish treatment protocols. With the objective of analyzing the 3D dose distribution, a microSelectron-HDR remote afterloading device for high dose rate (HDR) brachytherapy was characterized in the present work. Using the data provided by the manufacturer, the source was simulated with the MCNP5 code to calculate the parameters specified in the American Association of Physicists in Medicine Task Group No. 43 report (AAPM TG-43). The simulations have shown good agreement when compared to the ONCENTRA planning system results and those provided by the literature. The microSelectron-HDR remote afterloading device will be used to simulate 3D dose distributions through CT images processed by auxiliary software that handles DICOM images. (author)

  14. Characterization of HDR Ir-192 source for 3D planning system

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Gabriel P.; Yoriyaz, Helio; Antunes, Paula C.G.; Siqueira, Paulo T.D., E-mail: gabriel.fonseca@usp.b, E-mail: hyoriyaz@ipen.b, E-mail: ptsiquei@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Rubo, Rodrigo [Universidade de Sao Paulo (HC/FMUSP), Sao Paulo, SP (Brazil). Hospital das Clinicas. Servico de Radioterapia; Minamisawa, Renato A., E-mail: renato.minamisawa@psi.c [Paul Scherrer Institut (PSI), Villigen (Switzerland); Ferreira, Louise A. [Universidade Estadual de Maringa (UEM), PR (Brazil). Fac. de Medicina

    2011-07-01

    Brachytherapy involves the surgical or intracavitary insertion of radioactive sources for the treatment of diseases such as lung, gynecologic, or prostate cancer. This technique has a great ability to administer high doses to the tumor, with preservation of adjacent normal tissue equal to or better than external beam radiation therapy. Several innovations have been incorporated into this treatment technique, such as 3D treatment planning systems and computer-guided sources. Despite these scientific advances, there are no protocols that relate dose to tumor volume or organs, beyond the A point established by ICRU 38 and used to prescribe dose in treatment planning systems. Several international studies, such as the multicentre EMBRACE study, have been trying to correlate dose and volume using 3D planning systems and medical images, such as those obtained by CT or MRI, to establish treatment protocols. With the objective of analyzing the 3D dose distribution, a microSelectron-HDR remote afterloading device for high dose rate (HDR) brachytherapy was characterized in the present work. Using the data provided by the manufacturer, the source was simulated with the MCNP5 code to calculate the parameters specified in the American Association of Physicists in Medicine Task Group No. 43 report (AAPM TG-43). The simulations have shown good agreement when compared to the ONCENTRA planning system results and those provided by the literature. The microSelectron-HDR remote afterloading device will be used to simulate 3D dose distributions through CT images processed by auxiliary software that handles DICOM images. (author)

  15. The Chandra Source Catalog: Statistical Characterization

    Science.gov (United States)

    Primini, Francis A.; Nowak, M. A.; Houck, J. C.; Davis, J. E.; Glotfelty, K. J.; Karovska, M.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) will ultimately contain more than ~250,000 X-ray sources in a total area of ~1% of the entire sky, using data from ~10,000 separate ACIS and HRC observations of a multitude of different types of X-ray sources (see Evans et al., this conference). In order to maximize the scientific benefit of such a large, heterogeneous dataset, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Our characterization efforts include both extensive simulations of blank-sky and point source datasets, and detailed comparisons of CSC results with those of other X-ray and optical catalogs. We present here a summary of our characterization results for CSC Release 1 and preliminary plans for future releases. This work is supported by NASA contract NAS8-03060 (CXC).

  16. Initial Characterization of Optical Communications with Disruption-Tolerant Network Protocols

    Science.gov (United States)

    Schoolcraft, Joshua; Wilson, Keith

    2011-01-01

    Disruption-tolerant networks (DTNs) are groups of network assets connected with a suite of communication protocol technologies designed to mitigate the effects of link delay and disruption. Application of DTN protocols to diverse groups of network resources in multiple sub-networks results in an overlay network-of-networks with autonomous data routing capability. In space environments where delay or disruption is expected, performance of this type of architecture (such as an interplanetary internet) can increase with the inclusion of new communication media and techniques. Space-based optical communication links are therefore an excellent building block of space DTN architectures. When compared to traditional radio frequency (RF) communications, optical systems can provide extremely power-efficient and high bandwidth links bridging sub-networks. Because optical links are more susceptible to link disruption and experience the same light-speed delays as RF, optical-enabled DTN architectures can lessen potential drawbacks and maintain the benefits of autonomous optical communications over deep space distances. These environment-driven expectations - link delay and interruption, along with asymmetric data rates - are the purpose of the proof-of-concept experiment outlined herein. In recognizing the potential of these two technologies, we report an initial experiment and characterization of the performance of a DTN-enabled space optical link. The experiment design employs a point-to-point free-space optical link configured to have asymmetric bandwidth. This link connects two networked systems running a DTN protocol implementation designed and written at JPL for use on spacecraft, and further configured for higher bandwidth performance. Comparing baseline data transmission metrics with and without periodic optical link interruptions, the experiment confirmed the DTN protocols' ability to handle real-world unexpected link outages while maintaining capability of

  17. A streamlined ribosome profiling protocol for the characterization of microorganisms

    DEFF Research Database (Denmark)

    Latif, Haythem; Szubin, Richard; Tan, Justin

    2015-01-01

    Ribosome profiling is a powerful tool for characterizing in vivo protein translation at the genome scale, with multiple applications ranging from detailed molecular mechanisms to systems-level predictive modeling. Though highly effective, this intricate technique has yet to become widely used in the microbial research community. Here we present a streamlined ribosome profiling protocol with reduced barriers to entry for microbial characterization studies. Our approach provides simplified alternatives during harvest, lysis, and recovery of monosomes and also eliminates several time-consuming steps...

  18. Statistical Characterization of the Chandra Source Catalog

    Science.gov (United States)

    Primini, Francis A.; Houck, John C.; Davis, John E.; Nowak, Michael A.; Evans, Ian N.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G.; Grier, John D.; Hain, Roger M.; Hall, Diane M.; Harbo, Peter N.; He, Xiangqun Helen; Karovska, Margarita; Kashyap, Vinay L.; Lauer, Jennifer; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph B.; Mitschang, Arik W.; Morgan, Douglas L.; Mossman, Amy E.; Nichols, Joy S.; Plummer, David A.; Refsdal, Brian L.; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Tibbetts, Michael S.; Van Stone, David W.; Winkelman, Sherry L.; Zografou, Panagoula

    2011-06-01

    The first release of the Chandra Source Catalog (CSC) contains ~95,000 X-ray sources in a total area of 0.75% of the entire sky, using data from ~3900 separate ACIS observations of a multitude of different types of X-ray sources. In order to maximize the scientific benefit of such a large, heterogeneous data set, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Characterization efforts of other large Chandra catalogs, such as the ChaMP Point Source Catalog or the 2 Mega-second Deep Field Surveys, while informative, cannot serve this purpose, since the CSC analysis procedures are significantly different and the range of allowable data is much less restrictive. We describe here the characterization process for the CSC. This process includes both a comparison of real CSC results with those of other, deeper Chandra catalogs of the same targets and extensive simulations of blank-sky and point-source populations.

  19. STATISTICAL CHARACTERIZATION OF THE CHANDRA SOURCE CATALOG

    International Nuclear Information System (INIS)

    Primini, Francis A.; Evans, Ian N.; Glotfelty, Kenny J.; Anderson, Craig S.; Bonaventura, Nina R.; Chen, Judy C.; Doe, Stephen M.; Evans, Janet D.; Fabbiano, Giuseppina; Galle, Elizabeth C.; Gibbs, Danny G.; Grier, John D.; Hain, Roger M.; Harbo, Peter N.; He Xiangqun; Karovska, Margarita; Houck, John C.; Davis, John E.; Nowak, Michael A.; Hall, Diane M.

    2011-01-01

    The first release of the Chandra Source Catalog (CSC) contains ∼95,000 X-ray sources in a total area of 0.75% of the entire sky, using data from ∼3900 separate ACIS observations of a multitude of different types of X-ray sources. In order to maximize the scientific benefit of such a large, heterogeneous data set, careful characterization of the statistical properties of the catalog, i.e., completeness, sensitivity, false source rate, and accuracy of source properties, is required. Characterization efforts of other large Chandra catalogs, such as the ChaMP Point Source Catalog or the 2 Mega-second Deep Field Surveys, while informative, cannot serve this purpose, since the CSC analysis procedures are significantly different and the range of allowable data is much less restrictive. We describe here the characterization process for the CSC. This process includes both a comparison of real CSC results with those of other, deeper Chandra catalogs of the same targets and extensive simulations of blank-sky and point-source populations.
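
    Both Chandra characterization efforts rest on the same two simulation-based estimators. A compact sketch of how completeness and false source rate fall out of injected-source and blank-sky runs is given below; the function and variable names are illustrative, and this is not CSC pipeline code:

    ```python
    import numpy as np

    def completeness_curve(injected_flux, detected, bins):
        """Fraction of injected point sources recovered, per flux bin."""
        injected_flux = np.asarray(injected_flux)
        detected = np.asarray(detected, dtype=bool)
        total, _ = np.histogram(injected_flux, bins=bins)
        found, _ = np.histogram(injected_flux[detected], bins=bins)
        return np.where(total > 0, found / np.maximum(total, 1), np.nan)

    def false_source_rate(n_blank_fields: int, n_spurious: int) -> float:
        """Mean number of spurious detections per simulated blank-sky field."""
        return n_spurious / n_blank_fields

    # Toy inputs: 1000 injected sources, ~80% detected, 7 logarithmic flux bins.
    rng = np.random.default_rng(0)
    flux = rng.lognormal(mean=-13.0, sigma=1.0, size=1000)
    found = rng.random(1000) < 0.8
    print(completeness_curve(flux, found, bins=np.logspace(-7, -4, 8)))
    ```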

  20. Source characterization of Purnima Neutron Generator (PNG)

    International Nuclear Information System (INIS)

    Bishnoi, Saroj; Patel, T.; Paul, Ram K.; Sarkar, P.S.; Adhikari, P.S.; Sinha, Amar

    2011-01-01

    The use of 14.1 MeV neutron generators for applications such as elemental analysis, Accelerator Driven System (ADS) studies, and fast neutron radiography requires characterization of the neutron source, i.e., neutron yield (emission rate in n/sec), neutron dose, beam spot size, and energy spectrum. In this paper, a series of experiments carried out to characterize this neutron source is described. The neutron source has been quantified in terms of neutron emission rate, neutron dose at various source strengths, and beam spot size at the target position.

  1. Protocols for the analytical characterization of therapeutic monoclonal antibodies. II - Enzymatic and chemical sample preparation.

    Science.gov (United States)

    Bobaly, Balazs; D'Atri, Valentina; Goyon, Alexandre; Colas, Olivier; Beck, Alain; Fekete, Szabolcs; Guillarme, Davy

    2017-08-15

    The analytical characterization of therapeutic monoclonal antibodies and related proteins usually incorporates various sample preparation methodologies. Indeed, quantitative and qualitative information can be enhanced by simplifying the sample, thanks to the removal of sources of heterogeneity (e.g. N-glycans) and/or by decreasing the molecular size of the tested protein through enzymatic or chemical fragmentation. These approaches make the sample more suitable for chromatographic and mass spectrometric analysis. Structural elucidation and quality control (QC) analysis of biopharmaceuticals are usually performed at the intact, subunit, and peptide levels. In this paper, general sample preparation approaches used to attain peptide-, subunit-, and glycan-level analysis are overviewed. Protocols are described for performing tryptic proteolysis, IdeS and papain digestion, and reduction, as well as deglycosylation by PNGase F and EndoS2 enzymes. Both historical and modern sample preparation methods were compared and evaluated using rituximab and trastuzumab, two reference therapeutic mAb products approved by the Food and Drug Administration (FDA) and the European Medicines Agency (EMA). The described protocols may help analysts to develop sample preparation methods in the field of therapeutic protein analysis.

  2. PyPedia: using the wiki paradigm as crowd sourcing environment for bioinformatics protocols.

    Science.gov (United States)

    Kanterakis, Alexandros; Kuiper, Joël; Potamias, George; Swertz, Morris A

    2015-01-01

    Today researchers can choose from many bioinformatics protocols for all types of life sciences research, computational environments, and coding languages. Although the majority of these are open source, few of them possess all the virtues needed to maximize reuse and promote reproducible science. Wikipedia has proven a great tool for disseminating information and enhancing collaboration between users with varying expertise and backgrounds to author qualitative content via crowdsourcing. However, it remains an open question whether the wiki paradigm can be applied to bioinformatics protocols. We piloted PyPedia, a wiki where each article is both the implementation and the documentation of a bioinformatics computational protocol in the Python language. Hyperlinks within the wiki can be used to compose complex workflows and encourage reuse. A RESTful API enables code execution outside the wiki. The initial content of PyPedia contains articles for population statistics, bioinformatics format conversions, and genotype imputation. Use of the easy-to-learn wiki syntax effectively lowers the barriers to bringing expert programmers and less computer-savvy researchers onto the same page. PyPedia demonstrates how a wiki can provide a collaborative development, sharing, and even execution environment for biologists and bioinformaticians that complements existing resources, useful for local and multi-center research teams. PyPedia is available online at: http://www.pypedia.com. The source code and installation instructions are available at: https://github.com/kantale/PyPedia_server. The PyPedia python library is available at: https://github.com/kantale/pypedia. PyPedia is open-source, available under the BSD 2-Clause License.

  3. Abbreviated Combined MR Protocol: A New Faster Strategy for Characterizing Breast Lesions.

    Science.gov (United States)

    Moschetta, Marco; Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe

    2016-06-01

    The use of an abbreviated magnetic resonance (MR) protocol has been recently proposed for cancer screening. The aim of our study is to evaluate the diagnostic accuracy of an abbreviated MR protocol combining short TI inversion recovery (STIR), turbo-spin-echo (TSE)-T2 sequences, a pre-contrast T1, and a single intermediate (3 minutes after contrast injection) post-contrast T1 sequence for characterizing breast lesions. A total of 470 patients underwent breast MR examination for screening, problem solving, or preoperative staging. Two experienced radiologists evaluated both standard and abbreviated protocols in consensus. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for both protocols were calculated (with the histological findings and 6-month ultrasound follow-up as the reference standard) and compared with the McNemar test. The post-processing and interpretation times for the MR images were compared with the paired t test. In 177 of 470 (38%) patients, the MR sequences detected 185 breast lesions. Standard and abbreviated protocols obtained sensitivity, specificity, diagnostic accuracy, PPV, and NPV values respectively of 92%, 92%, 92%, 68%, and 98% and of 89%, 91%, 91%, 64%, and 98% with no statistically significant difference (P < .0001). The mean post-processing and interpretation times were, respectively, 7 ± 1 minutes and 6 ± 3.2 minutes for the standard protocol, and 1 ± 1.2 minutes and 2 ± 1.2 minutes for the abbreviated protocol, with a statistically significant difference (P < .01). An abbreviated combined MR protocol represents a time-saving tool for radiologists and patients, with the same diagnostic potential as the standard protocol, in patients undergoing breast MRI for screening, problem solving, or preoperative staging.
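
    The accuracy figures quoted above follow directly from a 2x2 contingency table. As a quick reference, this sketch computes the five reported metrics from hypothetical true/false positive and negative counts (the counts below are illustrative, not the study's data):

    ```python
    def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
        """Standard screening metrics from a 2x2 contingency table."""
        return {
            "sensitivity": tp / (tp + fn),  # true-positive rate
            "specificity": tn / (tn + fp),  # true-negative rate
            "ppv": tp / (tp + fp),          # positive predictive value
            "npv": tn / (tn + fn),          # negative predictive value
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
        }

    # Illustrative counts only -- not taken from the study.
    print(diagnostic_metrics(tp=46, fp=22, tn=260, fn=4))
    ```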

  4. RELIABLE DYNAMIC SOURCE ROUTING PROTOCOL (RDSRP) FOR ENERGY HARVESTING WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    B. Narasimhan

    2015-03-01

    Wireless sensor networks (WSNs) offer noteworthy advantages over traditional communication. However, harsh and complex environments pose great challenges to the reliability of WSN communications. It is therefore vital to develop a reliable unipath dynamic source routing protocol (RDSRP) for WSNs to provide better quality of service (QoS) in energy harvesting wireless sensor networks (EH-WSNs). This paper proposes a dynamic source routing approach for attaining the most reliable route in EH-WSNs. Performance evaluation is carried out using NS-2, with throughput and packet delivery ratio chosen as the metrics.

  5. Microplastics in seafood: Benchmark protocol for their extraction and characterization.

    Science.gov (United States)

    Dehaut, Alexandre; Cassone, Anne-Laure; Frère, Laura; Hermabessiere, Ludovic; Himber, Charlotte; Rinnert, Emmanuel; Rivière, Gilles; Lambert, Christophe; Soudant, Philippe; Huvet, Arnaud; Duflos, Guillaume; Paul-Pont, Ika

    2016-08-01

    Pollution of the oceans by microplastics (<5 mm) is a growing concern, and numerous studies have investigated the level of contamination of marine organisms collected in situ. For the extraction and characterization of microplastics in biological samples, the crucial step is the identification of solvent(s) or chemical(s) that efficiently dissolve organic matter without degrading plastic polymers, so that the polymers can be identified in a time- and cost-effective way. Most published papers, as well as OSPAR recommendations for the development of a common monitoring protocol for plastic particles in fish and shellfish at the European level, use protocols containing nitric acid to digest the biological tissues, despite reports of polyamide degradation with this chemical. In the present study, six existing approaches were tested and their effects compared on up to 15 different plastic polymers, along with their efficiency in digesting biological matrices. Plastic integrity was evaluated through microscopic inspection, weighing, pyrolysis coupled with gas chromatography and mass spectrometry, and Raman spectrometry before and after digestion. Tissues from mussels, crabs, and fish were digested before being filtered on glass fibre filters. Digestion efficiency was evaluated through microscopic inspection of the filters and determination of the relative removal of organic matter content after digestion. Five out of the six tested protocols led to significant degradation of plastic particles and/or insufficient tissue digestion. The protocol using a 10% KOH solution and incubation at 60 °C for 24 h led to efficient digestion of biological tissues with no significant degradation of any tested polymer, except for cellulose acetate. This protocol appeared to be the best compromise for extraction and later identification of microplastics in biological samples, and should be implemented in further monitoring studies to ensure relevance and comparison of environmental and seafood product quality studies.

  6. Thoraco-abdominal high-pitch dual-source CT angiography: Experimental evaluation of injection protocols with an anatomical human vascular phantom

    Energy Technology Data Exchange (ETDEWEB)

    Puippe, Gilbert D., E-mail: gilbert.puippe@usz.ch [Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, Switzerland Raemistrasse 100, CH-8091 Zurich (Switzerland); Winklehner, Anna [Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, Switzerland Raemistrasse 100, CH-8091 Zurich (Switzerland); Hasenclever, Peter; Plass, André [Division of Cardiac and Vascular Surgery, University Hospital Zurich, Switzerland Raemistrasse 100, CH-8091 Zurich (Switzerland); Frauenfelder, Thomas; Baumueller, Stephan [Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, Switzerland Raemistrasse 100, CH-8091 Zurich (Switzerland)

    2012-10-15

    Objective: To experimentally evaluate three different contrast injection protocols at thoraco-abdominal high-pitch dual-source computed tomography angiography (CTA), with regard to the level and homogeneity of vascular enhancement at different cardiac outputs. Materials and methods: A uniphasic, a biphasic, and an individually tailored contrast protocol were tested using a human vascular phantom. Each protocol was scanned at 5 different cardiac outputs (3–5 L/min, steps of 0.5 L/min) using an extracorporeal cardiac pump. Vascular enhancement of the thoraco-abdominal aorta was measured every 5 cm. Overall mean enhancement of each protocol and mean enhancement for each cardiac output within each protocol were calculated. Enhancement homogeneity along the z-axis was evaluated for each cardiac output and protocol. Results: Overall mean enhancement was significantly higher in the uniphasic than in the other two protocols (all p < .05), whereas the difference between the biphasic and tailored protocol was not significant (p = .76). Mean enhancement among each of the 5 cardiac outputs within each protocol was significantly different (all p < .05). Only within the tailored protocol did mean enhancement not differ significantly at cardiac outputs of 3.5 L/min vs. 5 L/min (484 ± 25 HU vs. 476 ± 19 HU, p = .14) and 4 vs. 5 L/min (443 ± 49 HU vs. 476 ± 19 HU, p = .05). Both the uniphasic and tailored protocols yielded homogeneous enhancement at all cardiac outputs, whereas the biphasic protocol failed to achieve homogeneous enhancement. Conclusion: This phantom study suggests that diagnostic and homogeneous enhancement at thoraco-abdominal high-pitch dual-source CTA is feasible with either a uniphasic or an individually tailored contrast protocol.

  7. Characterizing College Science Assessments: The Three-Dimensional Learning Assessment Protocol

    Science.gov (United States)

    Underwood, Sonia M.; Matz, Rebecca L.; Posey, Lynmarie A.; Carmel, Justin H.; Caballero, Marcos D.; Fata-Hartley, Cori L.; Ebert-May, Diane; Jardeleza, Sarah E.; Cooper, Melanie M.

    2016-01-01

    Many calls to improve science education in college and university settings have focused on improving instructor pedagogy. Meanwhile, science education at the K-12 level is undergoing significant changes as a result of the emphasis on scientific and engineering practices, crosscutting concepts, and disciplinary core ideas. This framework of “three-dimensional learning” is based on the literature about how people learn science and how we can help students put their knowledge to use. Recently, similar changes are underway in higher education by incorporating three-dimensional learning into college science courses. As these transformations move forward, it will become important to assess three-dimensional learning both to align assessments with the learning environment, and to assess the extent of the transformations. In this paper we introduce the Three-Dimensional Learning Assessment Protocol (3D-LAP), which is designed to characterize and support the development of assessment tasks in biology, chemistry, and physics that align with transformation efforts. We describe the development process used by our interdisciplinary team, discuss the validity and reliability of the protocol, and provide evidence that the protocol can distinguish between assessments that have the potential to elicit evidence of three-dimensional learning and those that do not. PMID:27606671

  8. Characterizing College Science Assessments: The Three-Dimensional Learning Assessment Protocol.

    Science.gov (United States)

    Laverty, James T; Underwood, Sonia M; Matz, Rebecca L; Posey, Lynmarie A; Carmel, Justin H; Caballero, Marcos D; Fata-Hartley, Cori L; Ebert-May, Diane; Jardeleza, Sarah E; Cooper, Melanie M

    2016-01-01

    Many calls to improve science education in college and university settings have focused on improving instructor pedagogy. Meanwhile, science education at the K-12 level is undergoing significant changes as a result of the emphasis on scientific and engineering practices, crosscutting concepts, and disciplinary core ideas. This framework of "three-dimensional learning" is based on the literature about how people learn science and how we can help students put their knowledge to use. Recently, similar changes are underway in higher education by incorporating three-dimensional learning into college science courses. As these transformations move forward, it will become important to assess three-dimensional learning both to align assessments with the learning environment, and to assess the extent of the transformations. In this paper we introduce the Three-Dimensional Learning Assessment Protocol (3D-LAP), which is designed to characterize and support the development of assessment tasks in biology, chemistry, and physics that align with transformation efforts. We describe the development process used by our interdisciplinary team, discuss the validity and reliability of the protocol, and provide evidence that the protocol can distinguish between assessments that have the potential to elicit evidence of three-dimensional learning and those that do not.

  9. Characterization of carbonaceous aerosol emissions from selected combustion sources

    International Nuclear Information System (INIS)

    Martinez, J.P.G.; Espino, M.P.M.; Pabroa, P.C.B.; Bautista, A.T. VII

    2015-01-01

    Carbonaceous particulates are carbon-containing solid or liquid matter which form a significant portion of the fine particulate mass (PM2.5) and have known profound adverse effects on health, climate, and visibility. This study aims to characterize carbonaceous aerosol emissions from different combustion sources to establish fingerprints for use in improving the resolution of source apportionment studies being done by the Philippine Nuclear Research Institute (PNRI), i.e., to resolve vehicular emission sources. Fine air particulate samples were collected on pre-baked quartz filters using an improvised collection set-up with a Gent sampler. Concentrations of organic and elemental carbon (OC and EC, respectively) in PM2.5 were measured for the different combustion sources (vehicular emissions, tire pyrolysis, and biomass burning) using a thermal-optical method of analysis following the IMPROVE_A protocol. Measured OC and EC concentrations are shown as percentages with respect to the total carbon (TC) and are illustrated in a 100% stacked chart. Predominance of the EC2 fraction is exhibited in both the diesel-fuelled vehicle and tire pyrolysis emissions, with the EC2/OC2 ratio distinguishing one from the other: EC2/OC2 is 1.63 and 8.41, respectively. Predominance of either the OC2 or OC3 fraction is shown in the unleaded gasoline and LPG fuelled vehicles and in biomass burning, with the OC2/OC3 ratio distinguishing one from the others. OC2/OC3 ratios are 1.33 for the unleaded gasoline fuelled vehicle, 1.89 for the LPG-fuelled vehicle, 0.55 for biomass burning (leaves), and 0.82 for biomass burning (wood). The study has shown the probable use of the EC2/OC2 and OC2/OC3 ratios as distinguishing fingerprints for the combustion sources covered in this study. (author)
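
    A fingerprinting rule based on these two ratios is easy to prototype. The sketch below encodes the reported ratios as a nearest-fingerprint lookup; the reference values come from the abstract, but the classifier itself, its decision threshold, and its inputs are illustrative assumptions, not part of the study:

    ```python
    def classify(ec2_oc2: float, oc2_oc3: float) -> str:
        """Nearest-fingerprint lookup using the ratios reported in the abstract.
        EC2-dominant profiles are separated by EC2/OC2; OC-dominant ones by OC2/OC3.
        The EC2/OC2 > 1 split is an illustrative assumption."""
        if ec2_oc2 > 1.0:  # EC2-dominant: diesel (1.63) vs tire pyrolysis (8.41)
            refs = {"diesel vehicle": 1.63, "tire pyrolysis": 8.41}
            return min(refs.items(), key=lambda kv: abs(kv[1] - ec2_oc2))[0]
        refs = {"unleaded gasoline vehicle": 1.33, "LPG vehicle": 1.89,
                "biomass burning (leaves)": 0.55, "biomass burning (wood)": 0.82}
        return min(refs.items(), key=lambda kv: abs(kv[1] - oc2_oc3))[0]

    print(classify(ec2_oc2=7.9, oc2_oc3=0.4))  # -> "tire pyrolysis"
    ```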

  10. Estimation of the radiation exposure of a chest pain protocol with ECG-gating in dual-source computed tomography

    International Nuclear Information System (INIS)

    Ketelsen, Dominik; Luetkhoff, Marie H.; Thomas, Christoph; Werner, Matthias; Tsiflikas, Ilias; Reimann, Anja; Kopp, Andreas F.; Claussen, Claus D.; Heuschmid, Martin; Buchgeister, Markus; Burgstahler, Christof

    2009-01-01

    The aim of the study was to evaluate radiation exposure of a chest pain protocol with ECG-gated dual-source computed tomography (DSCT). An Alderson Rando phantom equipped with thermoluminescent dosimeters was used for dose measurements. Exposure was performed on a dual-source computed tomography system with a standard protocol for chest pain evaluation (120 kV, 320 mAs/rot) with different simulated heart rates (HRs). The dose of a standard chest CT examination (120 kV, 160 mAs) was also measured. Effective dose of the chest pain protocol was 19.3/21.9 mSv (male/female, HR 60), 17.9/20.4 mSv (male/female, HR 80) and 14.7/16.7 mSv (male/female, HR 100). Effective dose of a standard chest examination was 6.3 mSv (males) and 7.2 mSv (females). Radiation dose of the chest pain protocol increases significantly with a lower heart rate for both males (p = 0.040) and females (p = 0.044). The average radiation dose of a standard chest CT examination is about 36.5% that of a CT examination performed for chest pain. Using DSCT, the evaluated chest pain protocol revealed a higher radiation exposure compared with standard chest CT. Furthermore, HRs markedly influenced the dose exposure when using the ECG-gated chest pain protocol. (orig.)
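
    The 36.5% figure can be reproduced directly from the doses quoted above; the short check below simply averages the six chest-pain values and the two standard-chest values:

    ```python
    chest_pain_mSv = [19.3, 21.9, 17.9, 20.4, 14.7, 16.7]  # male/female at HR 60, 80, 100
    standard_mSv = [6.3, 7.2]                               # male, female

    ratio = (sum(standard_mSv) / len(standard_mSv)) / (sum(chest_pain_mSv) / len(chest_pain_mSv))
    print(f"standard chest CT is {100 * ratio:.1f}% of the chest-pain protocol dose")  # ~36.5%
    ```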

  11. Diagnostic accuracy of 128-slice dual-source CT coronary angiography: a randomized comparison of different acquisition protocols

    International Nuclear Information System (INIS)

    Neefjes, Lisan A.; Kate, Gert-Jan R. ten; Rossi, Alexia; Nieman, Koen; Papadopoulou, Stella L.; Dharampal, Anoeshka S.; Dedic, Admir; Feyter, Pim J. de; Mollet, Nico R.; Genders, Tessa S.S.; Hunink, M.G.M.; Schultz, Carl J.; Weustink, Annick C.; Dijkshoorn, Marcel L.; Straten, Marcel van; Cademartiri, Filippo; Krestin, Gabriel P.

    2013-01-01

    To compare the diagnostic performance and radiation exposure of 128-slice dual-source CT coronary angiography (CTCA) protocols to detect coronary stenosis with more than 50 % lumen obstruction. We prospectively included 459 symptomatic patients referred for CTCA. Patients were randomized between high-pitch spiral vs. narrow-window sequential CTCA protocols (heart rate below 65 bpm, group A), or between wide-window sequential vs. retrospective spiral protocols (heart rate above 65 bpm, group B). Diagnostic performance of CTCA was compared with quantitative coronary angiography in 267 patients. In group A (231 patients, 146 men, mean heart rate 58 ± 7 bpm), high-pitch spiral CTCA yielded a lower per-segment sensitivity compared to sequential CTCA (89 % vs. 97 %, P = 0.01). Specificity, PPV and NPV were comparable (95 %, 62 %, 99 % vs. 96 %, 73 %, 100 %, P > 0.05) but radiation dose was lower (1.16 ± 0.60 vs. 3.82 ± 1.65 mSv, P < 0.05). Radiation dose of sequential CTCA was lower compared to retrospective CTCA (6.12 ± 2.58 vs. 8.13 ± 4.52 mSv, P < 0.001). Diagnostic performance was comparable in both groups. Sequential CTCA should be used in patients with regular heart rates using 128-slice dual-source CT, providing optimal diagnostic accuracy with an as low as reasonably achievable (ALARA) radiation dose. 128-slice dual-source CT coronary angiography offers several different acquisition protocols. (orig.)

  12. Characterization of radioactive orphan sources by gamma spectrometry

    International Nuclear Information System (INIS)

    Cruz W, H.

    2013-01-01

    Sealed radioactive sources are widely used in industry. They must be under permanent control and must be registered with the Technical Office of the National Authority (OTAN). However, at times the presence of abandoned sealed sources, unknown to any owner, has been identified. These sources are called 'orphan sources'. Of course, these sources represent a high potential risk, because accidents can trigger dire consequences depending on the activity and chemical form of the radioisotope. This paper describes the process and the actions taken to characterize two orphan radioactive sources from the Aceros Arequipa smelter. For characterization, we used a gamma spectrometry system with a 3″ x 3″ NaI(Tl) detector and a Nucleus PCA-II multichannel analyzer. The radioisotope identified was cesium-137 (137Cs) in both cases. Fortunately, the sources maintained their integrity; otherwise they would have generated significant contamination, considering the chemical form of the radioisotope and its easy dispersion. (author)

  13. Characterization of the IOTA Proton Source

    Energy Technology Data Exchange (ETDEWEB)

    Young, Samantha [Chicago U.

    2017-08-11

    This project focuses on characterizing the IOTA proton source by changing the parameters of four components of the Low Energy Beam Transport (LEBT). Because of an inefficient filament, the current was limited to 2 mA, whereas 40 mA is ultimately desired. Through an investigation of the solenoids and trims of the LEBT, we sought more knowledge about the optimum settings for running the IOTA proton source.

  14. Invited review, recent developments in brachytherapy source dosimetry

    International Nuclear Information System (INIS)

    Meigooni, A.S.

    2004-01-01

    Application of radioactive isotopes is the treatment of choice around the globe for many cancer sites. In this technique, the accuracy of the radiation delivery is highly dependent on the accuracy of the radiation dosimetry around individual brachytherapy sources. Moreover, in order to have comparable clinical results, an identical method of source dosimetry must be employed across the world. This problem was addressed by Task Group 43 of the American Association of Physicists in Medicine with a protocol for the dosimetric characterization of brachytherapy sources. This protocol has since been updated, using published data from international sources, by a new task group of the same association. The result is the updated protocol known as TG-43U1, published in the March 2004 issue of Medical Physics. The goal of this presentation is to review the original Task Group 43 protocol and the associated algorithms for brachytherapy source dosimetry. In addition, the shortcomings of the original protocol that have been resolved in the updated recommendation will be highlighted. I am sure that this is not the end of the line and more work is needed to complete this task. I invite scientists to join this task and complete the project, with the hope of much better clinical results for cancer patients.
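
    For reference, the general 2D dose-rate equation of the TG-43 formalism that both the original protocol and TG-43U1 are built around is (a standard statement of the formalism, not transcribed from this abstract):

    ```latex
    \dot{D}(r,\theta) = S_K \, \Lambda \,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta)
    ```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r,θ) the 2D anisotropy function, and the reference point is (r_0 = 1 cm, θ_0 = π/2).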

  15. Overview of ion source characterization diagnostics in INTF

    Science.gov (United States)

    Bandyopadhyay, M.; Sudhir, Dass; Bhuyan, M.; Soni, J.; Tyagi, H.; Joshi, J.; Yadav, A.; Rotti, C.; Parmar, Deepak; Patel, H.; Pillai, S.; Chakraborty, A.

    2016-02-01

    The INdian Test Facility (INTF) is envisaged to characterize the ITER diagnostic neutral beam system and to establish the functionality of its negative hydrogen ion source, based on eight inductively coupled RF plasma drivers, and its beamline components. The beam quality mainly depends on the ion source performance, and therefore its diagnostics play an important role in safe and optimized operation. A number of diagnostics are planned in INTF to characterize the ion source performance. Negative ions and the cesium content in the source will be monitored by optical emission spectroscopy (OES) and cavity ring-down spectroscopy. Plasma near the extraction region will be studied using standard electrostatic probes. The beam divergence and negative ion stripping losses are planned to be measured using Doppler shift spectroscopy. During the initial phase of ion beam characterization, carbon fiber composite based infrared imaging diagnostics will be used. Safe operation of the beam will be ensured by using standard thermocouples and electrical voltage-current measurement sensors. A novel concept, based on plasma-density-dependent plasma impedance measurement using RF electrical impedance matching parameters to characterize the RF driver plasma, will be tested in INTF and validated against OES data. The paper gives an overview of the complete INTF diagnostics, including the present status of procurement, experimentation, interfaces with mechanical systems in INTF, and integration with the INTF data acquisition and control systems.

  16. Characterization of a new protocol for mortar dating: 13C and 14C evidences

    International Nuclear Information System (INIS)

    Marzaioli, F.

    2011-01-01

    This paper reviews the present knowledge about the analysis of mortars in the framework of absolute chronology determination for artworks, with the aim of formulating a new methodology capable of systematically and accurately estimating the age of these constructive and/or art materials. The core of the proposed methodology is a physical procedure (ultrasonication) that selects only the carbonaceous material represented by carbonates formed from the absorption of atmospheric CO2 (carbonatation) by the mortar binder during setting. To evaluate the procedure's efficiency in isolating the binder signal from the most important other source of carbonates, the proposed procedure was tested on a series of mortars produced, in a simplified version, in the laboratory environment. Mortar production was characterized by means of a series of measurements allowing important indications about the applied procedure to be drawn. The radiocarbon value of the isolated binder carbonates was compared with the CO2 signal sampled from laboratory air during mortar setting. The observed results preliminarily confirmed the good accuracy of the protocol for radiocarbon dating, suggesting its applicability to real study cases.

  17. A Ten Step Protocol and Plan for CCS Site Characterization, Based on an Analysis of the Rocky Mountain Region, USA

    Energy Technology Data Exchange (ETDEWEB)

    McPherson, Brian; Matthews, Vince

    2013-09-15

    This report presents a Ten-Step Protocol for CO2 Storage Site Characterization, the final outcome of an extensive site characterization analysis of the Rocky Mountain region, USA. These ten steps include: (1) regional assessment and data gathering; (2) identification and analysis of appropriate local sites for characterization; (3) public engagement; (4) geologic and geophysical analysis of local site(s); (5) stratigraphic well drilling and coring; (6) core analysis and interpretation with other data; (7) database assembly and static model development; (8) storage capacity assessment; (9) simulation and uncertainty assessment; (10) risk assessment. While the results detailed here are primarily germane to the Rocky Mountain region, the intent of this protocol is to be portable, or generally applicable, for CO2 storage site characterization.

  18. Characterizing source-sink dynamics with genetic parentage assignments

    NARCIS (Netherlands)

    Peery, M. Zachariah; Beissinger, Steven R.; House, Roger F.; Berube, Martine; Hall, Laurie A.; Sellas, Anna; Palsboll, Per J.

    2008-01-01

    Source-sink dynamics have been suggested to characterize the population structure of many species, but the prevalence of source-sink systems in nature is uncertain because of inherent challenges in estimating migration rates among populations. Migration rates are often difficult to estimate directly.

  19. Free-Space Quantum Key Distribution with a High Generation Rate KTP Waveguide Photon-Pair Source

    Science.gov (United States)

    Wilson, J.; Chaffee, D.; Wilson, N.; Lekki, J.; Tokars, R.; Pouch, J.; Lind, A.; Cavin, J.; Helmick, S.; Roberts, T.

    2016-01-01

    NASA awarded Small Business Innovative Research (SBIR) contracts to AdvR, Inc to develop a high generation rate source of entangled photons that could be used to explore quantum key distribution (QKD) protocols. The final product, a photon pair source using a dual-element periodically- poled potassium titanyl phosphate (KTP) waveguide, was delivered to NASA Glenn Research Center in June of 2015. This paper describes the source, its characterization, and its performance in a B92 (Bennett, 1992) protocol QKD experiment.
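
    The B92 scheme referenced above lends itself to a compact illustration. The following minimal sketch (Python; the function name, parameters, and the sifting-yield note are illustrative assumptions, not details of the NASA experiment) simulates B92 sifting over an ideal, lossless channel:

    import random

    def b92_sift(n_bits=1000):
        # Toy B92 sifting: Alice encodes bit 0 as |0> and bit 1 as |+>;
        # Bob measures in a random basis (Z or X) and keeps only the
        # conclusive outcomes, which identify Alice's bit with certainty.
        key_alice, key_bob = [], []
        for _ in range(n_bits):
            a = random.randint(0, 1)         # Alice's raw bit / state choice
            basis = random.choice("ZX")      # Bob's random measurement basis
            if basis == "Z":
                # |0> always gives Z-outcome 0; |+> gives 0 or 1 at random.
                outcome = 0 if a == 0 else random.randint(0, 1)
                conclusive, inferred = (outcome == 1), 1  # 1 => state was |+>
            else:
                # |+> always gives X-outcome "+" (0); |0> gives +/- at random.
                outcome = 0 if a == 1 else random.randint(0, 1)
                conclusive, inferred = (outcome == 1), 0  # "-" => state was |0>
            if conclusive:
                key_alice.append(a)
                key_bob.append(inferred)
        return key_alice, key_bob

    On average only about a quarter of the transmitted bits survive sifting, and in the absence of noise or eavesdropping the two sifted keys agree exactly.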

  20. Utilization of Neurophysiological Protocols to Characterize Soldier Response to Irritant Gases. Phase 1.

    Science.gov (United States)

    1990-02-15

    Northeast Research Institute, Inc., Suite A-100, 309 Farmington Avenue... There is no widely accepted methodology or protocol for the assessment of human toxicity induced by exposure to irritant gases. Most procedures used by the... employing the appropriate analytical methodologies necessary to more precisely characterize the complex mixture of low-boiling volatiles, aerosols, and...

  1. Monte Carlo dose characterization of a new 90Sr/90Y source with balloon for intravascular brachytherapy

    International Nuclear Information System (INIS)

    Wang Ruqing; Li, X. Allen; Lobdell, John

    2003-01-01

    Beta-emitting source wires or seeds have been adopted in the clinical practice of intravascular brachytherapy for coronary vessels. Due to the limitation of penetration depth, this type of source is normally not applicable to treat vessels with large diameter, e.g., peripheral vessels. In an effort to extend the application of its beta source to peripheral vessels, Novoste has recently developed a new catheter-based system, the Corona™ 90Sr/90Y system. It is a source train of 6 cm length jacketed by a balloon. The balloon increases the penetration of the beta particles and keeps the source at a location away from the vessel wall. Using the EGSnrc Monte Carlo system, we have calculated the two-dimensional (2-D) dose rate distribution of the Corona™ system in water for a balloon diameter of 5 mm. The dose rates on the transverse axis obtained in this study are in good agreement with calibration results of the National Institute of Standards and Technology for the same system for balloon diameters of 5 and 8 mm. Features of the 2-D dose field were studied in detail. The dose parameters based on the AAPM TG-60 protocol were derived. For a balloon diameter of 5 mm, the dose rate at the reference point (defined as r0 = 4.5 mm, 2 mm from the balloon surface) is found to be 0.01028 Gy min⁻¹ mCi⁻¹. A new formalism for a better characterization of this long source is presented. Calculations were also performed for other balloon diameters. The dosimetry for this source is compared with a 192Ir source, commonly used for peripheral arteries. In conclusion, we have performed a detailed dosimetric characterization of a new beta source for peripheral vessels. Our study shows that, from a dosimetric point of view, the Corona™ system can be used for the treatment of an artery with a large diameter, e.g., a peripheral vessel.
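
    The TG-60 dose parameters mentioned above enter a dose-rate equation of the familiar TG-43/TG-60 form. The sketch below (Python) evaluates that equation for a 6 cm line source using the reference dose rate quoted in the abstract; the line-source geometry function and the user-supplied radial dose and anisotropy functions g and F are standard-formalism assumptions, not the paper's new long-source formalism:

    import math

    def geometry_factor_line(r, theta, L=60.0):
        # Line-source geometry function G(r, theta) = beta / (L * r * sin(theta)),
        # where beta is the angle the active line subtends at the point.
        # r in mm, theta in radians (off-axis, sin(theta) != 0), L in mm.
        x1 = r * math.cos(theta) - L / 2.0
        x2 = r * math.cos(theta) + L / 2.0
        y = r * math.sin(theta)
        beta = math.atan2(x2, y) - math.atan2(x1, y)
        return beta / (L * r * math.sin(theta))

    def dose_rate(r, theta, g, F, d_ref=0.01028, r0=4.5, theta0=math.pi / 2):
        # D(r,theta) = D(r0,theta0) * [G(r,theta)/G(r0,theta0)] * g(r) * F(r,theta)
        # d_ref is the reference dose rate quoted above (Gy/min/mCi at
        # r0 = 4.5 mm on the transverse axis); g and F are user-supplied.
        ratio = geometry_factor_line(r, theta) / geometry_factor_line(r0, theta0)
        return d_ref * ratio * g(r) * F(r, theta)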

  2. Radiation dose of cardiac dual-source CT: The effect of tailoring the protocol to patient-specific parameters

    International Nuclear Information System (INIS)

    Alkadhi, Hatem; Stolzmann, Paul; Scheffel, Hans; Desbiolles, Lotus; Baumueller, Stephan; Plass, Andre; Genoni, Michele; Marincek, Borut; Leschka, Sebastian

    2008-01-01

    Objective: To determine the radiation doses and image quality of different dual-source computed tomography coronary angiography (CTCA) protocols tailored to the heart rate (HR) and body mass index (BMI) of the patients. Materials and methods: Two hundred consecutive patients (68 women; mean age 61 ± 9 years) underwent either helical CTCA with retrospective ECG-gating or sequential CT with prospective ECG-triggering: 50 patients (any BMI, any HR) were examined with a standard, non-tailored protocol (helical CTCA, 120 kV, 330 mAs), whereas the other 150 patients were examined with a tailored protocol: 40 patients (group A, BMI ≤ 25 kg/m², HR ≤ 70 bpm) with sequential CTCA (100 kV, 190 mAs ref.), 43 patients (group B, BMI ≤ 25 kg/m², HR > 70 bpm) with helical CTCA (100 kV, 220 mAs), 28 patients (group C, BMI > 25 kg/m², HR ≤ 70 bpm) with sequential CTCA (120 kV, 330 mAs ref.), and 39 patients (group D, BMI > 25 kg/m², HR > 70 bpm) with helical CTCA (120 kV, 330 mAs). The effective radiation dose estimates were calculated from the dose-length product for each patient. Image quality was classified as diagnostic or non-diagnostic in each coronary segment. Results: Image quality was diagnostic in 2403/2460 (98%) and non-diagnostic in 57/2460 (2%) of all coronary segments. No significant differences in image quality were found among all five CTCA protocols (p = 0.78). The non-tailored helical CTCA protocol was associated with a radiation dose of 9.0 ± 1.0 mSv, significantly higher than that of sequential CTCA (group A: 1.3 ± 0.3 mSv, p < 0.05) ... but not compared with helical CTCA in patients with HR > 70 bpm (group D: 8.5 ± 0.9 mSv, p = 0.51). Conclusions: Dual-source CTCA is associated with radiation doses ranging between 1.3 and 9.0 mSv, depending on the protocol used. Tailoring the CTCA protocol to the HR and BMI of the individual patient results in dose reductions of up to 86%, while maintaining diagnostic image quality of the examination.
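
    The tailoring rules above form a small decision table. A minimal sketch (Python; the function name and mode labels are illustrative, with the kV/mAs values taken from the group definitions in the abstract):

    def select_ctca_protocol(bmi, hr):
        # Map (BMI in kg/m^2, HR in bpm) to the tailored scan protocol
        # (mode, tube voltage in kV, tube current-time product in mAs).
        if bmi <= 25 and hr <= 70:   # group A
            return ("sequential, prospective ECG-triggering", 100, 190)
        if bmi <= 25:                # group B (HR > 70)
            return ("helical, retrospective ECG-gating", 100, 220)
        if hr <= 70:                 # group C (BMI > 25)
            return ("sequential, prospective ECG-triggering", 120, 330)
        return ("helical, retrospective ECG-gating", 120, 330)  # group D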

  3. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
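
    As one concrete example of the simplified models referred to above, a Gaussian surface flux is a common first description of an arc welding heat source. The sketch below (Python) evaluates it; the power and radius values are illustrative assumptions, and the distribution is a textbook form rather than the specific models reviewed in this section:

    import math

    def gaussian_surface_flux(x, y, Q=2000.0, r0=3e-3):
        # Simplified Gaussian surface heat source for arc welding:
        #   q(x, y) = 3Q / (pi * r0**2) * exp(-3 * (x**2 + y**2) / r0**2)
        # Q  : net power absorbed by the workpiece, W (example value)
        # r0 : effective arc radius, m; the flux integrates to Q over the plane
        rr = x * x + y * y
        return 3.0 * Q / (math.pi * r0 ** 2) * math.exp(-3.0 * rr / r0 ** 2)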

  4. Characterization of Greater-Than-Class C sealed sources. Volume 2, Sealed source characterization and future production

    International Nuclear Information System (INIS)

    Harris, G.; Griffel, A.

    1994-09-01

    Sealed sources are small, relatively high-activity radioactive sources typically encapsulated in a metallic container. The activities can range from less than 1 mCi to over 1,000 Ci. They are used in a variety of industries and are commonly available. Many of the sources will be classified as Greater-Than-Class C low-level radioactive waste (GTCC LLW) for the purpose of waste disposal. The US Department of Energy is responsible for disposing of this class of low-level radioactive waste. The characterization of a sealed source is essentially a function of the type of radiation it emits, the principal use for which it is applied, and the activity it contains. The types of radiation of most interest to the GTCC LLW Program are gamma rays and neutrons, since these are emitted by the highest activity sources. The principal uses of most importance are gamma irradiators, medical teletherapy, well logging probes, and other general neutron applications. Current annual production rates of potential Greater-Than-Class C (PGTCC) sources sold to specific licensees were estimated based on data collected from device manufacturers. These estimates were then adjusted for current trends in the industry to estimate future annual production rates. It is expected that there will be approximately 8,000 PGTCC sealed sources produced annually for specific licensees.

  5. Concretes characterization for spent radioactive sources

    International Nuclear Information System (INIS)

    Martinez B, J.; Monroy G, F. P.

    2013-10-01

    The present work covers the preparation and characterization of the concrete used as a conditioning matrix for spent radioactive sources at the Treatment Plant of Radioactive Wastes of the Instituto Nacional de Investigaciones Nucleares (ININ). The concrete test specimens were subjected to compression strength, leaching, radiation resistance and porosity assays, and later characterized by means of X-ray diffraction, scanning electron microscopy and infrared spectrometry, with the purpose of evaluating whether this concrete passes the tests established by NOM-019-Nucl-1995. The results show that the concrete used in the Treatment Plant fulfills the requirements established by NOM-019-Nucl-1995. (author)

  6. Scanning protocol of dual-source computed tomography for aortic dissection

    International Nuclear Information System (INIS)

    Zhai Mingchun; Wang Yongmei

    2013-01-01

    Objective: To find a dual-source CT scanning protocol which can obtain high image quality with low radiation dose for the diagnosis of aortic dissection. Methods: A total of 120 patients with suspected aortic dissection were randomly and equally assigned to three groups. Patients in Group A underwent CTA with a prospectively ECG-gated high-pitch spiral mode (FLASH); patients in Group B with a retrospectively ECG-gated spiral mode; patients in Group C with a conventional, non-ECG-gated mode. The image quality, radiation dose, and the advantages and disadvantages of the three scan protocols were analyzed. Results: For image quality, seventeen, twenty-two and one patient(s) in Group A were rated grade 1, 2 and 3, respectively, and none grade 4; thirty-three and seven patients in Group B were rated grade 1 and 2, respectively, and none grade 3 or 4; fourteen and twenty-six patients in Group C were rated grade 3 and 4, respectively, and none grade 1 or 2. There was no significant difference in image quality between Groups A and B; image quality in Groups A and B was significantly higher than in Group C. Mean effective radiation doses of Groups A, B and C were 7.7±0.4 mSv, 33.11±3.38 mSv and 7.6±0.68 mSv, respectively. Group B was significantly higher than Groups A and C (P<0.05 for both), and there was no significant difference between Groups A and C (P=0.826). Conclusions: The prospectively ECG-gated high-pitch spiral mode can be the first-line protocol for the evaluation of aortic dissection; it achieves high image quality with low radiation dose. The conventional, non-ECG-gated mode can be selectively used for Stanford type B aortic dissection. (authors)

  7. Identification and chemical characterization of industrial particulate matter sources in southwest Spain.

    Science.gov (United States)

    Alastuey, Andrés; Querol, Xavier; Plana, Feliciano; Viana, Mar; Ruiz, Carmen R; Sánchez de la Campa, Ana; de la Rosa, Jesús; Mantilla, Enrique; García dos Santos, Saul

    2006-07-01

    A detailed physical and chemical characterization of coarse particulate matter (PM10) and fine particulate matter (PM2.5) in the city of Huelva (in Southwestern Spain) was carried out during 2001 and 2002. To identify the major emission sources with a significant influence on PM10 and PM2.5, a methodology was developed based on the combination of: (1) real-time measurements of levels of PM10, PM2.5, and very fine particulate matter (PM1); (2) chemical characterization and source apportionment analysis of PM10 and PM2.5; and (3) intensive measurements in field campaigns to characterize the emission plumes of several point sources. Annual means of 37, 19, and 16 microg/m3 were obtained for the study period for PM10, PM2.5, and PM1, respectively. High PM episodes, characterized by a very fine grain size distribution, are frequently detected in Huelva, mainly in the winter, as the result of the impact of the industrial emission plumes on the city. Chemical analysis showed that PM at Huelva is characterized by high PO4(3-) and As levels, as expected from the industrial activities. Source apportionment analyses identified a crustal source (36% of PM10 and 31% of PM2.5), a traffic-related source (33% of PM10 and 29% of PM2.5), and a marine aerosol contribution (only in PM10, 4%). In addition, two industrial emission sources were identified in PM10 and PM2.5: (1) a petrochemical source, 13% in PM10 and 8% in PM2.5; and (2) a mixed metallurgical-phosphate source, which accounts for 11-12% of PM10 and PM2.5. In PM2.5 a secondary source was also identified, contributing 17% of the mass. A complete characterization of the industrial emission plumes during their impact on the ground allowed for the identification of tracer species for specific point sources, such as the petrochemical, metallurgical, and fertilizer and phosphate production industries.

  8. A cryogenic thermal source for detector array characterization

    Science.gov (United States)

    Chuss, David T.; Rostem, Karwan; Wollack, Edward J.; Berman, Leah; Colazo, Felipe; DeGeorge, Martin; Helson, Kyle; Sagliocca, Marco

    2017-10-01

    We describe the design, fabrication, and validation of a cryogenically compatible quasioptical thermal source for characterization of detector arrays. The source is constructed using a graphite-loaded epoxy mixture that is molded into a tiled pyramidal structure. The mold is fabricated using a hardened steel template produced via a wire electron discharge machining process. The absorptive mixture is bonded to a copper backplate enabling thermalization of the entire structure and measurement of the source temperature. Measurements indicate that the reflectance of the source is <0.001 across a spectral band extending from 75 to 330 GHz.

  9. Aero particles characterization emitted by mobile sources

    International Nuclear Information System (INIS)

    Rojas V, A.; Romero G, E. T.; Lopez G, H.

    2009-01-01

    In our country, the mobile sources that account for most of the emissions to the atmosphere are concentrated in urban areas. For the present work, samples were obtained from the exhausts of road transport, such as passenger buses, freight transport and private vehicles of the metropolitan area of the Toluca valley. The material was analyzed by means of low-vacuum scanning electron microscopy and X-ray diffraction. The objective was to characterize the particles emitted by mobile sources, morphologically and chemically, to determine their structure, size and constituent elements. (Author)

  10. Quantum Communication with a High-Rate Entangled Photon Source

    Science.gov (United States)

    Wilson, Nathaniel C.; Chaffee, Dalton W.; Lekki, John D.; Wilson, Jeffrey D.

    2016-01-01

    A high generation rate photon-pair source using a dual element periodically-poled potassium titanyl phosphate (PP KTP) waveguide is described. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.

  11. A MODIFIED ROUTE DISCOVERY APPROACH FOR DYNAMIC SOURCE ROUTING (DSR) PROTOCOL IN MOBILE AD-HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    Alaa Azmi Allahham

    2017-02-01

    Full Text Available Mobile ad-hoc networks (MANETs) are involved in many applications, whether commercial or military, because of their characteristics: they do not depend on an infrastructure, and their elements are free to move. In return, this random mobility of the nodes causes many challenges, routing being one of them. Many types of routing protocols operate within MANETs; they are responsible for finding paths between the source and destination nodes and for updating these paths, which change constantly due to the dynamic topology of the network stemming from the constant random movement of the nodes. The DSR (Dynamic Source Routing) protocol is one of these; it consists of two main stages, route discovery and route maintenance, where the route discovery algorithm operates by blind flooding of request messages. Blind flooding is the most well-known broadcasting mechanism, but it is inefficient in terms of communication and resource utilization: it increases the probability of collisions, causes several copies of the same message to be sent repeatedly, and increases delay. Hence, a new mechanism for the route discovery stage and for caching the routes in the DSR algorithm, based on a node's location in the network and the direction of the broadcast, is proposed for better performance, especially in terms of delay and redundant packet rate. The implementation of the proposed algorithms showed positive results in terms of delay and overhead, and improved the performance of MANETs in general.
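
    The blind-flooding route discovery criticized above can be pictured in a few lines. The sketch below (Python; the names and the duplicate-suppression detail are illustrative, and plain DSR is modeled as a breadth-first flood rather than the paper's modified approach) shows how a route request accumulates the source route as it spreads:

    from collections import deque

    def dsr_route_discovery(adjacency, source, target):
        # Toy DSR-style discovery: flood route requests (RREQs); each RREQ
        # carries the route accumulated so far, nodes forward a given request
        # only once, and the first copy to reach the target defines the route.
        seen = {source}
        queue = deque([[source]])
        while queue:
            route = queue.popleft()
            node = route[-1]
            if node == target:
                return route    # in DSR a route reply retraces this path
            for neighbor in adjacency[node]:
                if neighbor not in seen:    # suppress duplicate rebroadcasts
                    seen.add(neighbor)
                    queue.append(route + [neighbor])
        return None

    # Example: dsr_route_discovery({'A': ['B', 'C'], 'B': ['A', 'D'],
    #                               'C': ['A', 'D'], 'D': ['B', 'C']}, 'A', 'D')
    # returns ['A', 'B', 'D'].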

  12. Quantum-key-distribution protocol with pseudorandom bases

    Science.gov (United States)

    Trushechkin, A. S.; Tregubov, P. A.; Kiktenko, E. O.; Kurochkin, Y. V.; Fedorov, A. K.

    2018-01-01

    Quantum key distribution (QKD) offers a way for establishing information-theoretical secure communications. An important part of QKD technology is a high-quality random number generator for the quantum-state preparation and for post-processing procedures. In this work, we consider a class of prepare-and-measure QKD protocols, utilizing additional pseudorandomness in the preparation of quantum states. We study one of such protocols and analyze its security against the intercept-resend attack. We demonstrate that, for single-photon sources, the considered protocol gives better secret key rates than the BB84 and the asymmetric BB84 protocols. However, the protocol strongly requires single-photon sources.

  13. A source-independent empirical correction procedure for the fast mobility and engine exhaust particle sizers

    Science.gov (United States)

    Zimmerman, Naomi; Jeong, Cheol-Heon; Wang, Jonathan M.; Ramos, Manuel; Wallace, James S.; Evans, Greg J.

    2015-01-01

    The TSI Fast Mobility Particle Sizer (FMPS) and Engine Exhaust Particle Sizer (EEPS) provide size distributions for 6-560 nm particles with a time resolution suitable for characterizing transient particle sources; however, the accuracy of these instruments can be source dependent, due to influences of particle morphology. The aim of this study was to develop a source-independent correction protocol for the FMPS and EEPS. The correction protocol consists of: (1) broadening the >80 nm size range of the distribution to account for under-sizing by the FMPS and EEPS; (2) applying an existing correction protocol in the 8-93 nm size range; and (3) dividing each size bin by the ratio of total concentration measured by the FMPS or EEPS and a water-based Condensation Particle Counter (CPC) as a surrogate scaling factor to account for particle morphology. Efficacy of the correction protocol was assessed for three sources: urban ambient air, diluted gasoline direct injection engine exhaust, and diluted diesel engine exhaust. Linear regression against a reference instrument, the Scanning Mobility Particle Sizer (SMPS), before and after applying the correction protocol demonstrated that the correction ensured agreement within 20%.
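
    The three-step correction reads directly as pseudocode. The sketch below (Python/NumPy) mirrors that structure; the broadening factor and the unity mid-range factors are placeholders for the values in the published protocol, and a simple sum stands in for a proper integral over the size distribution:

    import numpy as np

    def correct_fmps(diameters_nm, dNdlogDp, cpc_total):
        d = np.asarray(diameters_nm, dtype=float)
        n = np.asarray(dNdlogDp, dtype=float)
        # (1) broaden the >80 nm end to counter under-sizing
        #     (placeholder: a 10% upward shift of those bin diameters)
        d = np.where(d > 80.0, d * 1.10, d)
        # (2) apply the existing size-dependent correction in the 8-93 nm range
        #     (placeholder factors of 1; substitute the published values)
        n = n * np.ones_like(n)
        # (3) divide by the FMPS/CPC total-concentration ratio so the
        #     integrated count matches the CPC reference
        n = n / (n.sum() / cpc_total)
        return d, n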

  14. Integrated Characterization of DNAPL Source Zone Architecture in Clay Till and Limestone Bedrock

    DEFF Research Database (Denmark)

    Broholm, Mette Martina; Janniche, Gry Sander; Fjordbøge, Annika Sidelmann

    2014-01-01

    Background/Objectives. Characterization of dense non-aqueous phase liquid (DNAPL) source zone architecture is essential to develop accurate site-specific conceptual models, delineate and quantify contaminant mass, perform risk assessment, and select and design remediation alternatives. The activities aimed to apply innovative investigation methods and characterize the source zone hydrogeology and contamination to obtain an improved conceptual understanding of DNAPL source zone architecture in clay till and bryozoan limestone bedrock. Approach/Activities. A wide range of innovative and current site investigative tools for direct and indirect documentation and/or evaluation of DNAPL presence were combined in a multiple-lines-of-evidence approach. Results/Lessons Learned. Though no single technique was sufficient for characterization of DNAPL source zone architecture, the combined use of membrane interphase probing (MIP)...

  15. Characterizing sources of emissions from wildland fires

    Science.gov (United States)

    Roger D. Ottmar; Ana Isabel Miranda; David V. Sandberg

    2009-01-01

    Smoke emissions from wildland fire can be harmful to human health and welfare, impair visibility, and contribute to greenhouse gas emissions. The generation of emissions and heat release need to be characterized to estimate the potential impacts of wildland fire smoke. This requires explicit knowledge of the source, including size of the area burned, burn period,...

  16. Characterization of DBD plasma source for biomedical applications

    Energy Technology Data Exchange (ETDEWEB)

    Kuchenbecker, M; Vioel, W [University of Applied Sciences and Arts, Faculty of Natural Sciences and Technology, Von-Ossietzky-Str. 99, 37085 Goettingen (Germany); Bibinov, N; Awakowicz, P [Institute for Electrical Engineering and Plasma Technology, Ruhr-Universitaet Bochum, Universitaetstr. 150, 44780 Bochum (Germany); Kaemlimg, A; Wandke, D, E-mail: m.kuchenbecker@web.d, E-mail: Nikita.Bibinov@rub.d, E-mail: awakowicz@aept-ruhr-uni-bochum.d, E-mail: vioel@hawk-hhg.d [CINOGY GmbH, Max-Naeder-Str. 15, 37114 Duderstadt (Germany)

    2009-02-21

    The dielectric barrier discharge (DBD) plasma source for biomedical application is characterized using optical emission spectroscopy, plasma-chemical simulation and voltage-current measurements. This plasma source possesses only one electrode, covered by ceramic. A human body or some other object with sufficiently high electric capacitance, or connected to ground, can serve as the opposite electrode. The DBD consists of a number of microdischarge channels distributed in the gas gap between the electrodes and on the surface of the dielectric. To characterize the plasma conditions in the DBD source, an aluminium plate is used as the opposite electrode. Electric parameters, the diameter of the microdischarge channel and plasma parameters (electron distribution function and electron density) are determined. The gas temperature is measured in the microdischarge channel and calculated in the afterglow phase. The heating of the opposite electrode is studied using probe measurements. The gas and plasma parameters in the microdischarge channel are studied at varied distances between the electrodes. According to an energy balance study, the input microdischarge electric energy is dissipated mainly in the heating of the electrodes (about 90%) and partially (about 10%) in the production of chemically active species (atoms and metastable molecules).

  17. Developing a source-receptor methodology for the characterization of VOC sources in ambient air

    International Nuclear Information System (INIS)

    Borbon, A.; Badol, C.; Locoge, N.

    2005-01-01

    Since 2001, in France, continuous monitoring of about thirty ozone-precursor non-methane hydrocarbons (NMHC) has been carried out in several urban areas. The automated system for NMHC monitoring consists of sub-ambient preconcentration on a cooled multi-sorbent trap followed by thermal desorption and bidimensional Gas Chromatography/Flame Ionisation Detection analysis. The large body of data collected and its exploitation should provide a qualitative and quantitative assessment of hydrocarbon sources. This should help in the definition of relevant emission regulation strategies, as stated by the European Directive on ozone in ambient air (2002/3/EC). The purpose of this work is to present the bases and the contributions of an original methodology, known as source-receptor, for the characterization of NMHC sources. It is a statistical and diagnostic approach, adaptable and transposable to any urban site, which integrates the spatial and temporal dynamics of the emissions. The methods for source identification combine descriptive and more complex complementary approaches: 1) a univariate approach through the analysis of NMHC time series and concentration roses, 2) a bivariate approach through Graphical Ratio Analysis and characterization of scatterplot distributions of hydrocarbon pairs, 3) a multivariate approach with Principal Component Analyses on various time bases. A linear regression model is finally developed to estimate the spatial and temporal source contributions. Apart from vehicle exhaust emissions, the sources of interest are: combustion and fossil-fuel-related activities, petrol and/or solvent evaporation, the double anthropogenic and biogenic origin of isoprene, and other industrial activities depending on local parameters. (author)
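
    The multivariate step of such a source-receptor method is easy to sketch. Below is a minimal example (Python with scikit-learn; the function name and the factor count are illustrative assumptions) of extracting covarying species groups from an hours-by-species NMHC concentration matrix:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def nmhc_source_factors(concentrations, n_factors=4):
        # concentrations: (n_hours x n_species) array of NMHC measurements.
        # PCA groups species that covary in time; each loading vector is a
        # candidate source profile (traffic, solvent evaporation, ...).
        X = StandardScaler().fit_transform(np.asarray(concentrations, float))
        pca = PCA(n_components=n_factors)
        scores = pca.fit_transform(X)   # factor time series at the receptor
        loadings = pca.components_      # species profile of each factor
        return scores, loadings, pca.explained_variance_ratio_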

  18. Spectroscopic characterization of low dose rate brachytherapy sources

    Science.gov (United States)

    Beach, Stephen M.

    The low dose rate (LDR) brachytherapy seeds employed in permanent radioactive-source implant treatments usually use one of two radionuclides, 125I or 103Pd. The theoretically expected spectroscopic output from these sources can be obtained via Monte Carlo calculation based upon seed dimensions and materials as well as the bare-source photon emissions for that specific radionuclide. However, the discrepancies resulting from inconsistent manufacturing of sources in comparison to each other within model groups, and from simplified Monte Carlo calculational geometries, ultimately result in undesirably large uncertainties in the Monte Carlo calculated values. This dissertation describes experimentally attained spectroscopic outputs of the clinically used brachytherapy sources in air and in liquid water. Such knowledge can then be applied to characterize these sources by a more fundamental and metrologically pure classification, that of energy-based dosimetry. The spectroscopic results contained within this dissertation can be utilized in the verification and benchmarking of Monte Carlo calculational models of these brachytherapy sources. This body of work was undertaken to establish a usable spectroscopy system and analysis methods for the meaningful study of LDR brachytherapy seeds. The development of a correction algorithm and the analysis of the resultant spectroscopic measurements are presented. The characterization of the spectrometer and the subsequent deconvolution of the measured spectrum to obtain the true spectrum, free of any perturbations caused by the spectrometer itself, is an important contribution of this work. The approach of spectroscopic deconvolution that was applied in this work is derived in detail and applied to the physical measurements. In addition, the spectroscopically based analogs to the LDR dosimetry parameters that are currently employed are detailed, as well as the development of the theory and measurement methods to arrive at these...

  19. Playing With Population Protocols

    Directory of Open Access Journals (Sweden)

    Xavier Koegler

    2009-06-01

    Full Text Available Population protocols have been introduced as a model of sensor networks consisting of very limited mobile agents with no control over their own movement: A collection of anonymous agents, modeled by finite automata, interact in pairs according to some rules. Predicates on the initial configurations that can be computed by such protocols have been characterized under several hypotheses. We discuss here whether and when the rules of interactions between agents can be seen as a game from game theory. We do so by discussing several basic protocols.
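
    A minimal simulation makes the interaction model above concrete. The sketch below (Python; an OR-predicate protocol under a uniformly random scheduler, chosen as the simplest textbook example rather than anything specific to this paper) lets anonymous two-state agents meet in random pairs:

    import random

    def run_or_protocol(initial_bits, max_steps=10_000):
        # Agents are finite automata with state 0 or 1; on each interaction
        # both parties adopt the OR of their states, so the population
        # converges to all-1 iff some agent started in state 1.
        states = list(initial_bits)
        n = len(states)
        for _ in range(max_steps):
            i, j = random.sample(range(n), 2)   # scheduler picks a random pair
            states[i] = states[j] = states[i] | states[j]
            if all(states) or not any(states):  # stable configuration reached
                break
        return states

    # run_or_protocol([0, 0, 1, 0]) eventually returns [1, 1, 1, 1].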

  20. Characterization of the pulse plasma source

    International Nuclear Information System (INIS)

    Milosavljevic, V; Karkari, S K; Ellingboe, A R

    2007-01-01

    Characterization of the pulse plasma source through the determination of the local thermodynamic equilibrium (LTE) threshold is described. The maximum electron density measured at the peak in discharge current is determined by the width of the He II Paschen alpha spectral line, and the electron temperature is determined from the ratios of the relative intensities of spectral lines emitted from successive ionized stages of atoms. The electron density and temperature maximum values are measured to be 1.3 × 10¹⁷ cm⁻³ and 19 000 K, respectively. These are typical characteristics for low-pressure, pulsed plasma sources for an input energy of 15.8 J at 130 Pa pressure in a helium-argon mixture. The use of LTE-based analysis of the emission spectra is justified by measurement of the local plasma electron density at four positions in the discharge tube using a floating hairpin resonance probe. The hairpin resonance probe data are collected during the creation and decay phases of the pulse. From the spatio-temporal profile of the plasma density, a 60 μs time window during which LTE exists throughout the entire plasma source is determined.

  1. Coconut coir pith lignin: A physicochemical and thermal characterization.

    Science.gov (United States)

    Asoka Panamgama, L; Peramune, P R U S K

    2018-07-01

    The structural and thermal features of coconut coir pith lignin, isolated by three different extraction protocols incorporating two different energy supply sources, were characterized by different analytical tools. The three chemical extraction protocols were alkaline (7.5% (w/v) NaOH), organosolv (85% (v/v) formic and acetic acids at a 7:3 (v/v) ratio) and polyethylene glycol (PEG) at an 80:20 wt% PEG:water ratio. The two sources of energy were thermal and microwave. Raw lignins were modified by epichlorohydrin to enhance reactivity, and the characteristics of raw and modified lignins were comparatively analysed. Using the thermal energy source, the alkaline and organosolv processes obtained the highest and lowest lignin yields of 26.4±1.5 wt% and 3.4±0.2 wt%, respectively, as shown by wet chemical analysis. Specific functional group analysis by Fourier transform infrared spectra (FTIR) revealed that significantly different amounts of hydroxyl and carbonyl groups exist in alkaline, organosolv and PEG lignins. Thermogravimetric analysis (TGA) illustrated that the lowest degradation onset temperature was recorded for organosolv lignin, the overall order being organosolv < ... For each protocol, microwave energy provided the highest wt% loss rate, indicating the lowest thermal stability. The derivative temperature difference profiles from the microwave and thermal heating sources for different extraction protocols are discussed in detail. These findings show that lignin extraction from coir pith can be performed efficiently with several protocols and that those methods offer practical value to industry. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Portable abdomen radiography. Moving to thickness-based protocols

    International Nuclear Information System (INIS)

    Sanchez, Adrian A.; Reiser, Ingrid; Baxter, Tina; Zhang, Yue; Finkle, Joshua H.; Lu, Zheng Feng; Feinstein, Kate A.

    2018-01-01

    Default pediatric protocols on many digital radiography systems are configured based on patient age. However, age does not adequately characterize patient size, which is the principal determinant of proper imaging technique. Use of default pediatric protocols by inexperienced technologists can result in patient overexposure, inadequate image quality, or repeated examinations. To ensure diagnostic image quality at a well-managed patient radiation exposure by transitioning to thickness-based protocols for pediatric portable abdomen radiography. We aggregated patient thickness data, tube current-time product (mAs), kilovoltage peak (kVp), exposure index (EI), source-to-detector distance, and grid use for all portable abdomen radiographs performed in our pediatric hospital in a database with a combination of automated and manual data collection techniques. We then analyzed the database and used it as the basis to construct thickness-based protocols with consistent image quality across varying patient thicknesses, as determined by the EI. Retrospective analysis of pediatric portable exams performed at our adult-focused hospitals demonstrated substantial variability in EI relative to our pediatric hospital. Data collection at our pediatric hospital over 4 months accumulated roughly 800 portable abdomen exams, which we used to develop a thickness-based technique chart. Through automated retrieval of data in our systems' digital radiography exposure logs and recording of patient abdomen thickness, we successfully developed thickness-based techniques for portable abdomen radiography. (orig.)

  3. Portable abdomen radiography. Moving to thickness-based protocols

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Adrian A.; Reiser, Ingrid; Baxter, Tina; Zhang, Yue; Finkle, Joshua H.; Lu, Zheng Feng; Feinstein, Kate A. [University of Chicago Medical Center, Department of Radiology, Chicago, IL (United States)

    2018-02-15

    Default pediatric protocols on many digital radiography systems are configured based on patient age. However, age does not adequately characterize patient size, which is the principal determinant of proper imaging technique. Use of default pediatric protocols by inexperienced technologists can result in patient overexposure, inadequate image quality, or repeated examinations. To ensure diagnostic image quality at a well-managed patient radiation exposure by transitioning to thickness-based protocols for pediatric portable abdomen radiography. We aggregated patient thickness data, tube current-time product (mAs), kilovoltage peak (kVp), exposure index (EI), source-to-detector distance, and grid use for all portable abdomen radiographs performed in our pediatric hospital in a database with a combination of automated and manual data collection techniques. We then analyzed the database and used it as the basis to construct thickness-based protocols with consistent image quality across varying patient thicknesses, as determined by the EI. Retrospective analysis of pediatric portable exams performed at our adult-focused hospitals demonstrated substantial variability in EI relative to our pediatric hospital. Data collection at our pediatric hospital over 4 months accumulated roughly 800 portable abdomen exams, which we used to develop a thickness-based technique chart. Through automated retrieval of data in our systems' digital radiography exposure logs and recording of patient abdomen thickness, we successfully developed thickness-based techniques for portable abdomen radiography. (orig.)

  4. Linear Logical Voting Protocols

    DEFF Research Database (Denmark)

    DeYoung, Henry; Schürmann, Carsten

    2012-01-01

    Current approaches to electronic implementations of voting protocols involve translating legal text to source code of an imperative programming language. Because the gap between legal text and source code is very large, it is difficult to trust that the program meets its legal specification. In r...

  5. Installation and Characterization of Charged Particle Sources for Space Environmental Effects Testing

    Science.gov (United States)

    Skevington, Jennifer L.

    2010-01-01

    Charged particle sources are integral devices used by Marshall Space Flight Center's Environmental Effects Branch (EM50) to simulate space environments for accurate testing of materials and systems. By using these sources inside custom vacuum systems, materials can be tested to determine charging and discharging properties as well as resistance to sputter damage. This knowledge can enable scientists and engineers to choose proper materials that will not fail in harsh space environments. This paper combines the steps utilized to build a low-energy electron gun (the "Skevington 3000") with the methods used to characterize the output of both the Skevington 3000 and a manufactured xenon ion source. Such characterizations include beam flux, beam uniformity, and beam energy. Both sources were deemed suitable for simulating environments in future testing.

  6. Analyzing the effect of routing protocols on media access control protocols in radio networks

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C. L. (Christopher L.); Drozda, M. (Martin); Marathe, A. (Achla); Marathe, M. V. (Madhav V.)

    2002-01-01

    We study the effect of routing protocols on the performance of media access control (MAC) protocols in wireless radio networks. Three well known MAC protocols: 802.11, CSMA, and MACA are considered. Similarly three recently proposed routing protocols: AODV, DSR and LAR scheme 1 are considered. The experimental analysis was carried out using GloMoSim: a tool for simulating wireless networks. The main focus of our experiments was to study how the routing protocols affect the performance of the MAC protocols when the underlying network and traffic parameters are varied. The performance of the protocols was measured w.r.t. five important parameters: (i) number of received packets, (ii) average latency of each packet, (iii) throughput (iv) long term fairness and (v) number of control packets at the MAC layer level. Our results show that combinations of routing and MAC protocols yield varying performance under varying network topology and traffic situations. The result has an important implication; no combination of routing protocol and MAC protocol is the best over all situations. Also, the performance analysis of protocols at a given level in the protocol stack needs to be studied not locally in isolation but as a part of the complete protocol stack. A novel aspect of our work is the use of statistical technique, ANOVA (Analysis of Variance) to characterize the effect of routing protocols on MAC protocols. This technique is of independent interest and can be utilized in several other simulation and empirical studies.
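
    The ANOVA step mentioned above is a one-line test once per-run performance samples are in hand. A minimal sketch (Python with SciPy; the throughput numbers are made-up placeholders, not data from the study):

    from scipy.stats import f_oneway

    # Hypothetical per-run throughput samples for one MAC protocol under
    # three routing protocols (AODV, DSR, LAR scheme 1).
    aodv = [142, 150, 147, 139, 151]
    dsr = [128, 133, 130, 126, 135]
    lar = [138, 141, 136, 144, 140]

    f_stat, p_value = f_oneway(aodv, dsr, lar)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p: routing choice matters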

  7. SiPMs coated with TPB: coating protocol and characterization for NEXT

    International Nuclear Information System (INIS)

    Álvarez, V; Agramunt, J; Ball, M; Bayarri, J; Cárcel, S; Cervera, A; Díaz, J; Batallé, M; Borges, F I G; Conde, C A N; Dias, T H V T; Bolink, H; Brine, H; Carmona, J M; Castel, J; Cebrián, S; Dafni, T; Catalá, J M; Esteve, R; Chan, D

    2012-01-01

    Silicon photomultipliers (SiPM) are the photon detectors chosen for the tracking readout in NEXT, a neutrinoless ββ decay experiment which uses a high-pressure gaseous xenon time projection chamber (TPC). The reconstruction of event track and topology in this gaseous detector is a key handle for background rejection. Among the commercially available sensors that can be used for tracking, SiPMs offer important advantages, mainly high gain, ruggedness, cost-effectiveness and radiopurity. Their main drawback, however, is their lack of sensitivity in the emission spectrum of the xenon scintillation (peaked at 175 nm). This is overcome by coating these sensors with the organic wavelength shifter tetraphenyl butadiene (TPB). In this paper we describe the protocol developed for coating the SiPMs with TPB and the measurements performed to characterize the coatings, as well as the performance of the coated sensors in the UV-VUV range.

  8. A Site Characterization Protocol for Evaluating the Potential for Triggered or Induced Seismicity Resulting from Wastewater Injection and Hydraulic Fracturing

    Science.gov (United States)

    Walters, R. J.; Zoback, M. D.; Gupta, A.; Baker, J.; Beroza, G. C.

    2014-12-01

    Regulatory and governmental agencies, individual companies and industry groups and others have recently proposed, or are developing, guidelines aimed at reducing the risk associated with earthquakes triggered by waste water injection or hydraulic fracturing. While there are a number of elements common to the guidelines proposed, not surprisingly, there are also some significant differences among them and, in a number of cases, important considerations that are not addressed. The goal of this work is to develop a comprehensive protocol for site characterization based on a rigorous scientific understanding of the responsible processes. Topics addressed will include the geologic setting (emphasizing faults that might be affected), historical seismicity, hydraulic characterization of injection and adjacent intervals, geomechanical characterization to identify potentially active faults, plans for seismic monitoring and reporting, plans for monitoring and reporting injection (pressure, volumes, and rates), other factors contributing to risk (potentially affected population centers, structures, and facilities), and implementing a modified Probabilistic Seismic Hazard Analysis (PSHA). The guidelines will be risk based and adaptable, rather than prescriptive, for a proposed activity and region of interest. They will be goal oriented and will rely, to the degree possible, on established best practice procedures, referring to existing procedures and recommendations. By developing a risk-based site characterization protocol, we hope to contribute to the development of rational and effective measures for reducing the risk posed by activities that potentially trigger earthquakes.
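
    The modified Probabilistic Seismic Hazard Analysis (PSHA) named at the end of the protocol can be illustrated with the standard hazard integral. The sketch below (Python) computes the annual rate of exceeding a ground-motion level for a single source; the activity rate, b-value, distance and ground-motion model coefficients are all made-up placeholders, and an injection-induced variant would, for example, make the activity rate time- and pressure-dependent:

    import math

    def annual_exceedance_rate(a_g, nu=0.1, b=1.0, m_min=4.0, m_max=7.0,
                               dist_km=10.0, n_bins=30):
        # lambda(PGA > a_g) ~ nu * sum_m p(m) * P(PGA > a_g | m, r) * dm
        beta = b * math.log(10.0)
        norm = 1.0 - math.exp(-beta * (m_max - m_min))
        dm = (m_max - m_min) / n_bins
        rate = 0.0
        for k in range(n_bins):
            m = m_min + (k + 0.5) * dm
            pdf = beta * math.exp(-beta * (m - m_min)) / norm  # Gutenberg-Richter
            # placeholder ground-motion model with lognormal scatter:
            mu_ln = -1.0 + 1.0 * m - 1.5 * math.log(dist_km)   # ln median PGA (g)
            z = (math.log(a_g) - mu_ln) / 0.6
            rate += nu * pdf * 0.5 * math.erfc(z / math.sqrt(2.0)) * dm
        return rate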

  9. Computational Biology Methods for Characterization of Pluripotent Cells.

    Science.gov (United States)

    Araúzo-Bravo, Marcos J

    2016-01-01

    Pluripotent cells are a powerful tool for regenerative medicine and drug discovery. Several techniques have been developed to induce pluripotency, or to extract pluripotent cells from different tissues and biological fluids. However, the characterization of pluripotency requires tedious, expensive, time-consuming, and not always reliable wet-lab experiments; thus, an easy, standard quality-control protocol for pluripotency assessment remains to be established. Here the use of high-throughput techniques comes to help, and in particular the employment of gene expression microarrays, which has become a complementary technique for cellular characterization. Research has shown that transcriptomics comparison with an Embryonic Stem Cell (ESC) of reference is a good approach to assess pluripotency. Under the premise that the best protocol is a computer software source code, here I propose and explain, line by line, a software protocol coded in R-Bioconductor for pluripotency assessment based on the comparison of transcriptomics data of pluripotent cells with an ESC of reference. I provide advice for experimental design, warn about possible pitfalls, and give guides for results interpretation.
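
    The published protocol is in R-Bioconductor; for consistency with the other sketches in this listing, the core idea (transcriptome-wide similarity to an ESC reference) can be paraphrased in a few lines of Python. The function name and the use of Pearson correlation as the similarity score are illustrative assumptions, not the paper's exact method:

    import numpy as np

    def pluripotency_score(sample_expr, esc_reference_expr):
        # Inputs: matched 1-D arrays of (log) expression values per gene for
        # the candidate line and the reference Embryonic Stem Cell line.
        x = np.asarray(sample_expr, dtype=float)
        y = np.asarray(esc_reference_expr, dtype=float)
        r = np.corrcoef(x, y)[0, 1]   # Pearson correlation across genes
        return r                      # closer to 1: transcriptome resembles ESC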

  10. LEVIS ion source and beam characterization on PBFA-II

    International Nuclear Information System (INIS)

    Renk, T.J.; Tisone, G.C.; Adams, R.G.; Bailey, J.E.; Filuk, A.B.; Johnson, D.J.; Pointon, T.D.

    1993-01-01

    We report on the continuing development of the LEVIS (Laser Evaporation Ion Source) lithium active ion source for the 15-cm radial focussing ion diode on PBFA-II. We found previously that DC heating of the anode surface to 150 degrees C maximum for 5 hours resulted in a pure lithium beam. This paper discusses the characterization of LEVIS source uniformity by Faraday cup arrays and multiple lines of sight for visible light spectroscopy. These diagnostics give some evidence of nonuniformity in both A-K gap electric fields and ion current density. Despite this, however, the measured focal spot size appears smaller than with a passive LiF source operated in the same magnetic field topology. Experiments using a curved anode for vertical beam focussing show the ion beam turn-on delay reduced by 5 ns by altering the magnetic field topology as well as the anode curvature. Another 3-5 ns reduction was achieved by switching from a passive LiF to the active LEVIS source.

  11. Characterization and packaging of disused sealed radioactive sources

    International Nuclear Information System (INIS)

    Aguilar, S.L.

    2013-01-01

    In Bolivia, disused sealed sources and radioactive waste are generated by the use of radioactive materials in industry, research and medicine, the latter including diagnosis and treatment. Given that exposure to ionizing radiation is a potential hazard to the personnel who apply it, to those who benefit from its use, and to the community at large, it is necessary to control the activities in this field. The Instituto Boliviano de Ciencia y Tecnologia Nuclear - IBTEN is working on a regional project of the International Atomic Energy Agency - IAEA, project RLA/09/062 - TSA 4, Strengthening the National Infrastructure and Regulatory Framework for the Safe Management of Radioactive Waste in Latin America. This project has strengthened the regulatory framework regarding the safe management of radioactive waste. The aim of this work focused primarily on the safety aspects of the management of disused sealed sources. The tasks are listed below: 1. Characterization of disused sealed sources; 2. Preparation for transport to temporary storage; 3. Control of all disused radioactive sources. (author)

  12. Using a tungsten rollbar to characterize the source spot of a megavoltage bremsstrahlung linac

    International Nuclear Information System (INIS)

    Schach von Wittenau, A.E.; Logan, C.M.; Rikard, R.

    2002-01-01

    In photon teletherapy, the size and functional form of the photon source spot affect both the sharpness of the penumbra of treatment fields and the sharpness of portal images. Photon source spot parameters are also used in photon teletherapy dose calculation codes. A simple method for characterizing the source spot would complement the existing, more involved methods that have been described in the medical physics literature. Such a method, using a rollbar made of tungsten or other high-Z metal, is used in industrial radiography. We describe the use of a tungsten rollbar for characterizing the source spot edge spread function (and thereby the source spot size and shape) of a megavoltage bremsstrahlung photon source. We use Monte Carlo simulations to quantify anticipated experimental artifacts of the method, assuming typical spot sizes for circ-function, Gaussian, and Bennett line shapes. We illustrate the use of the rollbar method by characterizing the source spot of a typical 9 MV linac used for industrial radiography. The source spot is analyzed using two approaches: (a) fitting the rollbar image with analytic functions and (b) using Abel inversion to obtain the cylindrically symmetric spot profile consistent with the measured rollbar image. Monte Carlo simulations, based on a 6 MV photon teletherapy accelerator, suggest that aspects of the method are applicable to medical bremsstrahlung sources
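
    The Abel inversion used in approach (b) above has a compact numerical form. The sketch below (Python/NumPy; a naive trapezoidal discretization that skips the singular endpoint, adequate for illustration but not a production reconstruction) recovers a cylindrically symmetric spot profile f(r) from a measured line-integrated profile F(y), e.g. the derivative of the rollbar edge-spread image:

    import numpy as np

    def abel_invert(y, F):
        # Inverse Abel transform:
        #   f(r) = -(1/pi) * integral_r^inf (dF/dy) / sqrt(y**2 - r**2) dy
        y = np.asarray(y, dtype=float)          # increasing radial coordinates
        dF = np.gradient(np.asarray(F, dtype=float), y)
        f = np.zeros_like(y)
        for i in range(len(y) - 1):
            r, yy = y[i], y[i + 1:]             # start past the singular point
            integrand = dF[i + 1:] / np.sqrt(yy ** 2 - r ** 2)
            f[i] = -np.trapz(integrand, yy) / np.pi
        return f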

  13. Dual-energy CT for the characterization of urinary calculi: In vitro and in vivo evaluation of a low-dose scanning protocol

    International Nuclear Information System (INIS)

    Thomas, C.; Patschan, O.; Nagele, U.; Stenzl, A.; Ketelsen, D.; Tsiflikas, I.; Reimann, A.; Brodoefel, H.; Claussen, C.; Kopp, A.; Heuschmid, M.; Schlemmer, H.P.; Buchgeister, M.

    2009-01-01

    The efficiency and radiation dose of a low-dose dual-energy (DE) CT protocol for the evaluation of urinary calculus disease were evaluated. A low-dose dual-source DE-CT renal calculi protocol (140 kV, 46 mAs; 80 kV, 210 mAs) was derived from the single-energy (SE) CT protocol used in our institution for the detection of renal calculi (120 kV, 75 mAs). An Alderson-Rando phantom was equipped with thermoluminescence dosimeters and examined by CT with both protocols. The effective doses were calculated. Fifty-one patients with suspected or known urinary calculus disease underwent DE-CT. DE analysis was performed if calculi were detected, using a dedicated software tool. Results were compared to chemical analysis after invasive calculus extraction. An effective dose of 3.43 mSv (male) and 5.30 mSv (female) was measured in the phantom for the DE protocol (vs. 3.17/4.57 mSv for the SE protocol). Urinary calculi were found in 34 patients; in 28 patients, calculi were removed and analyzed (23 patients with calcified calculi, three with uric acid calculi, one with 2,8-dihydroxyadenine calculi, one patient with a mixed struvite calculus). DE analysis was able to distinguish between calcified and non-calcified calculi in all cases. In conclusion, dual-energy urinary calculus analysis is effective even with a low-dose protocol. The protocol tested in this study reliably identified calcified urinary calculi in vivo. (orig.)

  14. Dual-energy CT for the characterization of urinary calculi: In vitro and in vivo evaluation of a low-dose scanning protocol.

    Science.gov (United States)

    Thomas, C; Patschan, O; Ketelsen, D; Tsiflikas, I; Reimann, A; Brodoefel, H; Buchgeister, M; Nagele, U; Stenzl, A; Claussen, C; Kopp, A; Heuschmid, M; Schlemmer, H-P

    2009-06-01

    The efficiency and radiation dose of a low-dose dual-energy (DE) CT protocol for the evaluation of urinary calculus disease were evaluated. A low-dose dual-source DE-CT renal calculi protocol (140 kV, 46 mAs; 80 kV, 210 mAs) was derived from the single-energy (SE) CT protocol used in our institution for the detection of renal calculi (120 kV, 75 mAs). An Alderson-Rando phantom was equipped with thermoluminescence dosimeters and examined by CT with both protocols. The effective doses were calculated. Fifty-one patients with suspected or known urinary calculus disease underwent DE-CT. DE analysis was performed if calculi were detected, using a dedicated software tool. Results were compared to chemical analysis after invasive calculus extraction. An effective dose of 3.43 mSv (male) and 5.30 mSv (female) was measured in the phantom for the DE protocol (vs. 3.17/4.57 mSv for the SE protocol). Urinary calculi were found in 34 patients; in 28 patients, calculi were removed and analyzed (23 patients with calcified calculi, three with uric acid calculi, one with 2,8-dihydroxyadenine calculi, one patient with a mixed struvite calculus). DE analysis was able to distinguish between calcified and non-calcified calculi in all cases. In conclusion, dual-energy urinary calculus analysis is effective even with a low-dose protocol. The protocol tested in this study reliably identified calcified urinary calculi in vivo.

  15. Dual-energy CT for the characterization of urinary calculi: In vitro and in vivo evaluation of a low-dose scanning protocol

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, C. [University of Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Patschan, O.; Nagele, U.; Stenzl, A. [University of Tuebingen, Department of Urology, Tuebingen (Germany); Ketelsen, D.; Tsiflikas, I.; Reimann, A.; Brodoefel, H.; Claussen, C.; Kopp, A.; Heuschmid, M.; Schlemmer, H.P. [University of Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Buchgeister, M. [University of Tuebingen, Medical Physics, Department of Radiation Oncology, Tuebingen (Germany)

    2009-06-15

    The efficiency and radiation dose of a low-dose dual-energy (DE) CT protocol for the evaluation of urinary calculus disease were evaluated. A low-dose dual-source DE-CT renal calculi protocol (140 kV, 46 mAs; 80 kV, 210 mAs) was derived from the single-energy (SE) CT protocol used in our institution for the detection of renal calculi (120 kV, 75 mAs). An Alderson-Rando phantom was equipped with thermoluminescence dosimeters and examined by CT with both protocols. The effective doses were calculated. Fifty-one patients with suspected or known urinary calculus disease underwent DE-CT. DE analysis was performed if calculi were detected, using a dedicated software tool. Results were compared to chemical analysis after invasive calculus extraction. An effective dose of 3.43 mSv (male) and 5.30 mSv (female) was measured in the phantom for the DE protocol (vs. 3.17/4.57 mSv for the SE protocol). Urinary calculi were found in 34 patients; in 28 patients, calculi were removed and analyzed (23 patients with calcified calculi, three with uric acid calculi, one with 2,8-dihydroxyadenine calculi, one patient with a mixed struvite calculus). DE analysis was able to distinguish between calcified and non-calcified calculi in all cases. In conclusion, dual-energy urinary calculus analysis is effective even with a low-dose protocol. The protocol tested in this study reliably identified calcified urinary calculi in vivo. (orig.)

  16. Measurement of radon in Spanish houses: characterization of its sources

    International Nuclear Information System (INIS)

    1998-01-01

    Determinations of indoor radon concentrations carried out by different universities are analyzed. The programs of the Cantabria, Valencia, Barcelona and La Laguna universities are presented. These programs study the environmental impact of radon in Barcelona and Madrid and characterize the radon sources.

  17. Development and characterization of electron sources for diffraction applications

    International Nuclear Information System (INIS)

    Casandruc, Albert

    2015-12-01

    The dream of controlling the chemical reactions that are essential to life is now closer than ever to being realized. Recent scientific progress has made it possible to investigate phenomena and processes which unfold at the angstrom scale and at rates on the order of femtoseconds. Techniques such as Ultrafast Electron Diffraction (UED) are currently able to reveal the spatial atomic configuration of systems with unit cell sizes on the order of a few nanometers with about 100 femtosecond temporal resolution. Still, major advances are needed for structural interrogation of biological systems like protein crystals, which have unit cell sizes of 10 nanometers or larger, and sample sizes of less than one micrometer. For such samples, the performance of these electron-based techniques is now limited by the quality, in particular the brightness, of the electron source. The current Ph.D. work represents a contribution towards the development and the characterization of electron sources which are essential to static and time-resolved electron diffraction techniques. The focus was on electron source fabrication and electron beam characterization measurements, using the solenoid and aperture scan techniques, but also on the development and maintenance of the relevant experimental setups. As a result, new experimental facilities are now available in the group and, at the same time, novel concepts for generating electron beams for electron diffraction applications have been developed. In terms of existing electron sources, the capability to trigger and detect field emission from single double-gated Mo field emitter tips was successfully proven. These sharp emitter tips promise high-brightness electron beams, but investigating individual such structures required new engineering. Secondly, the influence of the surface electric field on electron beam properties has been systematically studied for flat Mo photocathodes. This study is very valuable especially for state...

  18. Development and characterization of electron sources for diffraction applications

    Energy Technology Data Exchange (ETDEWEB)

    Casandruc, Albert

    2015-12-15

    The dream to control chemical reactions that are essential to life is now closer than ever to being realized. Recent scientific progress has made it possible to investigate phenomena and processes which unfold at the angstrom scale and at rates on the order of femtoseconds. Techniques such as Ultrafast Electron Diffraction (UED) are currently able to reveal the spatial atomic configuration of systems with unit cell sizes on the order of a few nanometers with about 100 femtosecond temporal resolution. Still, major advances are needed for structural interrogation of biological systems like protein crystals, which have unit cell sizes of 10 nanometers or larger, and sample sizes of less than one micrometer. For such samples, the performance of these electron-based techniques is now limited by the quality, in particular the brightness, of the electron source. The current Ph.D. work represents a contribution towards the development and the characterization of electron sources which are essential to static and time-resolved electron diffraction techniques. The focus was on electron source fabrication and electron beam characterization measurements, using the solenoid and the aperture scan techniques, but also on the development and maintenance of the relevant experimental setups. As a result, new experimental facilities are now available in the group and, at the same time, novel concepts for generating electron beams for electron diffraction applications have been developed. In terms of existing electron sources, the capability to trigger and detect field emission from single double-gated field emitter Mo tips was successfully proven. These sharp emitter tips promise high brightness electron beams, but investigating individual such structures required new engineering. Secondly, the influence of the surface electric field on electron beam properties has been systematically investigated for flat Mo photocathodes. This study is very valuable especially for state

  19. Diagnosis of pulmonary artery embolism. Comparison of single-source CT and 3rd generation dual-source CT using a dual-energy protocol regarding image quality and radiation dose

    International Nuclear Information System (INIS)

    Petritsch, Bernhard; Kosmala, Aleksander; Gassenmeier, Tobias; Weng, Andreas Max; Veldhoen, Simon; Kunz, Andreas Steven; Bley, Thorsten Alexander

    2017-01-01

    To compare radiation dose and subjective and objective image quality of 3rd generation dual-source CT (DSCT) and dual-energy CT (DECT) with conventional 64-slice single-source CT (SSCT) for pulmonary CTA. 180 pulmonary CTA studies were performed in three patient cohorts of 60 patients each. Group 1: conventional SSCT 120 kV (ref.); group 2: single-energy DSCT 100 kV (ref.); group 3: DECT 90/Sn150 kV. CTDIvol, DLP, and effective radiation dose were reported, and CT attenuation (HU) was measured at three central and peripheral levels. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Two readers assessed subjective image quality according to a five-point scale. Mean CTDIvol and DLP were significantly lower in the dual-energy group compared to the SSCT group (p < 0.001 [CTDIvol]; p < 0.001 [DLP]) and the DSCT group (p = 0.003 [CTDIvol]; p = 0.003 [DLP]), respectively. The effective dose in the DECT group was 2.79 ± 0.95 mSv, significantly lower than in the SSCT group (4.60 ± 1.68 mSv, p < 0.001) and the DSCT group (4.24 ± 2.69 mSv, p = 0.003). The SNR and CNR were significantly higher in the DSCT group (p < 0.001). Subjective image quality did not differ significantly among the three protocols and was rated good to excellent in 75% (135/180) of cases, with an inter-observer agreement of 80%. Dual-energy pulmonary CTA protocols of 3rd generation dual-source scanners allow for a significant reduction of radiation dose while providing excellent image quality and potential additional information by means of perfusion maps. Dual-energy CT with the 90/Sn150 kV configuration allows for significant dose reduction in pulmonary CTA. Subjective image quality was similar among the three evaluated CT protocols (64-slice SSCT, single-energy DSCT, 90/Sn150 kV DECT) and was rated good to excellent in 75% of cases. Dual-energy CT provides potential additional information by means of iodine distribution maps.
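
For orientation, effective doses like those quoted above are conventionally obtained from the scanner-reported dose-length product via a region-specific conversion coefficient; assuming the standard adult-chest coefficient was used here, the relation is:

```latex
E \approx k \cdot \mathrm{DLP}, \qquad k_{\text{chest}} \approx 0.014\ \mathrm{mSv\,mGy^{-1}\,cm^{-1}}
```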

  20. Radiological Characterization Technical Report on Californium-252 Sealed Source Transuranic Debris Waste for the Off-Site Source Recovery Project at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, Alexander [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-04-24

    This document describes the development and approach for the radiological characterization of Cf-252 sealed sources for shipment to the Waste Isolation Pilot Plant. The report combines information on the nuclear material content of each individual source (mass or activity and date of manufacture) with information and data on the radionuclide distributions within the originating nuclear material. This approach allows for complete and accurate characterization of the waste container without the need to take additional measurements. The radionuclide uncertainties, developed from acceptable knowledge (AK) information regarding the source material, are applied to the summed activities in the drum. The AK information used in the characterization of Cf-252 sealed sources has been qualified by the peer review process, which has been reviewed and accepted by the Environmental Protection Agency.
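
Combining each source's activity at manufacture with its manufacture date implies a decay correction to the assay date. A hypothetical sketch of that step follows (not the OSRP's actual code; the helper names are made up), using the roughly 2.645-year Cf-252 half-life:

```python
# Hypothetical illustration: decay-correcting a Cf-252 sealed source's activity
# from its value at manufacture, as implied by characterizing sources from
# "mass or activity and date of manufacture".
from datetime import date
from math import log, exp

CF252_HALF_LIFE_YEARS = 2.645  # approximate half-life of Cf-252

def activity_today(a0_mbq: float, manufactured: date, today: date) -> float:
    """Activity (MBq) after exponential decay from the manufacture date."""
    years = (today - manufactured).days / 365.25
    decay_constant = log(2) / CF252_HALF_LIFE_YEARS
    return a0_mbq * exp(-decay_constant * years)

if __name__ == "__main__":
    # A 100 MBq source made in 2000 retains only a few percent by 2014.
    print(round(activity_today(100.0, date(2000, 1, 1), date(2014, 4, 24)), 3))
```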

  1. Characterization of the near-source population around five ...

    Science.gov (United States)

    Many ports are currently preparing for increased freight traffic, which may result in elevated local air pollution in areas near the port and freight transportation corridors. In this study, a geographical information system (GIS) analysis of areas surrounding five ports – Port of New York and New Jersey, Port of Virginia, Port of Savannah, Port of Miami, and Port of Houston – was conducted to characterize the population that might be affected by air emissions from the freight transportation network and to determine which sources had the potential to affect the most people. Defining "near-source" populations as those living within 300 m of the freight transportation network, namely the port and associated truck routes, railroads, and intermodal facilities (e.g., rail yards and warehouses), near-source populations ranged from 37,000 to over a million within 10 km of a port. At the ports considered, the population living within 300 m of the port boundary constituted only part of the total near-source population. Sensitive populations were also identified, such as the 81 day-care centers and K-12 schools in near-source environments within 2 km of the Port of New York and New Jersey. Minority groups constituted 55% to 85% of the near-source populations in the five port areas. For four of the five ports, the mean and median income of the near-source population was lower, and the minority percentage higher, than those of the population living adjacent to the near-source areas.

  2. SU-F-BRA-09: New Efficient Method for Xoft Axxent Electronic Brachytherapy Source Calibration by Pre-Characterizing Surface Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Pai, S [iCAD Inc., Los Gatos, CA (United States)

    2015-06-15

    Purpose: The objective is to improve the efficiency and efficacy of Xoft™ Axxent™ electronic brachytherapy (EBT) calibration of the source and surface applicators using the AAPM TG-61 formalism. Methods: The current method of Xoft EBT source calibration involves determination of the absolute dose rate of the source in each of the four conical surface applicators using in-air chamber measurements and the TG-61 formalism. We propose a simplified TG-61 calibration methodology involving an initial characterization of the surface cone applicators. This is accomplished by calibrating dose rates for all four surface applicator sets (for 10 sources), which establishes the "applicator output ratios" with respect to the selected reference applicator (the 20 mm applicator). After this initial characterization, Xoft™ Axxent™ source TG-61 calibration is carried out only in the reference applicator; using the established applicator output ratios, dose rates for the other applicators are calculated. Results: 200 sources and 8 surface applicator sets were calibrated cumulatively using a Standard Imaging A20 ion chamber in accordance with manufacturer-recommended protocols. Dose rates of the 10, 20, 35, and 50 mm applicators were normalized to the reference (20 mm) applicator. The data in Figure 1 indicate that the normalized dose rate variation for each applicator across all 200 sources is better than ±3%. The average output ratios are 1.11, 1.02, and 0.49 for the 10 mm, 35 mm, and 50 mm applicators, respectively, which are in good agreement with the manufacturer's published output ratios of 1.13, 1.02, and 0.49. Conclusion: Our measurements successfully demonstrate the accuracy of a new calibration method using a single surface applicator for Xoft EBT sources and deriving the dose rates of the other applicators. The accuracy of the calibration is improved, as this method minimizes source position variation inside the applicator during individual source calibrations. The new method significantly reduces the calibration time to less
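
The arithmetic of the proposed method is simple: one TG-61 measurement in the 20 mm reference applicator, multiplied by the pre-characterized output ratios, yields the dose rates for the other cones. A minimal sketch, treating the abstract's quoted ratios as a lookup table (the function name and units are our assumptions):

```python
# Minimal sketch of the pre-characterized output-ratio method described above.
# Ratios are the values quoted in the abstract, relative to the 20 mm cone;
# treating them as a static lookup table is our illustrative assumption.

OUTPUT_RATIO = {10: 1.11, 20: 1.00, 35: 1.02, 50: 0.49}  # applicator mm -> ratio

def applicator_dose_rates(ref_dose_rate_gy_min: float) -> dict:
    """Derive every applicator's dose rate from the single TG-61 measurement
    made in the 20 mm reference applicator."""
    return {mm: ref_dose_rate_gy_min * r for mm, r in OUTPUT_RATIO.items()}

# e.g. 0.6 Gy/min measured in the 20 mm reference cone
print(applicator_dose_rates(0.6))
```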

  3. Characterization of Greater-Than-Class C sealed sources. Volume 3, Sealed sources held by general licensees

    International Nuclear Information System (INIS)

    Harris, G.

    1994-09-01

    This is the third volume in a series of three volumes characterizing the population of sealed sources that may become greater-than-Class C low-level radioactive waste (GTCC LLW). In this volume, those sources possessed by general licensees are discussed. General-licensed devices may contain sealed sources with significant amounts of radioactive material. However, the devices are designed to be safe to use without special knowledge of radiological safety practices. Devices containing Am-241 or Cm-244 sources are most likely to become GTCC LLW after concentration averaging. This study estimates that there are about 16,000 GTCC devices held by general licensees; 15,000 of these contain Am-241 sources and 1,000 contain Cm-244 sources. Additionally, this study estimates that there are 1,600 GTCC devices sold to general licensees each year. However, due to a lack of available information on general licensees in Agreement States, these estimates are uncertain. This uncertainty is quantified in the low and high case estimates given in this report, which span approximately an order of magnitude.

  4. Game-theoretic perspective of Ping-Pong protocol

    Science.gov (United States)

    Kaur, Hargeet; Kumar, Atul

    2018-01-01

    We analyse the Ping-Pong protocol from the point of view of a game. The analysis helps us in understanding the different strategies of a sender and an eavesdropper to gain the maximum payoff in the game. The study presented here characterizes strategies that lead to different Nash equilibria. We further demonstrate the condition for Pareto optimality depending on the parameters used in the game. Moreover, we also analyse the LM05 protocol and compare it with the Ping-Pong protocol from the point of view of a generic two-way QKD game, with or without entanglement. Our results provide a deeper understanding of general two-way QKD protocols in terms of the security and payoffs of the different stakeholders in the protocol.
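
To make the equilibrium language concrete, here is a toy brute-force check for pure-strategy Nash equilibria in a generic two-player game; the payoff values are invented for illustration and are not the sender/eavesdropper payoffs derived in the paper.

```python
# Illustrative only: a generic 2x2 payoff matrix and a brute-force search for
# pure-strategy Nash equilibria (cells where neither player gains by deviating).
import itertools

# payoffs[(row, col)] = (sender_payoff, eavesdropper_payoff); made-up numbers
payoffs = {
    (0, 0): (3, 1), (0, 1): (1, 2),
    (1, 0): (2, 0), (1, 1): (2, 2),
}

def is_nash(r, c):
    s_pay, e_pay = payoffs[(r, c)]
    best_s = all(payoffs[(r2, c)][0] <= s_pay for r2 in (0, 1))  # row player
    best_e = all(payoffs[(r, c2)][1] <= e_pay for c2 in (0, 1))  # column player
    return best_s and best_e

print([cell for cell in itertools.product((0, 1), repeat=2) if is_nash(*cell)])
# -> [(1, 1)] for these example payoffs
```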

  5. Fecal bacteria source characterization and sensitivity analysis of SWAT 2005

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) version 2005 includes a microbial sub-model to simulate fecal bacteria transport at the watershed scale. The objectives of this study were to demonstrate methods to characterize fecal coliform bacteria (FCB) source loads and to assess the model sensitivity t...

  6. Diagnosis of pulmonary artery embolism. Comparison of single-source CT and 3rd generation dual-source CT using a dual-energy protocol regarding image quality and radiation dose

    Energy Technology Data Exchange (ETDEWEB)

    Petritsch, Bernhard; Kosmala, Aleksander; Gassenmeier, Tobias; Weng, Andreas Max; Veldhoen, Simon; Kunz, Andreas Steven; Bley, Thorsten Alexander [Univ. Hospital Wuerzburg (Germany). Inst. of Diagnostic and Interventional Radiology

    2017-06-15

    To compare radiation dose and subjective and objective image quality of 3rd generation dual-source CT (DSCT) and dual-energy CT (DECT) with conventional 64-slice single-source CT (SSCT) for pulmonary CTA. 180 pulmonary CTA studies were performed in three patient cohorts of 60 patients each. Group 1: conventional SSCT 120 kV (ref.); group 2: single-energy DSCT 100 kV (ref.); group 3: DECT 90/Sn150 kV. CTDIvol, DLP, and effective radiation dose were reported, and CT attenuation (HU) was measured at three central and peripheral levels. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Two readers assessed subjective image quality according to a five-point scale. Mean CTDIvol and DLP were significantly lower in the dual-energy group compared to the SSCT group (p < 0.001 [CTDIvol]; p < 0.001 [DLP]) and the DSCT group (p = 0.003 [CTDIvol]; p = 0.003 [DLP]), respectively. The effective dose in the DECT group was 2.79 ± 0.95 mSv, significantly lower than in the SSCT group (4.60 ± 1.68 mSv, p < 0.001) and the DSCT group (4.24 ± 2.69 mSv, p = 0.003). The SNR and CNR were significantly higher in the DSCT group (p < 0.001). Subjective image quality did not differ significantly among the three protocols and was rated good to excellent in 75% (135/180) of cases, with an inter-observer agreement of 80%. Dual-energy pulmonary CTA protocols of 3rd generation dual-source scanners allow for a significant reduction of radiation dose while providing excellent image quality and potential additional information by means of perfusion maps. Dual-energy CT with the 90/Sn150 kV configuration allows for significant dose reduction in pulmonary CTA. Subjective image quality was similar among the three evaluated CT protocols (64-slice SSCT, single-energy DSCT, 90/Sn150 kV DECT) and was rated good to excellent in 75% of cases. Dual-energy CT provides potential additional information by means of iodine distribution maps.

  7. SPP: A data base processor data communications protocol

    Science.gov (United States)

    Fishwick, P. A.

    1983-01-01

    The design and implementation of a data communications protocol for the Intel Data Base Processor (DBP) are described. The protocol is termed SPP (Service Port Protocol) since it enables data transfer between the host computer and the DBP service port. The protocol implementation is extensible in that it is explicitly layered and the protocol functionality is hierarchically organized. Extensive trace and performance capabilities have been supplied with the protocol software to permit optional, efficient monitoring of the data transfer between the host and the Intel data base processor. Machine independence was considered an important attribute during the design and implementation of SPP. The protocol source is fully commented and is included in Appendix A of this report.
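
To illustrate the layering idea only (the actual SPP wire format lives in the report's Appendix A and is not reproduced here), a sketch of a framed host-to-service-port exchange with an assumed header layout:

```python
# Sketch of a layered framing scheme in the spirit described above: a small
# fixed header wraps a service-level payload. The magic value, field widths,
# and byte order are illustrative assumptions, not the SPP wire format.
import struct

MAGIC = 0x5350  # "SP" in ASCII; assumed sentinel for this sketch

def encode_frame(seq: int, payload: bytes) -> bytes:
    # header: magic (2 B), sequence number (2 B), payload length (2 B)
    return struct.pack("!HHH", MAGIC, seq, len(payload)) + payload

def decode_frame(frame: bytes) -> tuple[int, bytes]:
    magic, seq, length = struct.unpack("!HHH", frame[:6])
    assert magic == MAGIC, "not a frame in this sketch's format"
    return seq, frame[6:6 + length]

seq, data = decode_frame(encode_frame(1, b"SELECT * FROM relation"))
print(seq, data)
```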

  8. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations were observed after sterilization; the average precision of the free flap modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Characterization of rotary-percussion drilling as a seismic-while-drilling source

    Science.gov (United States)

    Xiao, Yingjian; Hurich, Charles; Butt, Stephen D.

    2018-04-01

    This paper focuses on an evaluation of rotary-percussion drilling (RPD) as a seismic source. Two field experiments were conducted to characterize seismic sources from rocks of different strengths, i.e. weak shale and hard arkose. Characterization of RPD sources consists of spectral analysis and mean power measurements, along with field measurements of the source radiation patterns. Spectral analysis shows that an increase in rock strength increases the peak frequency and widens the bandwidth, which makes harder rock more viable for seismic-while-drilling purposes. Mean power analysis infers a higher magnitude of body waves in RPD than in conventional drilling. Within the horizontal plane, the observed P-wave energy radiation pattern partially confirms the theoretical radiation pattern under a single vertical bit vibration. However, a horizontal lobe of energy is observed close to orthogonal to the axial bit vibration. From analysis, this lobe is attributed to lateral bit vibration, which is not documented elsewhere for RPD. Within the horizontal plane, the observed radiation pattern of P-waves is generally consistent with a spherically-symmetric distribution of energy. In addition, polarization analysis is conducted on P-waves recorded at surface geophones to understand the particle motions. P-wave particle motions are predominantly in the vertical direction, showing the interference of the free surface.

  10. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 ¹²⁵I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published ¹²⁵I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h⁻¹ U⁻¹ (±1.73%) and 0.965 cGy h⁻¹ U⁻¹ (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than those of MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
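
For context, the TG-43 parameters named here (dose rate constant Λ, radial dose function g(r), and anisotropy function F(r,θ)) enter the standard AAPM TG-43 dose-rate equation, given below in its usual line-source form (standard formalism, not restated in the abstract):

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1\ \mathrm{cm},\quad \theta_0 = \pi/2
```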

  11. Characterization and source apportionment of water pollution in Jinjiang River, China.

    Science.gov (United States)

    Chen, Haiyang; Teng, Yanguo; Yue, Weifeng; Song, Liuting

    2013-11-01

    Characterizing water quality and identifying potential pollution sources could greatly improve our knowledge about human impacts on the river ecosystem. In this study, fuzzy comprehensive assessment (FCA), pollution index (PI), principal component analysis (PCA), and absolute principal component score-multiple linear regression (APCS-MLR) were combined to obtain a deeper understanding of the temporal-spatial characteristics and sources of water pollution, with a case study of the Jinjiang River, China. Measurement data were obtained for 17 water quality variables from 20 sampling sites in December 2010 (withered water period) and June 2011 (high flow period). FCA and PI were used to comprehensively estimate the water quality variables and compare temporal-spatial variations, respectively. Rotated PCA and the receptor model (APCS-MLR) revealed potential pollution sources and their corresponding contributions. Application results showed that the comprehensive application of various multivariate methods was effective for water quality assessment and management. In the withered water period, most sampling sites were assessed as low or moderate pollution, with the characteristic pollutants being the permanganate index and total nitrogen (TN), whereas 90% of sites were classified as high pollution in the high flow period, with higher TN and total phosphorus. Agricultural non-point sources, industrial wastewater discharge, and domestic sewage were identified as major pollution sources. Apportionment results revealed that most variables were influenced in a complex manner by industrial wastewater discharge and agricultural activities in the withered water period and were primarily dominated by agricultural runoff in the high flow period.
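
As a rough sketch of the APCS-MLR receptor-modelling step described above: PCA scores are made "absolute" by subtracting the score of a hypothetical zero-concentration sample, and each pollutant is then regressed on those scores so the regression coefficients apportion its concentration among sources. Synthetic data; the preprocessing choices are illustrative assumptions.

```python
# Rough sketch of APCS-MLR source apportionment on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.lognormal(size=(40, 5))                   # 40 samples, 5 quality variables
mean, std = X.mean(axis=0), X.std(axis=0)
Z = (X - mean) / std                              # standardized concentrations

pca = PCA(n_components=2).fit(Z)
scores = pca.transform(Z)
z0 = (0.0 - mean) / std                           # a true-zero "sample"
apcs = scores - pca.transform(z0.reshape(1, -1))  # absolute principal component scores

target = X[:, 0]                                  # apportion the first variable
reg = LinearRegression().fit(apcs, target)
contrib = reg.coef_ * apcs.mean(axis=0)           # mean contribution per source
print("mean source contributions:", contrib, "unexplained:", reg.intercept_)
```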

  12. Characterization of the radiation background at the Spallation Neutron Source

    International Nuclear Information System (INIS)

    DiJulio, Douglas D.; Cherkashyna, Nataliia; Scherzinger, Julius; Khaplanov, Anton; Pfeiffer, Dorothea; Cooper-Jensen, Carsten P.; Fissum, Kevin G.; Kanaki, Kalliopi; Kirstein, Oliver; Hall-Wilton, Richard J.; Bentley, Phillip M.; Ehlers, Georg; Gallmeier, Franz X.; Hornbach, Donald E.; Iverson, Erik B.; Newby, Robert J.

    2016-01-01

    We present a survey of the radiation background at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory, TN, USA during routine daily operation. A broad range of detectors was used to characterize primarily the neutron and photon fields throughout the facility. These include a WENDI-2 extended-range dosimeter, a Thermo Scientific NRD, an Arktis ⁴He detector, and a standard NaI photon detector. The information gathered from the detectors was used to map out the neutron dose rates throughout the facility and also the neutron dose rate and flux profiles of several different beamlines. The survey provides detailed information useful for developing future shielding concepts at spallation neutron sources, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. (paper)

  13. X-ray intensity and source size characterizations for the 25 kV upgraded Manson source at Sandia National Laboratories

    Energy Technology Data Exchange (ETDEWEB)

    Loisel, G., E-mail: gploise@sandia.gov; Lake, P.; Gard, P.; Dunham, G.; Nielsen-Weber, L.; Wu, M. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States); Norris, E. [Missouri University of Science and Technology, Rolla, Missouri 65409 (United States)

    2016-11-15

    At Sandia National Laboratories, the Manson model 5 x-ray source was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy-dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and the line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.

  14. X-ray intensity and source size characterizations for the 25 kV upgraded Manson source at Sandia National Laboratories.

    Science.gov (United States)

    Loisel, G; Lake, P; Gard, P; Dunham, G; Nielsen-Weber, L; Wu, M; Norris, E

    2016-11-01

    At Sandia National Laboratories, the Manson model 5 x-ray source was upgraded from 10 to 25 kV. The purpose of the upgrade is to drive higher characteristic photon energies with higher throughput. In this work we present characterization studies of the source size and the x-ray intensity when varying the source voltage for a series of K-, L-, and M-shell lines emitted from the Al, Y, and Au elements composing the anode. We used a 2-pinhole camera to measure the source size and an energy-dispersive detector to monitor the spectral content and intensity of the x-ray source. As the voltage increases, the source size is significantly reduced and the line intensity is increased for all three materials. We can take advantage of the smaller source size and higher source throughput to effectively calibrate the suite of Z Pulsed Power Facility crystal spectrometers.

  15. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers, using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within
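
As a reminder of the quantity being matched (a textbook definition, not specific to this paper): the HVL is the attenuator thickness that halves the beam's air kerma, so for an effective attenuation coefficient it satisfies

```latex
K(x) = K_0\, e^{-\mu_{\mathrm{eff}} x}, \qquad \mathrm{HVL} = \frac{\ln 2}{\mu_{\mathrm{eff}}}
```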

  16. Quantum key distribution with an unknown and untrusted source

    Science.gov (United States)

    Zhao, Yi; Qi, Bing; Lo, Hoi-Kwong

    2009-03-01

    The security of a standard bi-directional "plug & play" quantum key distribution (QKD) system has been an open question for a long time. This is mainly because its source is equivalently controlled by an eavesdropper, which means the source is unknown and untrusted. Qualitative discussion on this subject has been made previously. In this paper, we present the first quantitative security analysis on a general class of QKD protocols whose sources are unknown and untrusted. The security of the standard BB84 protocol, the weak+vacuum decoy state protocol, and the one-decoy decoy state protocol, with unknown and untrusted sources, is rigorously proved. We derive rigorous lower bounds to the secure key generation rates of the above three protocols. Our numerical simulation results show that QKD with an untrusted source gives a key generation rate that is close to that with a trusted source. Our work is published in [1]. [1] Y. Zhao, B. Qi, and H.-K. Lo, Phys. Rev. A, 77:052327 (2008).

  17. Surface, interface and bulk materials characterization using Indus synchrotron sources

    International Nuclear Information System (INIS)

    Phase, Deodatta M.

    2014-01-01

    Synchrotron radiation sources, providing intense, polarized, and stable beams of ultraviolet, soft, and hard x-ray photons, have a great impact on physics, chemistry, biology, materials science, and other areas of research. In particular, synchrotron radiation has revolutionized materials characterization techniques by enhancing their capabilities for investigating the structural, electronic, and magnetic properties of solids. The availability of synchrotron sources and the necessary instrumentation has led to considerable improvements in spectral resolution and intensities. As a result, the application scope of different materials characterization techniques has increased tremendously, particularly in the analysis of solid surfaces, interfaces, and bulk materials. The Indian synchrotron storage rings, Indus-1 and Indus-2, are in operation at RRCAT, Indore. The UGC-DAE CSR, with the help of university scientists, has designed and developed an angle-integrated photoelectron spectroscopy (AIPES) beam line on the 450 MeV Indus-1 storage ring and a polarized-light beam line for soft x-ray absorption spectroscopy (SXAS) on the 2.5 GeV Indus-2 storage ring. (author)

  18. Characterization of an alpha particle detector CR-39 exposed to a radium source

    International Nuclear Information System (INIS)

    Maino, Leandro Marcondes

    2009-01-01

    The main goal of this project is the characterization of a CR-39 alpha particle detector exposed to a radium source. Three detectors were exposed to a radium source and then chemically etched for different periods. We analyzed these samples and collected the information needed to verify that, for at least one of the etching treatments, the energies of the incident alpha particles were separated into distinct peaks, thus characterizing the CR-39 as an alpha spectrometer in the range 2.5 to 6.3 MeV. (author)

  19. Stress Wave Source Characterization: Impact, Fracture, and Sliding Friction

    Science.gov (United States)

    McLaskey, Gregory Christofer

    Rapidly varying forces, such as those associated with impact, rapid crack propagation, and fault rupture, are sources of stress waves which propagate through a solid body. This dissertation investigates how properties of a stress wave source can be identified or constrained using measurements recorded at an array of sensor sites located far from the source. This methodology is often called the method of acoustic emission and is useful for structural health monitoring and the noninvasive study of material behavior such as friction and fracture. In this dissertation, laboratory measurements of 1–300 mm wavelength stress waves are obtained by means of piezoelectric sensors which detect high frequency (10 kHz–3 MHz) motions of a specimen's surface, picometers to nanometers in amplitude. Then, stress wave source characterization techniques are used to study ball impact, drying shrinkage cracking in concrete, and the micromechanics of stick-slip friction of Poly(methyl methacrylate) (PMMA) and rock/rock interfaces. In order to quantitatively relate recorded signals obtained with an array of sensors to a particular stress wave source, wave propagation effects and sensor distortions must be accounted for. This is achieved by modeling the physics of wave propagation and transduction as linear transfer functions. Wave propagation effects are precisely modeled by an elastodynamic Green's function, sensor distortion is characterized by an instrument response function, and the stress wave source is represented with a force moment tensor. These transfer function models are verified though calibration experiments which employ two different mechanical calibration sources: ball impact and glass capillary fracture. The suitability of the ball impact source model, based on Hertzian contact theory, is experimentally validated for small (˜1 mm) balls impacting massive plates composed of four different materials: aluminum, steel, glass, and PMMA. Using this transfer function approach
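
The transfer-function model described here can be written compactly: the recorded voltage is the source force history convolved with the elastodynamic Green's function and the instrument response (our paraphrase in standard acoustic-emission notation):

```latex
v(t) = f(t) * G(t) * i(t)
```

where * denotes time-domain convolution, f the source force-time function, G the Green's function between source and sensor site, and i the instrument response.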

  20. Comparison of Protein Extracts from Various Unicellular Green Sources.

    Science.gov (United States)

    Teuling, Emma; Wierenga, Peter A; Schrama, Johan W; Gruppen, Harry

    2017-09-13

    Photosynthetic unicellular organisms are considered promising alternative protein sources. The aim of this study is to understand the extent to which these green sources differ with respect to their gross composition and how these differences affect the final protein isolate. Using mild isolation techniques, proteins were extracted and isolated from four different unicellular sources (Arthrospira (spirulina) maxima, Nannochloropsis gaditana, Tetraselmis impellucida, and Scenedesmus dimorphus). Despite differences in the protein contents of the sources (27-62% w/w) and in protein extractability (17-74% w/w), final protein isolates were obtained that had similar protein contents (62-77% w/w) and protein yields (3-9% w/w). Protein solubility as a function of pH differed between the sources, as did its dependence on ionic strength, especially at pH < 4.0. Overall, the characterization and extraction protocol used allows a relatively fast and well-described isolation of purified proteins from novel protein sources.

  1. Compact wireless control network protocol with fast path switching

    Directory of Open Access Journals (Sweden)

    Yasutaka Kawamoto

    2017-08-01

    Sensor network protocol stacks require the addition or adjustment of functions based on customer requirements. Sensor network protocols that require low delay and low packet error rate (PER), such as wireless control networks, often adopt time division multiple access (TDMA). However, it is difficult to add or adjust functions in protocol stacks that use TDMA methods. Therefore, to add or adjust functions easily, we propose NES-SOURCE, a compact wireless control network protocol with a fast path-switching function. NES-SOURCE is implemented using carrier sense multiple access/collision avoidance (CSMA/CA) rather than TDMA. Wireless control networks that use TDMA prevent communication failure by duplicating the communication path. If CSMA/CA networks use duplicate paths, collisions occur frequently, and communication will fail. NES-SOURCE switches paths quickly when communication fails, which reduces the effect of communication failures. Since NES-SOURCE is implemented using CSMA/CA rather than TDMA, the implementation scale is less than one-half that of existing network stacks. Furthermore, since NES-SOURCE's code complexity is low, functions can be added or adjusted easily and quickly. Communication failures occur owing to changes in the communication environment and collisions. Experimental results demonstrate that the proposed NES-SOURCE path-switching function reduces the number of communication failures when the communication environment changes owing to human movement and other factors. Furthermore, we clarify the relationships among the probability of a changing communication environment, the collision occurrence rate, and the PER of NES-SOURCE.
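
A minimal sketch of the fast path-switching idea follows: keep a primary and a backup next hop and swap to the backup as soon as a send fails, instead of duplicating traffic over both paths as TDMA schemes do. The class and method names and the retry policy are our assumptions, not the NES-SOURCE implementation.

```python
# Illustrative path-switching node: try the preferred next hop first, fall back
# to the backup on failure, and promote the backup if it succeeds.
class PathSwitchingNode:
    def __init__(self, primary: str, backup: str):
        self.paths = [primary, backup]   # candidate next hops, in preference order

    def send(self, packet: bytes, transmit) -> bool:
        """transmit(next_hop, packet) -> bool models one CSMA/CA unicast attempt."""
        for attempt, next_hop in enumerate(self.paths):
            if transmit(next_hop, packet):
                if attempt:              # the backup worked: make it primary
                    self.paths.reverse()
                return True
        return False                     # both paths failed

node = PathSwitchingNode("relay-A", "relay-B")
flaky = lambda hop, pkt: hop == "relay-B"        # relay-A is down in this toy run
print(node.send(b"ctrl", flaky), node.paths[0])  # -> True relay-B
```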

  2. Separable states improve protocols with finite randomness

    International Nuclear Information System (INIS)

    Bobby, Tan Kok Chuan; Paterek, Tomasz

    2014-01-01

    It is known from Bell's theorem that quantum predictions for some entangled states cannot be mimicked using local hidden variable (LHV) models. From a computer science perspective, LHV models may be interpreted as classical computers operating on a potentially infinite number of correlated bits originating from a common source. As such, Bell inequality violations achieved through entangled states are able to characterize the quantum advantage of certain tasks, so long as the task itself imposes no restriction on the availability of correlated bits. However, if the number of shared bits is limited, additional constraints are placed on the possible LHV models, and separable, i.e., disentangled, states may become a useful resource. Bell violations are therefore no longer necessary to achieve a quantum advantage. Here we show that, in particular, separable states improve the so-called random access codes, a class of communication problems wherein one party tries to read a portion of the data held by another, distant party in the presence of finite shared randomness and limited classical communication. We also show how the bias of classical bits can be used to avoid wrong answers in order to achieve the optimal classical protocol, and how the advantage of quantum protocols is linked to quantum discord. (paper)
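
To ground the random access code task mentioned here: one party holds two bits and may send a single classical bit, and the receiver must output whichever of the two bits is requested. A toy simulation of a simple classical strategy without shared randomness, which reaches the known 0.75 average success rate (illustrative only; the paper's quantum and discord analysis is not reproduced):

```python
# Toy 2->1 random access code: Alice forwards her first bit; Bob echoes it.
# Averaged over all inputs and requests this succeeds 75% of the time.
import itertools

def alice(x0: int, x1: int) -> int:
    return x0                      # the one-bit message

def bob(message: int, wanted: int) -> int:
    return message                 # Bob has no better guess than the message

wins = total = 0
for x0, x1, wanted in itertools.product((0, 1), repeat=3):
    guess = bob(alice(x0, x1), wanted)
    wins += guess == (x0, x1)[wanted]
    total += 1
print(wins / total)                # -> 0.75
```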

  3. Characterizing the oxygen isotopic composition of phosphate sources to aquatic ecosystems

    Science.gov (United States)

    Young, M.B.; McLaughlin, K.; Kendall, C.; Stringfellow, W.; Rollog, M.; Elsbury, K.; Donald, E.; Paytan, A.

    2009-01-01

    The oxygen isotopic composition of dissolved inorganic phosphate (δ18Op) in many aquatic ecosystems is not in isotopic equilibrium with ambient water and, therefore, may reflect the source δ18Op. Identification of phosphate sources to water bodies is critical for designing best management practices for phosphate load reduction to control eutrophication. In order for δ18Op to be a useful tool for source tracking, the δ18Op of phosphate sources must be distinguishable from one another; however, the δ18Op of potential sources has not been well characterized. We measured the δ18Op of a variety of known phosphate sources, including fertilizers, semiprocessed phosphorite ore, particulate aerosols, detergents, leachates of vegetation, soil, animal feces, and wastewater treatment plant effluent. We found a considerable range of δ18Op values (from +8.4 to +24.9‰) for the various sources, and statistically significant differences were found between several of the source types. δ18Op measured in three different freshwater systems was generally not in equilibrium with ambient water. Although there is overlap in δ18Op values among the groups of samples, our results indicate that some sources are isotopically distinct and δ18Op can be used for identifying phosphate sources to aquatic systems.
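
For context on the equilibrium test mentioned above: a widely used empirical phosphate-water equilibrium relation (Longinelli and Nuti, 1973) links temperature to the phosphate-water isotopic offset, so deviations from it flag a preserved source signal rather than in-situ equilibration:

```latex
T(^{\circ}\mathrm{C}) \approx 111.4 - 4.3\,\bigl(\delta^{18}O_{p} - \delta^{18}O_{w}\bigr)
```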

  4. Hiding the Source Based on Limited Flooding for Sensor Networks.

    Science.gov (United States)

    Chen, Juan; Lin, Zhengkui; Hu, Ying; Wang, Bailing

    2015-11-17

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.
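
A sketch of the phantom-selection intuition behind SLP follows: a hop-limited flood marks a boundary of candidate nodes, and phantoms are chosen both far from the source and far from each other, so traced routes do not converge. The grid topology, flood radius, and selection rule are illustrative assumptions, not the protocol's exact mechanics.

```python
# Toy phantom selection via limited flooding on a grid of sensors.
import random

random.seed(7)
GRID, FLOOD_RADIUS = 25, 8
source = (12, 12)

def hops(a, b):
    """Manhattan hop distance on the grid."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Nodes exactly FLOOD_RADIUS hops away form the flood boundary.
boundary = [(x, y) for x in range(GRID) for y in range(GRID)
            if hops((x, y), source) == FLOOD_RADIUS]

phantoms = [random.choice(boundary)]
for _ in range(10_000):                  # capped random search
    if len(phantoms) == 4:
        break
    candidate = random.choice(boundary)
    # Keep only candidates widely separated from already chosen phantoms.
    if all(hops(candidate, p) >= FLOOD_RADIUS for p in phantoms):
        phantoms.append(candidate)
print(phantoms)
```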

  5. Pseudodynamic Source Characterization for Strike-Slip Faulting Including Stress Heterogeneity and Super-Shear Ruptures

    KAUST Repository

    Mena, B.

    2012-08-08

    Reliable ground‐motion prediction for future earthquakes depends on the ability to simulate realistic earthquake source models. Though dynamic rupture calculations have recently become more popular, they are still computationally demanding. An alternative is to invoke the framework of pseudodynamic (PD) source characterizations that use simple relationships between kinematic and dynamic source parameters to build physically self‐consistent kinematic models. Based on the PD approach of Guatteri et al. (2004), we propose new relationships for PD models for moderate‐to‐large strike‐slip earthquakes that include local supershear rupture speed due to stress heterogeneities. We conduct dynamic rupture simulations using stochastic initial stress distributions to generate a suite of source models in the magnitude range Mw 6–8. This set of models shows that local supershear rupture speed prevails for all earthquake sizes, and that the local rise‐time distribution is not controlled by the overall fault geometry, but rather by local stress changes on the faults. Based on these findings, we derive a new set of relations for the proposed PD source characterization that accounts for earthquake size, buried and surface ruptures, and includes local rise‐time variations and supershear rupture speed. By applying the proposed PD source characterization to several well‐recorded past earthquakes, we verify that significant improvements in fitting synthetic ground motions to observed ones are achieved when comparing our new approach with the model of Guatteri et al. (2004). The proposed PD methodology can be implemented into ground‐motion simulation tools for more physically reliable prediction of shaking in future earthquakes.

  6. Exploring REACH as a potential data source for characterizing ecotoxicity in life cycle assessment

    DEFF Research Database (Denmark)

    Müller, Nienke; de Zwart, Dick; Hauschild, Michael Zwicky

    2017-01-01

    Toxicity models in life cycle impact assessment (LCIA) currently only characterize a small fraction of marketed substances, mostly because of limitations in the underlying ecotoxicity data. One approach to improve the current data situation in LCIA is to identify new data sources, such as the European Registration, Evaluation, Authorisation, and Restriction of Chemicals (REACH) database. The present study explored REACH as a potential data source for LCIA based on matching reported ecotoxicity data for substances that are currently also included in the United Nations Environment Programme/Society for Environmental Toxicology and Chemistry (UNEP/SETAC) scientific consensus model USEtox for characterizing toxicity impacts. Data are evaluated with respect to number of data points, reported reliability, and test duration, and are compared with data listed in USEtox at the level of the hazardous concentration for 50% of the covered species (HC50).
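
For orientation (standard USEtox convention, not stated in the abstract): the HC50 values being compared feed directly into USEtox's freshwater ecotoxicity effect factor,

```latex
\mathrm{EF} = \frac{0.5}{\mathrm{HC50}}
\quad \left[\mathrm{PAF \cdot m^{3} \cdot kg^{-1}}\right]
```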

  7. An Improved PRoPHET Routing Protocol in Delay Tolerant Network

    Directory of Open Access Journals (Sweden)

    Seung Deok Han

    2015-01-01

    In a delay tolerant network (DTN), an end-to-end path is not guaranteed and packets are delivered from a source node to a destination node via store-carry-forward based routing. In a DTN, a source node or an intermediate node stores packets in a buffer and carries them while it moves around. These packets are forwarded to other nodes based on predefined criteria and are finally delivered to a destination node via multiple hops. In this paper, we improve the dissemination speed of the PRoPHET (Probabilistic Routing Protocol using History of Encounters and Transitivity) protocol by employing the epidemic protocol for disseminating a message m if the forwarding counter and hop counter values are smaller than or equal to threshold values. The performance of the proposed protocol was analyzed in terms of delivery probability, average delay, and overhead ratio. Numerical results show that the proposed protocol can improve the delivery probability, average delay, and overhead ratio of the PRoPHET protocol by appropriately selecting the threshold forwarding counter and threshold hop counter values.
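
A minimal sketch of the hybrid rule described above: flood epidemically while a message is "young" (low forwarding and hop counters), then fall back to PRoPHET's delivery-predictability comparison. The threshold values and field names are illustrative assumptions.

```python
# Hybrid epidemic/PRoPHET forwarding decision (illustrative sketch).
from dataclasses import dataclass

F_THRESH, H_THRESH = 2, 3        # epidemic-phase limits (assumed values)

@dataclass
class Message:
    forwarding_counter: int
    hop_counter: int

def should_forward(msg: Message, p_self: float, p_neighbor: float) -> bool:
    # Phase 1: epidemic spreading while the message is young.
    if msg.forwarding_counter <= F_THRESH and msg.hop_counter <= H_THRESH:
        return True
    # Phase 2: classic PRoPHET - forward only toward nodes more likely
    # to encounter the destination.
    return p_neighbor > p_self

print(should_forward(Message(1, 2), p_self=0.6, p_neighbor=0.2))  # True (epidemic)
print(should_forward(Message(5, 4), p_self=0.6, p_neighbor=0.2))  # False (PRoPHET)
```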

  8. Characterization of the plasma-switch interaction in the LBL HIF ion source

    International Nuclear Information System (INIS)

    Hewett, D.W.; Rutkowski, H.L.

    1990-01-01

    A new way to characterize the performance of the LBL HIF ion source has been found. In the LBL source, ions are drawn from an arc-generated plasma reservoir in which the electrons are confined by a negatively biased "switch" mesh. Stagnation of the plasma is prevented by absorption of the excess ion flow on this mesh. The ion beam is generated by an external negative voltage that provides Child-Langmuir extraction of the ions through the switch mesh. We elucidate the physics requirements of the source and deduce the switch mesh parameters needed for successful operation. 2 refs., 2 figs

  9. Nucleic acid protocols: Extraction and optimization

    Directory of Open Access Journals (Sweden)

    Saeed El-Ashram

    2016-12-01

    Yield and quality are fundamental concerns for any researcher during nucleic acid extraction. Here, we describe a simplified, semi-unified, effective, and toxic-material-free protocol for extracting DNA and RNA from different prokaryotic and eukaryotic sources, exploiting the physical and chemical properties of nucleic acids. Furthermore, this protocol shows that DNA and RNA are under triple protection (i.e., EDTA, SDS, and NaCl) during the lysis step; this environment inhibits RNase, so the DNA cannot be freed of RNA during lysis, and likewise inhibits DNase, protecting the DNA from degradation. Therefore, the complete removal of RNA by RNase is achieved when RNase is added after DNA extraction, which gives optimal quality with any protocol. Similarly, DNA contamination in an isolated RNA is degraded by DNase to obtain high-quality RNA. Our protocol is the protocol of choice in terms of simplicity, recovery time, environmental safety, amount, purity, and PCR and RT-PCR applicability.

  10. Characterization of radioactive orphan sources by gamma spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Cruz W, H., E-mail: wcruz@ipen.gob.pe [Instituto Peruano de Energia Nuclear (PGRR/IPEN), Lima (Peru). Planta de Gestion de Residuos Radiactivos

    2013-07-01

    Sealed radioactive sources are widely used in industry. They must be under permanent control and must be registered with the Technical Office of the National Authority (OTAN). However, at times the presence of abandoned sealed sources, unknown to their owners, has been identified. These sources are called 'orphan sources'. Such sources represent a high potential risk, because they can trigger accidents with dire consequences depending on the activity and chemical form of the radioisotope. This paper describes the process and the actions taken to characterize two orphan radioactive sources from the Aceros Arequipa smelter. For the characterization we used a gamma spectrometry system with a 3″ x 3″ NaI(Tl) detector and a Nucleus PCA-II multichannel analyzer. The radioisotope identified was cesium-137 (¹³⁷Cs) in both cases. Fortunately, the sources maintained their integrity; otherwise they would have generated significant contamination, considering the chemical form of the radioisotope and its easy dispersion. (author)
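
A toy illustration of the identification step follows: locate the dominant photopeak in an energy-calibrated spectrum and match it against a small nuclide library. The emission energies are standard lines; the spectrum itself is synthetic, and the matching rule is a deliberate simplification of real spectroscopy software.

```python
# Toy gamma-spectrometry identification: dominant-peak lookup on synthetic data.
import numpy as np

LIBRARY_KEV = {661.7: "Cs-137", 1173.2: "Co-60", 1332.5: "Co-60", 59.5: "Am-241"}

energies = np.linspace(0, 2000, 4096)                             # keV per channel
spectrum = 100 * np.exp(-0.5 * ((energies - 661.7) / 20) ** 2)    # one photopeak
spectrum += np.random.default_rng(1).poisson(0.5, energies.size)  # counting noise

peak_kev = energies[np.argmax(spectrum)]
nearest = min(LIBRARY_KEV, key=lambda e: abs(e - peak_kev))
print(f"dominant peak at {peak_kev:.1f} keV -> {LIBRARY_KEV[nearest]}")
```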

  11. Hiding the Source Based on Limited Flooding for Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan Chen

    2015-11-01

    Wireless sensor networks are widely used to monitor valuable objects such as rare animals or armies. Once an object is detected, the source, i.e., the sensor nearest to the object, generates and periodically sends a packet about the object to the base station. Since attackers can capture the object by localizing the source, many protocols have been proposed to protect source location. Instead of transmitting the packet to the base station directly, typical source location protection protocols first transmit packets randomly for a few hops to a phantom location, and then forward the packets to the base station. The problem with these protocols is that the generated phantom locations are usually not only near the true source but also close to each other. As a result, attackers can easily trace a route back to the source from the phantom locations. To address the above problem, we propose a new protocol for source location protection based on limited flooding, named SLP. Compared with existing protocols, SLP can generate phantom locations that are not only far away from the source, but also widely distributed. It improves source location security significantly with low communication cost. We further propose a protocol, namely SLP-E, to protect source location against more powerful attackers with wider fields of vision. The performance of our SLP and SLP-E are validated by both theoretical analysis and simulation results.

  12. Molecular characterization of bacteriophages for microbial source tracking in Korea.

    Science.gov (United States)

    Lee, Jung Eun; Lim, Mi Young; Kim, Sei Yoon; Lee, Sunghee; Lee, Heetae; Oh, Hyun-Myung; Hur, Hor-Gil; Ko, Gwangpyo

    2009-11-01

    We investigated coliphages from various fecal sources, including humans and animals, for microbial source tracking in South Korea. Both somatic and F+-specific coliphages were isolated from 43 fecal samples from farms, wild animal habitats, and human wastewater plants. Somatic coliphages were more prevalent and abundant than F+ coliphages in all of the tested fecal samples. We further characterized 311 F+ coliphage isolates using RNase sensitivity assays, PCR and reverse transcription-PCR, and nucleic acid sequencing. Phylogenetic analyses were performed based on the partial nucleic acid sequences of 311 F+ coliphages from various sources. F+ RNA coliphages were most prevalent among geese (95%) and were least prevalent in cows (5%). Among the genogroups of F+ RNA coliphages, most F+ coliphages isolated from animal fecal sources belonged to either group I or group IV, and most from human wastewater sources were in group II or III. Some of the group I coliphages were present in both human and animal source samples. F+ RNA coliphages isolated from various sources were divided into two main clusters. All F+ RNA coliphages isolated from human wastewater were grouped with Qbeta-like phages, while phages isolated from most animal sources were grouped with MS2-like phages. UniFrac significance statistical analyses revealed significant differences between human and animal bacteriophages. In the principal coordinate analysis (PCoA), F+ RNA coliphages isolated from human waste were distinctively separate from those isolated from other animal sources. However, F+ DNA coliphages were not significantly different or separate in the PCoA. These results demonstrate that proper analysis of F+ RNA coliphages can effectively distinguish fecal sources.

  13. In silico toxicology protocols.

    Science.gov (United States)

    Myatt, Glenn J; Ahlberg, Ernst; Akahori, Yumi; Allen, David; Amberg, Alexander; Anger, Lennart T; Aptula, Aynur; Auerbach, Scott; Beilke, Lisa; Bellion, Phillip; Benigni, Romualdo; Bercu, Joel; Booth, Ewan D; Bower, Dave; Brigo, Alessandro; Burden, Natalie; Cammerer, Zoryana; Cronin, Mark T D; Cross, Kevin P; Custer, Laura; Dettwiler, Magdalena; Dobo, Krista; Ford, Kevin A; Fortin, Marie C; Gad-McDonald, Samantha E; Gellatly, Nichola; Gervais, Véronique; Glover, Kyle P; Glowienke, Susanne; Van Gompel, Jacky; Gutsell, Steve; Hardy, Barry; Harvey, James S; Hillegass, Jedd; Honma, Masamitsu; Hsieh, Jui-Hua; Hsu, Chia-Wen; Hughes, Kathy; Johnson, Candice; Jolly, Robert; Jones, David; Kemper, Ray; Kenyon, Michelle O; Kim, Marlene T; Kruhlak, Naomi L; Kulkarni, Sunil A; Kümmerer, Klaus; Leavitt, Penny; Majer, Bernhard; Masten, Scott; Miller, Scott; Moser, Janet; Mumtaz, Moiz; Muster, Wolfgang; Neilson, Louise; Oprea, Tudor I; Patlewicz, Grace; Paulino, Alexandre; Lo Piparo, Elena; Powley, Mark; Quigley, Donald P; Reddy, M Vijayaraj; Richarz, Andrea-Nicole; Ruiz, Patricia; Schilter, Benoit; Serafimova, Rositsa; Simpson, Wendy; Stavitskaya, Lidiya; Stidl, Reinhard; Suarez-Rodriguez, Diana; Szabo, David T; Teasdale, Andrew; Trejo-Martin, Alejandra; Valentin, Jean-Pierre; Vuorinen, Anna; Wall, Brian A; Watts, Pete; White, Angela T; Wichard, Joerg; Witt, Kristine L; Woolley, Adam; Woolley, David; Zwickl, Craig; Hasselgren, Catrin

    2018-04-17

    The present publication surveys several applications of in silico (i.e., computational) toxicology approaches across different industries and institutions. It highlights the need to develop standardized protocols when conducting toxicity-related predictions. This contribution articulates the information needed for protocols to support in silico predictions for major toxicological endpoints of concern (e.g., genetic toxicity, carcinogenicity, acute toxicity, reproductive toxicity, developmental toxicity) across several industries and regulatory bodies. Such novel in silico toxicology (IST) protocols, when fully developed and implemented, will ensure in silico toxicological assessments are performed and evaluated in a consistent, reproducible, and well-documented manner across industries and regulatory bodies to support wider uptake and acceptance of the approaches. The development of IST protocols is an initiative developed through a collaboration among an international consortium to reflect the state-of-the-art in in silico toxicology for hazard identification and characterization. A general outline for describing the development of such protocols is included and it is based on in silico predictions and/or available experimental data for a defined series of relevant toxicological effects or mechanisms. The publication presents a novel approach for determining the reliability of in silico predictions alongside experimental data. In addition, we discuss how to determine the level of confidence in the assessment based on the relevance and reliability of the information. Copyright © 2018. Published by Elsevier Inc.

  14. FERMI LARGE AREA TELESCOPE FIRST SOURCE CATALOG

    International Nuclear Information System (INIS)

    Abdo, A. A.; Ackermann, M.; Ajello, M.; Allafort, A.; Bechtol, K.; Berenji, B.; Blandford, R. D.; Bloom, E. D.; Antolini, E.; Bonamente, E.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Bellazzini, R.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bisello, D.; Baughman, B. M.; Belli, F.

    2010-01-01

    We present a catalog of high-energy gamma-ray sources detected by the Large Area Telescope (LAT), the primary science instrument on the Fermi Gamma-ray Space Telescope (Fermi), during the first 11 months of the science phase of the mission, which began on 2008 August 4. The First Fermi-LAT catalog (1FGL) contains 1451 sources detected and characterized in the 100 MeV to 100 GeV range. Source detection was based on the average flux over the 11 month period, and the threshold likelihood Test Statistic is 25, corresponding to a significance of just over 4σ. The 1FGL catalog includes source location regions, defined in terms of elliptical fits to the 95% confidence regions and power-law spectral fits as well as flux measurements in five energy bands for each source. In addition, monthly light curves are provided. Using a protocol defined before launch we have tested for several populations of gamma-ray sources among the sources in the catalog. For individual LAT-detected sources we provide firm identifications or plausible associations with sources in other astronomical catalogs. Identifications are based on correlated variability with counterparts at other wavelengths, or on spin or orbital periodicity. For the catalogs and association criteria that we have selected, 630 of the sources are unassociated. Care was taken to characterize the sensitivity of the results to the model of interstellar diffuse gamma-ray emission used to model the bright foreground, with the result that 161 sources at low Galactic latitudes and toward bright local interstellar clouds are flagged as having properties that are strongly dependent on the model or as potentially being due to incorrectly modeled structure in the Galactic diffuse emission.
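
    For readers unfamiliar with the Test Statistic convention, the sketch below shows one common way to translate the TS = 25 threshold into a Gaussian significance, assuming the TS follows a chi-squared distribution with four degrees of freedom (two positional and two spectral parameters); the catalog's own calibration may differ in detail.

        # Translate a likelihood Test Statistic into a Gaussian significance,
        # assuming TS ~ chi-squared with 4 degrees of freedom (an assumption;
        # the catalog's own calibration may differ).
        from scipy import stats

        def ts_to_sigma(ts, dof=4):
            p_value = stats.chi2.sf(ts, dof)   # P(TS > ts) under the null
            return stats.norm.isf(p_value)     # one-sided Gaussian equivalent

        print(ts_to_sigma(25.0))  # ~3.9 (dof=4), ~4.2 (dof=3): "just over 4 sigma"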

  15. Mass spectrometric characterization of a pyrolytic radical source using femtosecond ionization

    Energy Technology Data Exchange (ETDEWEB)

    Frey, H M; Beaud, P; Mischler, B; Radi, P P; Tzannis, A P; Gerber, T [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1997-06-01

    As reactive species, radicals play an important role in combustion chemistry. In contrast to atmospheric flames, where spectra are congested due to high vibrational and rotational excitation, experiments in the cold environment of a molecular beam (MB) yield clean spectra that can be easily attributed to one species by Resonantly Enhanced Multi-Photon Ionization (REMPI). A pyrolytic radical source has been set up. To characterize the efficiency of the source, 'soft' ionization with femtosecond pulses is applied, which results in less fragmentation, simplifying the interpretation of the mass spectrum. (author) figs., tabs., refs.

  16. Dual source CT imaging

    International Nuclear Information System (INIS)

    Seidensticker, Peter R.; Hofmann, Lars K.

    2008-01-01

    The introduction of Dual Source Computed Tomography (DSCT) in 2005 was an evolutionary leap in the field of CT imaging. Two x-ray sources operated simultaneously enable heart-rate independent temporal resolution and routine spiral dual energy imaging. The precise delivery of contrast media is a critical part of the contrast-enhanced CT procedure. This book provides an introduction to DSCT technology and to the basics of contrast media administration followed by 25 in-depth clinical scan and contrast media injection protocols. All were developed in consensus by selected physicians on the Dual Source CT Expert Panel. Each protocol is complemented by individual considerations, tricks and pitfalls, and by clinical examples from several of the world's best radiologists and cardiologists. This extensive CME-accredited manual is intended to help readers to achieve consistently high image quality, optimal patient care, and a solid starting point for the development of their own unique protocols. (orig.)

  17. Characterization of γ-ray background at IMAT beamline of ISIS Spallation Neutron Source

    Science.gov (United States)

    Festa, G.; Andreani, C.; Arcidiacono, L.; Burca, G.; Kockelmann, W.; Minniti, T.; Senesi, R.

    2017-08-01

    The environmental γ-ray background on the IMAT beamline at the ISIS Spallation Neutron Source, Target Station 2, is characterized via γ spectroscopy. The measurements include gamma exposure at the imaging detector position, along with the gamma background inside the beamline. Present results are discussed and compared with previous measurements recorded at the INES and VESUVIO beamlines operating at Target Station 1. They provide new input for expanding and optimizing the PGAA experimental capability at the ISIS neutron source for the investigation of materials, engineering components and cultural heritage objects.

  18. Characterization of γ-ray background at IMAT beamline of ISIS Spallation Neutron Source

    International Nuclear Information System (INIS)

    Festa, G.; Andreani, C.; Arcidiacono, L.; Senesi, R.; Burca, G.; Kockelmann, W.; Minniti, T.

    2017-01-01

    The environmental γ-ray background on the IMAT beamline at the ISIS Spallation Neutron Source, Target Station 2, is characterized via γ spectroscopy. The measurements include gamma exposure at the imaging detector position, along with the gamma background inside the beamline. Present results are discussed and compared with previous measurements recorded at the INES and VESUVIO beamlines operating at Target Station 1. They provide new input for expanding and optimizing the PGAA experimental capability at the ISIS neutron source for the investigation of materials, engineering components and cultural heritage objects.

  19. Performance Analysis of On-Demand Routing Protocols in Wireless Mesh Networks

    Directory of Open Access Journals (Sweden)

    Arafatur RAHMAN

    2009-01-01

    Wireless Mesh Networks (WMNs) have recently gained a lot of popularity due to their rapid deployment and instant communication capabilities. WMNs are dynamically self-organizing, self-configuring and self-healing, with the nodes in the network automatically establishing an ad hoc network and preserving the mesh connectivity. Designing a routing protocol for WMNs requires several aspects to be considered, such as wireless networks, fixed applications, mobile applications, scalability, better performance metrics, efficient routing within infrastructure, load balancing, throughput enhancement, interference, robustness, etc. To support communication, various routing protocols are designed for various networks (e.g. ad hoc, sensor, wired). However, not all of these protocols are suitable for WMNs, because of the architectural differences among the networks. In this paper, a detailed simulation-based performance study and analysis is performed on reactive routing protocols to verify their suitability for such networks. The Ad Hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR) and Dynamic MANET On-demand (DYMO) routing protocols are considered as representatives of reactive routing protocols. The performance differentials are investigated using varying traffic load and number of sources. Based on the simulation results, recommendations are also made on how the performance of each protocol can be improved.

  20. Improvement In MAODV Protocol Using Location Based Routing Protocol

    Directory of Open Access Journals (Sweden)

    Kaur Sharnjeet

    2016-01-01

    Energy saving is difficult in wireless sensor networks (WSNs) due to limited resources. Each node in a WSN is constrained by its limited battery power. This energy is depleted over time through packet transmission and reception. Energy management techniques are necessary to minimize the total power consumption of all the nodes in the network in order to maximize its life span. Our proposed protocol, Location Based Routing (LBR), aims to find a path which utilizes the minimum energy to transmit packets between the source and the destination. The required energy for the transmission and reception of data is evaluated in MATLAB. LBR is implemented on the Multicast Ad hoc On Demand Distance Vector routing protocol (MAODV) to manage the energy consumption in the transmission and reception of data. Simulation results of LBR show that the energy consumption has been reduced.

  1. A Novel Multi-Approach Protocol for the Characterization of Occupational Exposure to Organic Dust-Swine Production Case Study.

    Science.gov (United States)

    Viegas, Carla; Faria, Tiago; Monteiro, Ana; Caetano, Liliana Aranha; Carolino, Elisabete; Quintal Gomes, Anita; Viegas, Susana

    2017-12-27

    Swine production has been associated with health risks and workers' symptoms. In Portugal, as in other countries, large-scale swine production involves several activities in the swine environment that require direct intervention, increasing workers' exposure to organic dust. This study describes an updated protocol for the assessment of occupational exposure to organic dust, to unveil an accurate scenario regarding occupational and environmental risks for workers' health. The particle size distribution was characterized regarding mass concentration in five different size ranges (PM0.5, PM1, PM2.5, PM5, PM10). Bioburden was assessed, by both active and passive sampling methods, in air, on surfaces, floor covering and feed samples, and analyzed through culture-based methods and qPCR. Smaller size range particles exhibited the highest counts, with indoor particles showing higher particle counts and mass concentration than outdoor particles. The limit values suggested for total bacteria load were surpassed in 35.7% (10 out of 28) of samples and for fungi in 65.5% (19 out of 29) of samples. Among Aspergillus genera, section Circumdati was the most prevalent (55%) on malt extract agar (MEA) and Versicolores the most identified (50%) on dichloran glycerol (DG18). The results document a wide characterization of occupational exposure to organic dust on swine farms, being useful for policies and stakeholders to act to improve workers' safety. The methods of sampling and analysis employed were the most suitable considering the purpose of the study and should be adopted as a protocol to be followed in future exposure assessments in this occupational environment.

  2. A Novel Multi-Approach Protocol for the Characterization of Occupational Exposure to Organic Dust—Swine Production Case Study

    Directory of Open Access Journals (Sweden)

    Carla Viegas

    2017-12-01

    Swine production has been associated with health risks and workers’ symptoms. In Portugal, as in other countries, large-scale swine production involves several activities in the swine environment that require direct intervention, increasing workers’ exposure to organic dust. This study describes an updated protocol for the assessment of occupational exposure to organic dust, to unveil an accurate scenario regarding occupational and environmental risks for workers’ health. The particle size distribution was characterized regarding mass concentration in five different size ranges (PM0.5, PM1, PM2.5, PM5, PM10). Bioburden was assessed, by both active and passive sampling methods, in air, on surfaces, floor covering and feed samples, and analyzed through culture-based methods and qPCR. Smaller size range particles exhibited the highest counts, with indoor particles showing higher particle counts and mass concentration than outdoor particles. The limit values suggested for total bacteria load were surpassed in 35.7% (10 out of 28) of samples and for fungi in 65.5% (19 out of 29) of samples. Among Aspergillus genera, section Circumdati was the most prevalent (55%) on malt extract agar (MEA) and Versicolores the most identified (50%) on dichloran glycerol (DG18). The results document a wide characterization of occupational exposure to organic dust on swine farms, being useful for policies and stakeholders to act to improve workers’ safety. The methods of sampling and analysis employed were the most suitable considering the purpose of the study and should be adopted as a protocol to be followed in future exposure assessments in this occupational environment.

  3. Development of an eco-protocol for seaweed chlorophylls extraction and possible applications in dye sensitized solar cells

    International Nuclear Information System (INIS)

    Armeli Minicante, S; Ambrosi, E; Back, M; Barichello, J; Cattaruzza, E; Gonella, F; Scantamburlo, E; Trave, E

    2016-01-01

    Seaweeds are a reserve of natural dyes (chlorophylls a, b and c), characterized by low cost and easy supply, without potential environmental load in terms of land subtraction, and also complying with the requirements of an efficient waste management policy. In particular, the brown seaweed Undaria pinnatifida is a species largely present in the Venice Lagoon area, for which a removal strategy is actually mandatory. In this paper, we set up an eco-protocol for the best extraction and preparation procedures of the pigment, with the aim of finding an easy and affordable method for chlorophyll c extraction, exploring at the same time the possibility of using these algae within local sustainable management integrated strategies, among which the possible use of chlorophylls as a dye source in dye sensitized solar cells (DSSCs) is investigated. Experimental results suggest that the developed protocols are useful to optimize the chlorophyll c extraction, as shown by optical absorption spectroscopy measurements. The DSSCs built with the chlorophyll extracted by the proposed eco-protocol exhibit solar energy conversion efficiencies similar to those obtained following extraction protocols with larger environmental impacts. (paper)

  4. Correlated Sources in Distributed Networks--Data Transmission, Common Information Characterization and Inferencing

    Science.gov (United States)

    Liu, Wei

    2011-01-01

    Correlation is often present among observations in a distributed system. This thesis deals with various design issues when correlated data are observed at distributed terminals, including: communicating correlated sources over interference channels, characterizing the common information among dependent random variables, and testing the presence of…

  5. Renewable Energy Monitoring Protocol. Update 2010. Methodology for the calculation and recording of the amounts of energy produced from renewable sources in the Netherlands

    Energy Technology Data Exchange (ETDEWEB)

    Te Buck, S.; Van Keulen, B.; Bosselaar, L.; Gerlagh, T.; Skelton, T.

    2010-07-15

    This is the fifth, updated edition of the Dutch Renewable Energy Monitoring Protocol. The protocol, compiled on behalf of the Ministry of Economic Affairs, can be considered as a policy document that provides a uniform calculation method for determining the amount of energy produced in the Netherlands in a renewable manner. Because all governments and organisations use the calculation methods described in this protocol, developments in this field can be monitored well and consistently. The introduction of this protocol outlines its history and describes its set-up, validity and relationship with other similar documents and agreements. The Dutch Renewable Energy Monitoring Protocol is compiled by NL Agency, and all relevant parties were given the chance to provide input, which has been incorporated as far as possible. Statistics Netherlands (CBS) uses this protocol to calculate the amount of renewable energy produced in the Netherlands. These data are then used by the Ministry of Economic Affairs to gauge the realisation of policy objectives. In June 2009 the European Directive for energy from renewable sources was published, with renewable energy targets for the Netherlands. This directive uses a different calculation method - the gross energy end-use method - whilst the Dutch definition is based on the so-called substitution method. NL Agency was asked to add the calculation according to the gross end-use method, although this method is not clearly defined on a number of points. In describing the method, the unanswered questions become clear, as do, for example, the points the Netherlands should bring up in international discussions.
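
    The difference between the two accounting conventions mentioned above can be made concrete with a small worked example. All quantities below are hypothetical and chosen only to show how the substitution method (crediting avoided fossil primary energy) and the gross end-use method (counting renewable output directly) yield different renewable shares.

        # Hypothetical contrast of the two accounting conventions (all numbers
        # invented for illustration).
        renewable_electricity_pj = 30.0      # renewable electricity output, PJ
        total_primary_energy_pj = 3000.0     # national primary energy use, PJ
        gross_final_consumption_pj = 2200.0  # gross final energy consumption, PJ
        reference_efficiency = 0.40          # assumed fossil reference efficiency

        # Substitution method: credit the fossil primary energy avoided.
        share_substitution = (renewable_electricity_pj / reference_efficiency) / total_primary_energy_pj
        # Gross end-use method (EU directive): count renewable output directly.
        share_gross_end_use = renewable_electricity_pj / gross_final_consumption_pj

        print(f"substitution method:  {share_substitution:.1%}")   # 2.5%
        print(f"gross end-use method: {share_gross_end_use:.1%}")  # 1.4%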

  6. Entanglement distillation protocols and number theory

    International Nuclear Information System (INIS)

    Bombin, H.; Martin-Delgado, M.A.

    2005-01-01

    We show that the analysis of entanglement distillation protocols for qudits of arbitrary dimension D benefits from applying basic concepts from number theory, since the set Z_D^n associated with Bell diagonal states is a module rather than a vector space. We find that a partition of Z_D^n into divisor classes characterizes the invariant properties of mixed Bell diagonal states under local permutations. We construct a very general class of recursion protocols by means of unitary operations implementing these local permutations. We study these distillation protocols depending on whether we use twirling operations in the intermediate steps or not, and we study them both analytically and numerically with Monte Carlo methods. In the absence of twirling operations, we construct extensions of the quantum privacy algorithms valid for secure communications with qudits of any dimension D. When D is a prime number, we show that distillation protocols are optimal both qualitatively and quantitatively.
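
    The number-theoretic structure invoked above can be illustrated with a short sketch: partitioning Z_D into divisor classes by the greatest common divisor with D (shown for n = 1; the paper works with Z_D^n).

        # Partition Z_D into divisor classes: group x in {0, ..., D-1} by
        # gcd(x, D). Shown for n = 1; the paper works with Z_D^n.
        from collections import defaultdict
        from math import gcd

        def divisor_classes(D):
            classes = defaultdict(list)
            for x in range(D):
                classes[gcd(x, D)].append(x)
            return dict(classes)

        # Composite D yields one class per divisor; prime D yields only two,
        # one way to see why prime dimensions behave optimally.
        print(divisor_classes(6))   # {6: [0], 1: [1, 5], 2: [2, 4], 3: [3]}
        print(divisor_classes(5))   # {5: [0], 1: [1, 2, 3, 4]}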

  7. Characterizing multi-photon quantum interference with practical light sources and threshold single-photon detectors

    Science.gov (United States)

    Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos

    2018-04-01

    The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong‑Ou‑Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.

  8. Enhancing source location protection in wireless sensor networks

    Science.gov (United States)

    Chen, Juan; Lin, Zhengkui; Wu, Di; Wang, Bailing

    2015-12-01

    Wireless sensor networks are widely deployed in the internet of things to monitor valuable objects. Once an object is monitored, the sensor nearest to the object, known as the source, periodically informs the base station about the object's information. It is obvious that attackers can capture the object successfully by localizing the source. Thus, many protocols have been proposed to secure the source location. However, in this paper, we show that typical source location protection protocols generate phantom locations that are not only near the source but also highly localized. As a result, attackers can trace the source easily from these phantom locations. To address these limitations, we propose a protocol to enhance the source location protection (SLE). With phantom locations far away from the source and widely distributed, SLE improves source location anonymity significantly. Theory analysis and simulation results show that our SLE provides strong source location privacy preservation, and the average safety period increases by nearly one order of magnitude compared with existing work, at low communication cost.
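
    The abstract does not spell out the SLE algorithm itself, but the generic phantom-routing idea it builds on can be sketched: the source first forwards its message on a fixed-length random walk, so the apparent origin (the phantom) is displaced away from the real source. The walk below is a toy reconstruction under that assumption, not the authors' protocol.

        # Toy sketch of the generic phantom-routing idea (NOT the authors' SLE
        # algorithm, whose details the abstract does not give): the message is
        # first sent on an h-hop random walk, so the apparent origin -- the
        # "phantom" -- is displaced away from the real source.
        import random

        def phantom_walk(source, neighbors_of, hops):
            """Pick a phantom node via a non-backtracking random walk."""
            node, prev = source, None
            for _ in range(hops):
                options = [n for n in neighbors_of(node) if n != prev]
                node, prev = random.choice(options), node
            return node

        # Example on an (unbounded) grid: neighbors are the 4-adjacent cells.
        def grid_neighbors(p):
            x, y = p
            return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

        print(phantom_walk((0, 0), grid_neighbors, hops=10))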

  9. Protecting single-photon entanglement with practical entanglement source

    Science.gov (United States)

    Zhou, Lan; Ou-Yang, Yang; Wang, Lei; Sheng, Yu-Bo

    2017-06-01

    Single-photon entanglement (SPE) is important for quantum communication and quantum information processing. However, SPE is sensitive to photon loss. In this paper, we discuss a linear optical amplification protocol for protecting SPE. Different from previous protocols, we exploit a practical spontaneous parametric down-conversion (SPDC) source to realize the amplification, because the ideal entanglement source is unavailable in current quantum technology. Moreover, we prove that amplification using the entanglement generated from the SPDC source as an auxiliary is better than amplification assisted with single photons. The reason is that the vacuum state from the SPDC source will not affect the amplification, so it can be eliminated automatically. This protocol may be useful in future long-distance quantum communications.

  10. Production and characterization of a custom-made {sup 228}Th source with reduced neutron source strength for the Borexino experiment

    Energy Technology Data Exchange (ETDEWEB)

    Maneschg, W., E-mail: werner.maneschg@mpi-hd.mpg.de [Max Planck Institut fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Baudis, L. [Physik Institut der Universitaet Zuerich, Winterthurerstrasse 190, CH-8057 Zuerich (Switzerland); Dressler, R. [Paul Scherrer Institut, CH-5232 Villigen (Switzerland); Eberhardt, K. [Institut fuer Kernchemie, Universitaet Mainz, Fritz-Strassmann-Weg 2, D-55128 Mainz (Germany); Eichler, R. [Paul Scherrer Institut, CH-5232 Villigen (Switzerland); Keller, H. [Institut fuer Kernchemie, Universitaet Mainz, Fritz-Strassmann-Weg 2, D-55128 Mainz (Germany); Lackner, R. [Max Planck Institut fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Praast, B. [Institut fuer Kernchemie, Universitaet Mainz, Fritz-Strassmann-Weg 2, D-55128 Mainz (Germany); Santorelli, R. [Physik Institut der Universitaet Zuerich, Winterthurerstrasse 190, CH-8057 Zuerich (Switzerland); Schreiner, J. [Max Planck Institut fuer Kernphysik, Saupfercheckweg 1, D-69117 Heidelberg (Germany); Tarka, M. [Physik Institut der Universitaet Zuerich, Winterthurerstrasse 190, CH-8057 Zuerich (Switzerland); Wiegel, B.; Zimbal, A. [Physikalisch-Technische Bundesanstalt, Bundesallee 100, D-38116 Braunschweig (Germany)

    2012-07-11

    A custom-made ²²⁸Th source of several MBq activity was produced for the Borexino experiment to study the external background of the detector. The aim was to reduce the unwanted neutron emission produced via (α,n) reactions in the ceramics typically used for commercial ²²⁸Th sources. For this purpose a ThCl₄ solution was chemically converted into ThO₂ and embedded in a gold foil. The paper describes the production of the custom-made source and its characterization by means of γ-activity, dose rate and neutron source strength measurements. From γ-spectroscopic measurements it was deduced that the activity transfer from the initial solution to the final source was >91% (at 68% C.L.) and the final activity was (5.41±0.30) MBq. The dose rate was measured with two dosimeters, yielding 12.1 mSv/h and 14.3 mSv/h at 1 cm distance. The neutron source strength of the 5.41 MBq ²²⁸Th source was determined to be (6.59±0.85) s⁻¹.

  11. Characterizing performance improvement in primary care systems in Mesoamerica: A realist evaluation protocol.

    Science.gov (United States)

    Munar, Wolfgang; Wahid, Syed S; Curry, Leslie

    2018-01-03

    Background. Improving the performance of primary care systems in low- and middle-income countries (LMICs) may be a necessary condition for the achievement of universal health coverage in the age of the Sustainable Development Goals. The Salud Mesoamerica Initiative (SMI) is a large-scale, multi-country program that uses supply-side financial incentives directed at the central level of governments, together with continuous, external evaluation of public health sector performance, to induce improvements in primary care performance in eight LMICs. This study protocol seeks to explain whether and how these interventions generate program effects in El Salvador and Honduras. Methods. This study presents the protocol for a study that uses a realist evaluation approach to develop a preliminary program theory that hypothesizes the interactions between context, interventions and the mechanisms that trigger outcomes. The program theory was completed through a scoping review of relevant empirical, peer-reviewed and grey literature; a sense-making workshop with program stakeholders; and content analysis of key SMI documents. The study will use a multiple case-study design with embedded units and contrasting cases. We define as a case the two primary care systems of Honduras and El Salvador, each with different context characteristics. Data will be collected through in-depth interviews with program actors and stakeholders, documentary review, and non-participatory observation. Data analysis will use inductive and deductive approaches to identify causal patterns organized as 'context, mechanism, outcome' configurations. The findings will be triangulated with existing secondary, qualitative and quantitative data sources, and contrasted against relevant theoretical literature. The study will end with a refined program theory. Findings will be published following the guidelines generated by the Realist and Meta-narrative Evidence Syntheses study (RAMESES II). This study will be performed

  12. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  13. Broadening and Simplifying the First SETI Protocol

    Science.gov (United States)

    Michaud, M. A. G.

    The Declaration of Principles Concerning Activities Following the Detection of Extraterrestrial Intelligence, known informally as the First SETI Protocol, is the primary existing international guidance on this subject. During the fifteen years since the document was issued, several people have suggested revisions or additional protocols. This article proposes a broadened and simplified text that would apply to the detection of alien technology in our solar system as well as to electromagnetic signals from more remote sources.

  14. Exploring REACH as a potential data source for characterizing ecotoxicity in life cycle assessment.

    Science.gov (United States)

    Müller, Nienke; de Zwart, Dick; Hauschild, Michael; Kijko, Gaël; Fantke, Peter

    2017-02-01

    Toxicity models in life cycle impact assessment (LCIA) currently only characterize a small fraction of marketed substances, mostly because of limitations in the underlying ecotoxicity data. One approach to improve the current data situation in LCIA is to identify new data sources, such as the European Registration, Evaluation, Authorisation, and Restriction of Chemicals (REACH) database. The present study explored REACH as a potential data source for LCIA based on matching reported ecotoxicity data for substances that are currently also included in the United Nations Environment Programme/Society for Environmental Toxicology and Chemistry (UNEP/SETAC) scientific consensus model USEtox for characterizing toxicity impacts. Data are evaluated with respect to number of data points, reported reliability, and test duration, and are compared with data listed in USEtox at the level of hazardous concentration for 50% of the covered species per substance. The results emphasize differences between data available via REACH and in USEtox. The comparison of ecotoxicity data from REACH and USEtox shows potential for using REACH ecotoxicity data in LCIA toxicity characterization, but also highlights issues related to compliance of submitted data with REACH requirements as well as different assumptions underlying regulatory risk assessment under REACH versus data needed for LCIA. Thus, further research is required to address data quality, pre-processing, and applicability, before considering data submitted under REACH as a data source for use in LCIA, and also to explore additionally available data sources, published studies, and reports. Environ Toxicol Chem 2017;36:492-500. © 2016 SETAC.
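
    A minimal sketch of the matching step described above, assuming substances are keyed by CAS number and ecotoxicity endpoints are aggregated with a log-average; the column names and data are hypothetical, not the study's actual pipeline.

        # Join REACH ecotoxicity records to substances covered by USEtox, keyed
        # on CAS number, then aggregate endpoints with a log-average.
        import numpy as np
        import pandas as pd

        reach = pd.DataFrame({
            "cas": ["50-00-0", "71-43-2", "50-00-0"],
            "ec50_mg_per_l": [10.0, 250.0, 40.0],       # hypothetical endpoints
        })
        usetox = pd.DataFrame({"cas": ["50-00-0", "7440-38-2"]})

        matched = reach.merge(usetox, on="cas")          # substances in both sets
        summary = matched.groupby("cas")["ec50_mg_per_l"].apply(
            lambda s: float(np.exp(np.log(s).mean())))   # geometric mean
        print(summary)                                   # 50-00-0 -> 20.0 mg/L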

  15. Characterization of the neutron sources storage pool of the Neutron Standards Laboratory, using Montecarlo Techniques

    International Nuclear Information System (INIS)

    Campo Blanco, X.

    2015-01-01

    The development of irradiation-damage-resistant materials is one of the most important open fields in the design of experimental facilities and conceptual fusion power plants. The Neutron Standards Laboratory aims to contribute to this development by allowing the neutron irradiation of materials in its calibration neutron source storage pool. For this purpose, it is essential to characterize the pool itself in terms of the neutron fluence and spectra due to the calibration neutron sources. In this work, the main features of this facility are presented and the characterization of the storage pool is carried out. Finally, an application of the obtained results to the neutron irradiation of materials is shown.

  16. A compact time-of-flight mass spectrometer for ion source characterization

    International Nuclear Information System (INIS)

    Chen, L.; Wan, X.; Jin, D. Z.; Tan, X. H.; Huang, Z. X.; Tan, G. B.

    2015-01-01

    A compact time-of-flight mass spectrometer, with overall dimensions of about 413 × 250 × 414 mm, based on orthogonal injection and angle reflection has been developed for ion source characterization. The configuration and principle of the time-of-flight mass spectrometer are introduced in this paper. The mass resolution is optimized to about 1690 (FWHM), and the ion energy detection range is tested to be between about 3 and 163 eV with the help of an electron impact ion source. The high mass resolution and compact configuration make this spectrometer useful as a valuable diagnostic for fundamental ion spectra research and for studying the mass-to-charge composition of plasmas with a wide range of parameters.
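
    The quoted mass resolution can be related to timing quantities through the textbook linear-TOF relation t = L·sqrt(m/(2qV)). Since the instrument described uses an angle reflectron, the effective path length differs, so the sketch below (with assumed drift length and voltage) is illustrative only.

        # Textbook linear-TOF relation: t = L * sqrt(m / (2 q V)). The real
        # instrument is a reflectron, so the effective path length differs;
        # values here are purely illustrative.
        from math import sqrt

        AMU = 1.660539e-27   # kg per atomic mass unit
        Q = 1.602177e-19     # elementary charge, C

        def flight_time(m_over_z, L=0.5, V=1000.0):
            """Flight time (s) for ions of m/z (amu per charge), drift length
            L (m), acceleration voltage V (V)."""
            return L * sqrt(m_over_z * AMU / (2.0 * Q * V))

        def mass_resolution(t, dt_fwhm):
            """m/dm = t / (2 dt) for a TOF peak of temporal FWHM dt."""
            return t / (2.0 * dt_fwhm)

        t = flight_time(40.0)                        # e.g. Ar+ at m/z = 40
        print(f"{t * 1e6:.1f} us")                   # ~7.2 us
        print(mass_resolution(t, dt_fwhm=t / 3380))  # recovers R = 1690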

  17. Protocol for Communication Networking for Formation Flying

    Science.gov (United States)

    Jennings, Esther; Okino, Clayton; Gao, Jay; Clare, Loren

    2009-01-01

    An application-layer protocol and a network architecture have been proposed for data communications among multiple autonomous spacecraft that are required to fly in a precise formation in order to perform scientific observations. The protocol could also be applied to other autonomous vehicles operating in formation, including robotic aircraft, robotic land vehicles, and robotic underwater vehicles. A group of spacecraft or other vehicles to which the protocol applies could be characterized as a precision-formation-flying (PFF) network, and each vehicle could be characterized as a node in the PFF network. In order to support precise formation flying, it would be necessary to establish a corresponding communication network, through which the vehicles could exchange position and orientation data and formation-control commands. The communication network must enable communication during early phases of a mission, when little positional knowledge is available. Particularly during early mission phases, the distances among vehicles may be so large that communication could be achieved only by relaying across multiple links. The large distances and need for omnidirectional coverage would limit communication links to operation at low bandwidth during these mission phases. Once the vehicles were in formation and distances were shorter, the communication network would be required to provide high-bandwidth, low-jitter service to support tight formation-control loops. The proposed protocol and architecture, intended to satisfy the aforementioned and other requirements, are based on a standard layered-reference-model concept. The proposed application protocol would be used in conjunction with conventional network, data-link, and physical-layer protocols. The proposed protocol includes the ubiquitous Institute of Electrical and Electronics Engineers (IEEE) 802.11 medium access control (MAC) protocol to be used in the data-link layer. In addition to its widespread and proven use in

  18. Protocol-based care: the standardisation of decision-making?

    Science.gov (United States)

    Rycroft-Malone, Jo; Fontenla, Marina; Seers, Kate; Bick, Debra

    2009-05-01

    To explore how protocol-based care affects clinical decision-making. In the context of evidence-based practice, protocol-based care is a mechanism for facilitating the standardisation of care and streamlining decision-making through rationalising the information with which to make judgements and ultimately decisions. However, whether protocol-based care does, in the reality of practice, standardise decision-making is unknown. This paper reports on a study that explored the impact of protocol-based care on nurses' decision-making. Theoretically informed by realistic evaluation and the Promoting Action on Research Implementation in Health Services framework, a case study design using ethnographic methods was used. Two sites were purposively sampled: a diabetic and endocrine unit and a cardiac medical unit. Within each site, data collection included observation, post-observation semi-structured interviews with staff and patients, field notes, feedback sessions and document review. Data were inductively and thematically analysed. Decisions made by nurses in both sites varied according to many different and interacting factors. While several standardised care approaches were available for use, in reality, a variety of information sources informed decision-making. The primary approach to knowledge exchange and acquisition was person-to-person; decision-making was a social activity. Rarely were standardised care approaches obviously referred to; nurses described following a mental flowchart, not necessarily linked to a particular guideline or protocol. When standardised care approaches were used, it was reported that they were used flexibly and particularised. While the logic of protocol-based care is algorithmic, in the reality of clinical practice, other sources of information supported nurses' decision-making process. This has significant implications for the political goal of standardisation. The successful implementation and judicious use of tools such as

  19. Characterization of the γ background in epithermal neutron scattering measurements at pulsed neutron sources

    International Nuclear Information System (INIS)

    Pietropaolo, A.; Tardocchi, M.; Schooneveld, E.M.; Senesi, R.

    2006-01-01

    This paper reports the characterization of the different components of the γ background in epithermal neutron scattering experiments at pulsed neutron sources. The measurements were performed on the VESUVIO spectrometer at ISIS spallation neutron source. These measurements, carried out with a high purity germanium detector, aim to provide detailed information for the investigation of the effect of the γ energy discrimination on the signal-to-background ratio. It is shown that the γ background is produced by different sources that can be identified with their relative time structure and relative weight

  20. Characterization of polarized electrons coming from helium post-discharge source

    International Nuclear Information System (INIS)

    Zerhouni, R.O.

    1996-02-01

    The objective of this thesis is the characterization of the polarized electron source developed at Orsay and foreseen to be coupled to a cw accelerator for nuclear physics experiments. The principle of operation of this source relies on the chemo-ionization reaction between optically aligned helium triplet metastable atoms and CO₂ molecules. The helium metastable atoms are generated by injection of purified helium into a 2.45 GHz microwave discharge. They are optically pumped using two beams of 1.083 μm resonant radiation, one circularly and the other linearly polarized. Both beams are delivered by a high power LNA laser. The metastable atomic beam interacts with a dense (10¹³ cm⁻³) spin-singlet CO₂ target. A fraction of the produced polarized electrons is extracted and collimated by electrostatic optics, and directed either to the Mott polarimeter or to the Faraday cup in order to measure the electron polarization and extracted current. For current intensities of 100 μA, the electronic polarization reaches 62%, which shows that this type of source has reached the same high competitive level as the most performing GaAs ones. Additionally, the optical properties of the extracted beam are found to be excellent. These properties (energy spread and emittance) reflect the electron energy distribution in the chemo-ionization region. The upper limit of the beam's energy spread is 0.24 eV, since this value characterizes our instrumental resolution. The average normalized emittance is found to be 0.6 π mm·mrad. These values satisfy the requirements of most cw accelerators. All the measurements were performed at low electron beam transport energies (1 to 2 keV). (author). 105 refs., 54 figs., 4 tabs
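
    The polarization figure quoted above comes from Mott polarimetry, whose core arithmetic is simple: the left-right scattering asymmetry divided by the analyzer's effective Sherman function. The sketch below uses hypothetical counts and a typical Sherman function value, not the thesis data.

        # Mott polarimetry: polarization = (left-right asymmetry) / S_eff.
        # Counts and Sherman function below are hypothetical, not thesis data.
        def mott_polarization(n_left, n_right, sherman_eff):
            asymmetry = (n_left - n_right) / (n_left + n_right)
            return asymmetry / sherman_eff

        p = mott_polarization(n_left=58_000, n_right=42_000, sherman_eff=0.26)
        print(f"P = {p:.2f}")   # ~0.62, the scale of the 62% reported above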

  1. WDM Network and Multicasting Protocol Strategies

    Directory of Open Access Journals (Sweden)

    Pinar Kirci

    2014-01-01

    Optical technology is gaining extensive attention and ever increasing improvement because of the huge amount of network traffic caused by the growing number of internet users and their rising demands. With wavelength division multiplexing (WDM) and optical burst switching (OBS), it is easier to take advantage of optical networks, and these technologies are the best choices for constructing WDM networks with low delay rates and better data transparency. Furthermore, multicasting in WDM is an urgent solution for bandwidth-intensive applications. In the paper, a new multicasting protocol with OBS is proposed. The protocol depends on a leaf-initiated structure. The network is composed of source, ingress switches, intermediate switches, edge switches, and client nodes. The performance of the protocol is examined with the Just Enough Time (JET) and Just In Time (JIT) reservation protocols. Also, the paper covers most of the recent advances in WDM multicasting in optical networks. WDM multicasting in optical networks is discussed under three common subtitles: broadcast-and-select networks, wavelength-routed networks, and OBS networks. Also, in the paper, multicast routing protocols are briefly summarized and optical burst switched WDM networks are investigated with the proposed multicast schemes.
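
    The distinction between the JET and JIT reservation protocols referenced above rests on the offset time between the control packet and its data burst; a minimal sketch of the JET offset computation follows, with illustrative delay values.

        # Just Enough Time (JET): the control packet precedes its data burst by
        # an offset covering the summed per-hop processing delays, so each
        # switch is configured "just in time". Delay values are illustrative.
        def jet_offset(num_hops, per_hop_processing_s, guard_s=0.0):
            return num_hops * per_hop_processing_s + guard_s

        offset = jet_offset(num_hops=5, per_hop_processing_s=10e-6, guard_s=5e-6)
        print(f"burst follows its control packet by {offset * 1e6:.0f} us")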

  2. Spectrally resolved, broadband frequency response characterization of photodetectors using continuous-wave supercontinuum sources

    Science.gov (United States)

    Choudhury, Vishal; Prakash, Roopa; Nagarjun, K. P.; Supradeepa, V. R.

    2018-02-01

    A simple and powerful method using continuous wave supercontinuum lasers is demonstrated to perform spectrally resolved, broadband frequency response characterization of photodetectors in the NIR band. In contrast to existing techniques, this method allows for a simple system to achieve the goal, requiring just a standard continuous wave (CW) high-power fiber laser source and an RF spectrum analyzer. From our recent work, we summarize methods to easily convert any high-power fiber laser into a CW supercontinuum. These sources in the time domain exhibit interesting properties all the way down to the femtosecond time scale. This enables measurement of the broadband frequency response of photodetectors, while the wide optical spectrum of the supercontinuum can be spectrally filtered to obtain this information in a spectrally resolved fashion. The method involves looking at the RF spectrum of the output of a photodetector under test when incident with the supercontinuum. By using prior knowledge of the RF spectrum of the source, the frequency response can be calculated. We utilize two techniques for calibration of the source spectrum, one using a prior measurement and the other relying on a fitted model. Here, we characterize multiple photodetectors from 150 MHz bandwidth to >20 GHz bandwidth at multiple bands in the NIR region. We utilize a supercontinuum source spanning over 700 nm of bandwidth from 1300 nm to 2000 nm. For spectrally resolved measurement, we utilize multiple wavelength bands, such as around 1400 nm and 1600 nm. Interesting behavior was observed in the frequency response of the photodetectors when comparing broadband spectral excitation versus narrower band excitation.
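
    The measurement principle described above reduces to a division of RF spectra: the detector's response is the measured RF power spectrum of its output divided by the calibrated RF spectrum of the supercontinuum itself. The sketch below demonstrates this on synthetic spectra; real inputs would come from the RF spectrum analyzer.

        # The detector response is the measured RF spectrum divided by the
        # calibrated source RF spectrum. Synthetic spectra for illustration.
        import numpy as np

        f = np.linspace(0.1e9, 25e9, 500)           # RF frequencies, Hz
        source_psd = 1.0 / (1.0 + (f / 40e9) ** 2)  # assumed source RF spectrum
        detector = 1.0 / (1.0 + (f / 8e9) ** 2)     # "unknown" detector response
        measured_psd = source_psd * detector        # what the analyzer records

        response_db = 10 * np.log10(measured_psd / source_psd)
        bw_3db = f[np.argmin(np.abs(response_db + 3.0))]
        print(f"estimated 3 dB bandwidth: {bw_3db / 1e9:.1f} GHz")  # ~8 GHz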

  3. Characterization of our source of polarization-entangled photons

    Science.gov (United States)

    Adenier, Guillaume

    2012-12-01

    We present our source of polarization-entangled photons, which consists of orthogonally polarized and collinear parametric down-converted photons sent to the same input of a non-polarizing beam splitter. We show that a too straightforward characterization of the quantum state cannot account for all the experimental observations, in particular for the behavior of the double counts, which are the coincidences produced whenever both photons are dispatched by the beam splitter to the same measuring station (either Alice or Bob). We argue that in order to account for all observations, the state has to be entangled in polarization before the non-polarizing beam splitter, and we discuss the intriguing and nevertheless essential role of the time compensation required to obtain such a polarization entanglement.

  4. Protocols and guidelines for mobile chest radiography in Irish public hospitals

    International Nuclear Information System (INIS)

    Kelly, Amanda; Toomey, Rachel

    2015-01-01

    Background: The mobile chest radiograph is a highly variable examination, in both technique and setting. Protocols and guidelines are one method by which examinations can be standardised, and provide information when one is unsure how to proceed. This study was undertaken to investigate the existence of protocols and guidelines available for the mobile chest radiograph, to establish their nature and compare them under a variety of headings. Methodology: A postal survey was administered to the Radiography Service Managers in the public hospitals under the governance of the Health Service Executive (HSE) in Ireland. The survey contained questions regarding hospital demographics, contents of existing protocols or guidelines, and why a protocol or guideline was not in place, if this was the case. Results: The response rate to the survey was 62% (n = 24). Those that had a specific protocol in place amounted to 63% (n = 15), 71% (n = 17) had a specific guideline, and 63% (n = 15) had both. Twenty nine percent (n = 7) had no specific protocol/guideline in place. Scientific research (88%, n = 15) and radiographer experience (82%, n = 14) were the most common sources used to inform protocols and guidelines. Conclusions: There are protocols and guidelines available to radiographers for mobile chest radiography in the majority of public hospitals in Ireland. The nature of the protocols and guidelines generally coincides with the HSE guidance regarding what sources of information should be used and how often they should be updated

  5. Latency correction of event-related potentials between different experimental protocols

    Science.gov (United States)

    Iturrate, I.; Chavarriaga, R.; Montesano, L.; Minguez, J.; Millán, JdR

    2014-06-01

    Objective. A fundamental issue in EEG event-related potentials (ERPs) studies is the amount of data required to have an accurate ERP model. This also impacts the time required to train a classifier for a brain-computer interface (BCI). This issue is mainly due to the poor signal-to-noise ratio and the large fluctuations of the EEG caused by several sources of variability. One of these sources is directly related to the experimental protocol or application designed, and may affect the amplitude or latency of ERPs. This usually prevents BCI classifiers from generalizing among different experimental protocols. In this paper, we analyze the effect of the amplitude and the latency variations among different experimental protocols based on the same type of ERP. Approach. We present a method to analyze and compensate for the latency variations in BCI applications. The algorithm has been tested on two widely used ERPs (P300 and observation error potentials), in three experimental protocols in each case. We report the ERP analysis and single-trial classification. Main results. The results obtained show that the designed experimental protocols significantly affect the latency of the recorded potentials but not the amplitudes. Significance. These results show how the use of latency-corrected data can be used to generalize the BCIs, reducing the calibration time when facing a new experimental protocol.
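
    One standard way to estimate and compensate such latency shifts (not necessarily the authors' exact algorithm) is to find the lag that maximizes the cross-correlation between ERP templates from the two protocols, as in the synthetic sketch below.

        # Estimate the inter-protocol latency shift as the lag maximizing the
        # cross-correlation of two ERP templates, then align them. Synthetic
        # P300-like waveforms for illustration.
        import numpy as np

        fs = 256                                      # sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        erp_a = np.exp(-((t - 0.30) ** 2) / 0.002)    # peak at 300 ms
        erp_b = np.exp(-((t - 0.35) ** 2) / 0.002)    # same shape, 350 ms

        lags = np.arange(-len(t) + 1, len(t))
        lag = lags[np.argmax(np.correlate(erp_b, erp_a, mode="full"))]
        print(f"estimated latency shift: {lag / fs * 1000:.0f} ms")  # ~50 ms

        erp_b_aligned = np.roll(erp_b, -lag)          # crude alignment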

  6. Monte Carlo dosimetric characterization of the Flexisource Co-60 high-dose-rate brachytherapy source using PENELOPE.

    Science.gov (United States)

    Almansa, Julio F; Guerrero, Rafael; Torres, Javier; Lallena, Antonio M

    ⁶⁰Co sources have been commercialized as an alternative to ¹⁹²Ir sources for high-dose-rate (HDR) brachytherapy. One of them is the Flexisource Co-60 HDR source manufactured by Elekta. The only available dosimetric characterization of this source is that of Vijande et al. [J Contemp Brachytherapy 2012;4:34-44], whose results were not included in the AAPM/ESTRO consensus document. In that work, the dosimetric quantities were calculated as averages of the results obtained with the Geant4 and PENELOPE Monte Carlo (MC) codes, though for other sources significant differences have been reported between the values obtained with these two codes. The aim of this work is to perform the dosimetric characterization of the Flexisource Co-60 HDR source using PENELOPE. The MC simulation code PENELOPE (v. 2014) has been used. Following the recommendations of the AAPM/ESTRO report, the radial dose function, the anisotropy function, the air-kerma strength, the dose rate constant, and the absorbed dose rate in water have been calculated. The results we have obtained exceed those of Vijande et al. In particular, the absorbed dose rate constant is ∼0.85% larger. A similar difference is also found in the other dosimetric quantities. The effect of the electrons emitted in the decay of ⁶⁰Co, usually neglected in this kind of simulation, is significant up to distances of 0.25 cm from the source. The systematic and significant differences we have found between the PENELOPE results and the average values of Vijande et al. indicate that the dosimetric characterizations carried out with the various MC codes should be provided independently. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
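
    The dosimetric quantities listed above combine in the AAPM TG-43 dose-rate equation D(r,θ) = S_K · Λ · [G(r,θ)/G(r₀,θ₀)] · g(r) · F(r,θ). The sketch below evaluates it in the point-source approximation with placeholder values of the right order of magnitude; the paper's tabulated data are not reproduced here.

        # AAPM TG-43 dose-rate equation in the point-source approximation:
        #   D(r) = S_K * Lambda * (r0 / r)**2 * g(r) * F(r, theta)
        # Numbers are placeholders of the right order for an HDR 60Co source.
        def tg43_dose_rate(S_K, Lam, r, g_r, F, r0=1.0):
            """Dose rate (cGy/h) at radius r (cm) on the transverse axis."""
            return S_K * Lam * (r0 / r) ** 2 * g_r * F

        # S_K in air-kerma strength units U = cGy cm^2 / h; Lambda in cGy/(h U).
        print(tg43_dose_rate(S_K=40_000, Lam=1.087, r=2.0, g_r=0.97, F=1.0))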

  7. Sources of uncertainty in characterizing health risks from flare emissions

    International Nuclear Information System (INIS)

    Hrudey, S.E.

    2000-01-01

    The assessment of health risks associated with gas flaring was the focus of this paper. Health risk assessment for environmental decision-making includes the evaluation of scientific data to identify hazards and to determine dose-response assessments, exposure assessments and risk characterization. Gas flaring has been the cause of public health concerns in recent years, most notably since 1996, following a published report by the Alberta Research Council. Some of the major sources of uncertainty associated with identifying hazardous contaminants in flare emissions were discussed. Methods to predict human exposures to emitted contaminants were examined, along with risk characterization of predicted exposures to several identified contaminants. One of the problems is that elemental uncertainties exist regarding flare emissions, which places limitations on the degree of reassurance that risk assessment can provide; nevertheless, risk assessment can offer some guidance to those responsible for flare emissions.

  8. Physics-electrical hybrid model for real time impedance matching and remote plasma characterization in RF plasma sources.

    Science.gov (United States)

    Sudhir, Dass; Bandyopadhyay, M; Chakraborty, A

    2016-02-01

    Plasma characterization and impedance matching are an integral part of any radio frequency (RF) based plasma source. In long pulse operation, particularly in high power operation where the plasma load may vary for different reasons (e.g., pressure and power), online tuning of the impedance matching circuit and remote plasma density estimation are very useful. In some cases, due to remote interfaces, radio-activation and maintenance issues, power probes are not allowed to be incorporated in the ion source design for plasma characterization. Therefore, for characterization and impedance matching, more remote schemes are envisaged. Two such schemes, based on an air-core transformer model of the inductively coupled plasma (ICP), have been suggested by the same authors in this regard [M. Bandyopadhyay et al., Nucl. Fusion 55, 033017 (2015); D. Sudhir et al., Rev. Sci. Instrum. 85, 013510 (2014)]. To account for the influence of the RF field interaction with the plasma in determining its impedance, a physics code, HELIC [D. Arnush, Phys. Plasmas 7, 3042 (2000)], is coupled with the transformer model. This model can be useful for both types of RF sources, i.e., ICP and helicon sources.
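
    In the air-core transformer picture referenced above, the plasma acts as a lossy single-turn secondary coupled to the antenna through a mutual inductance, so the antenna terminals see Z_in = R1 + jωL1 + (ωM)²/(R2 + jωL2). The element values in the sketch below are placeholders, not fitted source parameters.

        # Air-core transformer model of an ICP: the plasma is a lossy one-turn
        # secondary (R2, L2) coupled to the antenna (R1, L1) via mutual
        # inductance M. Element values are placeholders, not fitted data.
        import numpy as np

        def icp_input_impedance(f, R1, L1, R2, L2, M):
            w = 2 * np.pi * f
            return R1 + 1j * w * L1 + (w * M) ** 2 / (R2 + 1j * w * L2)

        Z = icp_input_impedance(f=1e6, R1=0.1, L1=5e-6, R2=2.0, L2=0.5e-6, M=1e-6)
        print(f"Z_in = {Z.real:.2f} + j{Z.imag:.2f} ohm")
        # Tracking Z_in online drives the matching network; the fitted R2 term
        # is what allows a remote estimate of the plasma density.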

  9. Experimental eavesdropping attack against Ekert's protocol based on Wigner's inequality

    International Nuclear Information System (INIS)

    Bovino, F. A.; Colla, A. M.; Castagnoli, G.; Castelletto, S.; Degiovanni, I. P.; Rastello, M. L.

    2003-01-01

    We experimentally implemented an eavesdropping attack against the Ekert protocol for quantum key distribution based on the Wigner inequality. We demonstrate a serious lack of security of this protocol when the eavesdropper gains total control of the source. In addition we tested a modified Wigner inequality which should guarantee a secure quantum key distribution
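
    For reference, the Wigner inequality underlying the protocol states that, for local hidden variables, W = P++(a,b) + P++(b,c) − P++(a,c) ≥ 0, while the photon polarization singlet gives P++(x,y) = ½·sin²(x−y) and hence W = −1/8 at 30° steps. The sketch below evaluates the quantum prediction only; the experiment estimates the same probabilities from coincidence counts.

        # Wigner's inequality: local hidden variables require
        #   W = P++(a,b) + P++(b,c) - P++(a,c) >= 0,
        # while the polarization singlet gives P++(x,y) = 0.5 * sin^2(x - y).
        import numpy as np

        def p_plus_plus(theta1, theta2):
            return 0.5 * np.sin(theta1 - theta2) ** 2

        a, b, c = np.deg2rad([0.0, 30.0, 60.0])
        W = p_plus_plus(a, b) + p_plus_plus(b, c) - p_plus_plus(a, c)
        print(W)   # -0.125 < 0: the quantum prediction violates the bound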

  10. User-oriented end-to-end transport protocols for the real-time distribution of telemetry data from NASA spacecraft

    Science.gov (United States)

    Hooke, A. J.

    1979-01-01

    A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system is thus organized on a message rather than a word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
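
    A toy encoding of the two headers described above (a source packet carrying a sequence number and length, wrapped in a transport frame carrying a frame count and options field); the field widths are illustrative assumptions, not the NEEDS standard.

        # Toy byte layout for the two headers described; field widths are
        # illustrative, not the NEEDS standard.
        import struct

        def make_source_packet(seq, payload: bytes):
            # source packet header: (sequence number, total payload length)
            return struct.pack(">HH", seq & 0xFFFF, len(payload)) + payload

        def make_transport_frame(frame_count, options, packets: bytes):
            # transport frame header: (frame sequence count, options field)
            return struct.pack(">IH", frame_count & 0xFFFFFFFF, options) + packets

        frame = make_transport_frame(7, 0, make_source_packet(42, b"\x01\x02\x03"))
        print(frame.hex())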

  11. Evolution of a C2 protocol gateway

    CSIR Research Space (South Africa)

    Duvenhage, A

    2008-06-01

    real-time or slower than real-time, depending on the type of link and the data source: The gateway normally reads logged raw protocol data much faster than real-time; a simulation could also start running slower than real-time if it requires too... inherent support for logging and playback of the raw protocol data. From a software architecture perspective, each link has a corresponding link component that is responsible for opening and closing the connection, as well as reading and writing...

  12. Lead Sampling Protocols: Why So Many and What Do They Tell You?

    Science.gov (United States)

    Sampling protocols can be broadly categorized based on their intended purpose of 1) Pb regulatory compliance/corrosion control efficacy, 2) Pb plumbing source determination or Pb type identification, and 3) Pb exposure assessment. Choosing the appropriate protocol is crucial to p...

  13. Applications of Ground-based Mobile Atmospheric Monitoring: Real-time Characterization of Source Emissions and Ambient Concentrations

    Science.gov (United States)

    Goetz, J. Douglas

    Gas and particle phase atmospheric pollution are known to impact human and environmental health as well as contribute to climate forcing. While many atmospheric pollutants are regulated or controlled in the developed world, uncertainty still remains regarding the impacts from under-characterized emission sources, the interaction of anthropogenic and naturally occurring pollution, and the chemical and physical evolution of emissions in the atmosphere, among many other uncertainties. Because of the complexity of atmospheric pollution, many types of monitoring have been implemented in the past, but none are capable of perfectly characterizing the atmosphere, and each monitoring type has known benefits and disadvantages. Ground-based mobile monitoring with fast-response in-situ instrumentation has been used in the past for a number of applications that fill data gaps not possible with other types of atmospheric monitoring. In this work, ground-based mobile monitoring was implemented to quantify emissions from under-characterized emission sources using both moving and portable applications, and used in a novel way for the characterization of ambient concentrations. In the Marcellus Shale region of Pennsylvania two mobile platforms were used to estimate emission rates from infrastructure associated with the production and transmission of natural gas using two unique methods. One campaign investigated emissions of aerosols, volatile organic compounds (VOCs), methane, carbon monoxide (CO), nitrogen dioxide (NO₂), and carbon dioxide (CO₂) from natural gas wells, well development practices, and compressor stations using tracer release ratio methods and a developed fenceline tracer release correction factor. Another campaign investigated emissions of methane from Marcellus Shale gas wells and infrastructure associated with two large national transmission pipelines using the "Point Source Gaussian" method described in the EPA OTM-33a. During both campaigns ambient concentrations
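
    The tracer release ratio method mentioned above reduces to a simple proportionality: the source emission rate equals the known tracer release rate scaled by the ratio of the background-corrected downwind enhancements, with a molecular-weight correction. All values in the sketch are illustrative, and a methane source with an SF6 tracer is assumed.

        # Tracer release ratio: source rate = tracer rate * (enhancement ratio)
        # * (molecular-weight ratio). Methane source and SF6 tracer assumed;
        # all numbers are illustrative.
        def tracer_ratio_emission(q_tracer_kg_h, dc_source_ppb, dc_tracer_ppb,
                                  mw_source=16.04, mw_tracer=146.06):
            """Source emission rate (kg/h) from background-corrected downwind
            enhancements of source gas and tracer (ppb)."""
            return q_tracer_kg_h * (dc_source_ppb / dc_tracer_ppb) * (mw_source / mw_tracer)

        print(tracer_ratio_emission(q_tracer_kg_h=0.5,
                                    dc_source_ppb=120.0,
                                    dc_tracer_ppb=3.0))   # ~2.2 kg/h methane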

  14. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  15. Characterization of dynamic changes of current source localization based on spatiotemporal fMRI constrained EEG source imaging

    Science.gov (United States)

    Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun

    2018-06-01

    Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. This improvement can be

  16. Physiological and biochemical characterization of Azospirillum brasilense strains commonly used as plant growth-promoting rhizobacteria.

    Science.gov (United States)

    Di Salvo, Luciana P; Silva, Esdras; Teixeira, Kátia R S; Cote, Rosalba Esquivel; Pereyra, M Alejandra; García de Salamone, Inés E

    2014-12-01

    Azospirillum is a plant growth-promoting rhizobacteria (PGPR) genus that has been widely studied and utilized in agricultural inoculants. Isolation of new strains under different environmental conditions allows access to genetic diversity and improves the success of inoculation procedures. Historically, the isolation of this genus has been performed using a few traditional culture media. In this work we characterized the physiology and biochemistry of five different A. brasilense strains commonly used as cereal inoculants. The aim of this work is to prompt a revision of some concepts concerning the most widely used protocols for isolating and characterizing this bacterium. We characterized their growth in different traditional and non-traditional culture media, evaluated some PGPR mechanisms, and characterized their profiles of fatty acid methyl esters and carbon-source utilization. This work shows, for the first time, differences in both profiles and in the ACC deaminase activity of A. brasilense strains. We also show unexpected results obtained in some of the evaluated culture media. The results obtained here, together with an exhaustive review of the literature, revealed that it is not appropriate to draw conclusions about a bacterial species without analyzing several strains, and that studies and laboratory techniques must continue to be developed to improve isolation and characterization protocols. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. NGSI student activities in open source information analysis in support of the training program of the U.S. DOE laboratories for the entry into force of the additional protocol

    Energy Technology Data Exchange (ETDEWEB)

    Sandoval, M Analisa [Los Alamos National Laboratory; Uribe, Eva C [Los Alamos National Laboratory; Sandoval, Marisa N [Los Alamos National Laboratory; Boyer, Brian D [Los Alamos National Laboratory; Stevens, Rebecca S [Los Alamos National Laboratory

    2009-01-01

    In 2008 a joint team from Los Alamos National Laboratory (LANL) and Brookhaven National Laboratory (BNL), consisting of specialists in training IAEA inspectors in the use of complementary access activities, formulated a training program to prepare the U.S. DOE laboratories for the entry into force of the Additional Protocol. As a major part of this activity, LANL summer interns provided open source information analysis to the LANL-BNL mock inspection team. They were part of the Next Generation Safeguards Initiative's (NGSI) summer intern program aimed at producing the next generation of safeguards specialists. This paper describes how they used open source information to 'backstop' the LANL-BNL team's effort to construct meaningful Additional Protocol Complementary Access training scenarios for each of the three DOE laboratories: Lawrence Livermore National Laboratory, Idaho National Laboratory, and Oak Ridge National Laboratory.

  18. Comparison of MANET Routing Protocols in Different Traffic and Mobility Models

    Directory of Open Access Journals (Sweden)

    J. Baraković

    2010-06-01

    Full Text Available Routing protocol election in a MANET (Mobile Ad Hoc Network) is a great challenge because of its frequent topology changes and routing overhead. This paper compares the performance of three routing protocols, Destination Sequenced Distance Vector (DSDV), Ad Hoc On-demand Distance Vector (AODV), and Dynamic Source Routing (DSR), based on analysis of results obtained from simulations with different load and mobility scenarios performed with Network Simulator version 2 (NS-2). In low-load, low-mobility scenarios the routing protocols perform in a similar manner. However, as mobility and load increase, DSR outperforms the AODV and DSDV protocols.

  19. Characterization of strong (241)Am sources.

    Science.gov (United States)

    Vesterlund, Anna; Chernikova, Dina; Cartemo, Petty; Axell, Kåre; Nordlund, Anders; Skarnemark, Gunnar; Ekberg, Christian; Ramebäck, Henrik

    2015-05-01

    Gamma ray spectra of strong (241)Am sources may reveal information about the source composition, as other radioactive nuclides such as progeny and radioactive impurities may be present. In this work, the possibility of using gamma spectrometry to identify inherent signatures in (241)Am sources, in order to differentiate sources from each other, is investigated. The studied signatures are age, i.e., the time passed since the last chemical separation, and the presence of impurities. The spectra of some sources show a number of Doppler-broadened peaks, which indicate the occurrence of nuclear reactions on light elements within the sources. The results show that the investigated sources can be differentiated by age and/or presence of impurities. These spectral features would be useful information in a national nuclear forensics library (NNFL) in cases where visual information on the source, e.g. the source number, is unavailable. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Direct Cellular Lysis/Protein Extraction Protocol for Soil Metaproteomics

    Energy Technology Data Exchange (ETDEWEB)

    Chourey, Karuna [ORNL; Jansson, Janet [Lawrence Berkeley National Laboratory (LBNL); Verberkmoes, Nathan C [ORNL; Shah, Manesh B [ORNL; Chavarria, Krystle L. [Lawrence Berkeley National Laboratory (LBNL); Tom, Lauren M [Lawrence Berkeley National Laboratory (LBNL); Brodie, Eoin L. [Lawrence Berkeley National Laboratory (LBNL); Hettich, Robert L [ORNL

    2010-01-01

    We present a novel direct protocol for deep proteome characterization of microorganisms in soil. The method employs thermally assisted detergent-based cellular lysis (SDS) of soil samples, followed by TCA precipitation for proteome extraction/cleanup prior to liquid chromatography-mass spectrometric characterization. This approach was developed and optimized using different soils inoculated with genome-sequenced bacteria (Gram-negative Pseudomonas putida or Gram-positive Arthrobacter chlorophenolicus). Direct soil protein extraction was compared to protein extraction from cells isolated from the soil matrix prior to lysis (indirect method). Each approach resulted in identification of greater than 500 unique proteins, with a wide range in molecular mass and functional categories. To our knowledge, this SDS-TCA approach enables the deepest proteome characterizations of microbes in soil to date, without significant biases in protein size, localization, or functional category compared to pure cultures. This protocol should provide a powerful tool for ecological studies of soil microbial communities.

  1. Source characterization and exposure modeling of gas-phase polycyclic aromatic hydrocarbon (PAH) concentrations in Southern California

    Science.gov (United States)

    Masri, Shahir; Li, Lianfa; Dang, Andy; Chung, Judith H.; Chen, Jiu-Chiuan; Fan, Zhi-Hua (Tina); Wu, Jun

    2018-03-01

    Airborne exposures to polycyclic aromatic hydrocarbons (PAHs) are associated with adverse health outcomes. Because personal air measurements of PAHs are labor intensive and costly, spatial PAH exposure models are useful for epidemiological studies. However, few studies provide adequate spatial coverage to reflect intra-urban variability of ambient PAHs. In this study, we collected 39-40 weekly gas-phase PAH samples in southern California twice in summer and twice in winter, 2009, in order to characterize PAH source contributions and develop spatial models that can estimate gas-phase PAH concentrations at a high resolution. A spatial mixed regression model was constructed, including such variables as roadway, traffic, land-use, vegetation index, commercial cooking facilities, meteorology, and population density. Cross validation of the model resulted in an R2 of 0.66 for summer and 0.77 for winter. Results showed higher total PAH concentrations in winter. Pyrogenic sources, such as fossil fuels and diesel exhaust, were the most dominant contributors to total PAHs. PAH sources varied by season, with a higher fossil fuel and wood burning contribution in winter. Spatial autocorrelation accounted for a substantial amount of the variance in total PAH concentrations for both winter (56%) and summer (19%). In summer, other key variables explaining the variance included meteorological factors (9%), population density (15%), and roadway length (21%). In winter, the variance was also explained by traffic density (16%). In this study, source characterization confirmed the dominance of traffic and other fossil fuel sources to total measured gas-phase PAH concentrations while a spatial exposure model identified key predictors of PAH concentrations. Gas-phase PAH source characterization and exposure estimation is of high utility to epidemiologists and policy makers interested in understanding the health impacts of gas-phase PAHs and strategies to reduce emissions.

  2. Bayesian adaptive survey protocols for resource management

    Science.gov (United States)

    Halstead, Brian J.; Wylie, Glenn D.; Coates, Peter S.; Casazza, Michael L.

    2011-01-01

    Transparency in resource management decisions requires a proper accounting of uncertainty at multiple stages of the decision-making process. As information becomes available, periodic review and updating of resource management protocols reduces uncertainty and improves management decisions. One of the most basic steps to mitigating anthropogenic effects on populations is determining if a population of a species occurs in an area that will be affected by human activity. Species are rarely detected with certainty, however, and falsely declaring a species absent can cause improper conservation decisions or even extirpation of populations. We propose a method to design survey protocols for imperfectly detected species that accounts for multiple sources of uncertainty in the detection process, is capable of quantitatively incorporating expert opinion into the decision-making process, allows periodic updates to the protocol, and permits resource managers to weigh the severity of consequences if the species is falsely declared absent. We developed our method using the giant gartersnake (Thamnophis gigas), a threatened species precinctive to the Central Valley of California, as a case study. Survey date was negatively related to the probability of detecting the giant gartersnake, and water temperature was positively related to the probability of detecting the giant gartersnake at a sampled location. Reporting sampling effort, timing and duration of surveys, and water temperatures would allow resource managers to evaluate the probability that the giant gartersnake occurs at sampled sites where it is not detected. This information would also allow periodic updates and quantitative evaluation of changes to the giant gartersnake survey protocol. Because it naturally allows multiple sources of information and is predicated upon the idea of updating information, Bayesian analysis is well-suited to solving the problem of developing efficient sampling protocols for species of
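
    The core Bayesian update behind such a protocol — how belief in presence decays with repeated non-detections — is compact enough to sketch directly. The prior and per-survey detection probability below are invented; in the study, detection probability would itself be modeled as a function of survey date and water temperature.

        def prob_present(prior, p_detect, n_surveys):
            """Posterior probability of presence after n surveys with no detection."""
            miss = (1.0 - p_detect) ** n_surveys
            return prior * miss / (prior * miss + (1.0 - prior))

        # How many non-detection surveys until presence is less than 5% likely,
        # starting from a 0.5 prior and a 0.3 per-survey detection probability?
        n = 0
        while prob_present(0.5, 0.3, n) >= 0.05:
            n += 1
        print(n)  # 9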

  3. Open Source Platform Application to Groundwater Characterization and Monitoring

    Science.gov (United States)

    Ntarlagiannis, D.; Day-Lewis, F. D.; Falzone, S.; Lane, J. W., Jr.; Slater, L. D.; Robinson, J.; Hammett, S.

    2017-12-01

    Groundwater characterization and monitoring commonly rely on the use of multiple point sensors and human labor. Due to the number of sensors, labor, and other resources needed, establishing and maintaining an adequate groundwater monitoring network can be both labor intensive and expensive. To improve and optimize the monitoring network design, open source software and hardware components could potentially provide the platform to control robust and efficient sensors, thereby reducing costs and labor. This work presents early attempts to create a groundwater monitoring system incorporating open-source software and hardware that will control the remote operation of multiple sensors along with data management and file transfer functions. The system is built around a Raspberry Pi 3 that controls multiple sensors in order to perform on-demand, continuous or 'smart decision' measurements while providing flexibility to incorporate additional sensors to meet the demands of different projects. The current objective of our technology is to monitor exchange of ionic tracers between mobile and immobile porosity using a combination of fluid and bulk electrical-conductivity measurements. To meet this objective, our configuration uses four sensors (pH, specific conductance, pressure, temperature) that can monitor the fluid electrical properties of interest and guide the bulk electrical measurement. This system highlights the potential of using open source software and hardware components for earth sciences applications. The versatility of the system makes it ideal for use in a large number of applications, and the low cost allows for high resolution (spatially and temporally) monitoring.
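
    A minimal sketch of such a polling loop, assuming hypothetical placeholder drivers for the four probes (a real deployment would read them over I2C or serial and add the remote file-transfer step):

        import csv, time

        def read_sensors():
            """Placeholder driver: returns dummy values for the four probes."""
            return {"pH": 7.1, "spec_cond_uS": 540.0,
                    "pressure_kPa": 101.4, "temp_C": 14.2}

        def monitor(path="groundwater_log.csv", interval_s=60, n_samples=3):
            with open(path, "w", newline="") as f:
                writer = None
                for _ in range(n_samples):
                    row = {"timestamp": time.time(), **read_sensors()}
                    if writer is None:
                        writer = csv.DictWriter(f, fieldnames=row.keys())
                        writer.writeheader()
                    writer.writerow(row)
                    # A 'smart decision' rule could trigger a bulk electrical
                    # measurement here when fluid conductivity changes rapidly.
                    time.sleep(interval_s)

        monitor(interval_s=1)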

  4. nDPI: Open-Source High-Speed Deep Packet Inspection

    DEFF Research Database (Denmark)

    Deri, Luca; Martinelli, Maurizio; Bujlow, Tomasz

    2014-01-01

    protocols became increasingly challenging, thus creating a motivation for creating tools and libraries for network protocol classification. This paper covers the design and implementation of nDPI, an open-source library for protocol classification using both packet header and payload. nDPI was extensively...

  5. Novel techniques for characterization of hydrocarbon emission sources in the Barnett Shale

    Science.gov (United States)

    Nathan, Brian Joseph

    Changes in ambient atmospheric hydrocarbon concentrations can have both short-term and long-term effects on the atmosphere and on human health. Thus, accurate characterization of emissions sources is critically important. The recent boom in shale gas production has led to an increase in hydrocarbon emissions from associated processes, though the exact extent is uncertain. As an original quantification technique, a model airplane equipped with a specially-designed, open-path methane sensor was flown multiple times over a natural gas compressor station in the Barnett Shale in October 2013. A linear optimization was introduced to a standard Gaussian plume model in an effort to determine the most probable emission rate coming from the station. This is shown to be a suitable approach given an ideal source with a single, central plume. Separately, an analysis was performed to characterize the nonmethane hydrocarbons in the Barnett during the same period. Starting with ambient hourly concentration measurements of forty-six hydrocarbon species, Lagrangian air parcel trajectories were implemented in a meteorological model to extend the resolution of these measurements and achieve domain-fillings of the region for the period of interest. A self-organizing map (a type of unsupervised classification) was then utilized to reduce the dimensionality of the total multivariate set of grids into characteristic one-dimensional signatures. By also introducing a self-organizing map classification of the contemporary wind measurements, the spatial hydrocarbon characterizations are analyzed for periods with similar wind conditions. The accuracy of the classification is verified through assessment of observed spatial mixing ratio enhancements of key species, through site-comparisons with a related long-term study, and through a random forest analysis (an ensemble learning method of supervised classification) to determine the most important species for defining key classes. The hydrocarbon
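
    Because the predicted concentration is linear in the emission rate, fitting a Gaussian plume to transect measurements reduces to a one-parameter least-squares problem with a closed-form optimum, q* = (G.C)/(G.G), where G is the unit-emission plume evaluated at the measurement points. A sketch under standard flat-terrain assumptions (not the study's exact model or dispersion coefficients):

        import numpy as np

        def unit_plume(y, z, u, sigma_y, sigma_z, h=0.0):
            """Concentration per unit emission (1 g/s) at crosswind offset y and
            height z, for wind speed u, with ground reflection."""
            return (np.exp(-y**2 / (2 * sigma_y**2))
                    * (np.exp(-(z - h)**2 / (2 * sigma_z**2))
                       + np.exp(-(z + h)**2 / (2 * sigma_z**2)))
                    / (2 * np.pi * u * sigma_y * sigma_z))

        def best_fit_emission(measured, y, z, u, sigma_y, sigma_z, h=0.0):
            """Closed-form linear optimum of the one-parameter fit."""
            g = unit_plume(y, z, u, sigma_y, sigma_z, h)
            return float(np.dot(g, measured) / np.dot(g, g))

        y = np.array([-20.0, 0.0, 20.0]); z = np.zeros(3)
        c = np.array([1.5e-4, 6.0e-4, 1.2e-4])        # g/m^3, invented
        print(best_fit_emission(c, y, z, u=3.0, sigma_y=15.0, sigma_z=8.0, h=2.0))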

  6. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities

    International Nuclear Information System (INIS)

    Coppersmith, Kevin J.; Salomone, Lawrence A.; Fuller, Chris W.; Glaser, Laura L.; Hanson, Kathryn L.; Hartleb, Ross D.; Lettis, William R.; Lindvall, Scott C.; McDuffie, Stephen M.; McGuire, Robin K.; Stirewalt, Gerry L.; Toro, Gabriel R.; Youngs, Robert R.; Slayter, David L.; Bozkurt, Serkan B.; Cumbest, Randolph J.; Falero, Valentina Montaldo; Perman, Roseanne C.; Shumway, Allison M.; Syms, Frank H.; Tuttle, Martitia P.

    2012-01-01

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model, (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  7. Enhanced DSR Routing Protocol for the Short Time Disconnected MANET

    Directory of Open Access Journals (Sweden)

    PAPAJ Ján

    2013-05-01

    Full Text Available Data delivery in a Mobile Ad-Hoc Network (MANET) is a very difficult task due to the sporadic connections between mobile nodes. For this reason, we introduce a new modified routing protocol that enables data delivery in cases where connections are disconnected. A key aspect of the protocol is the process of finding connections between source and destination nodes that can provide low end-to-end delay and better delivery performance in a disconnected MANET. The protocol applies the concepts of opportunistic routing of the routing packets in disconnected MANETs. In this paper we present a modification of the DSR routing protocol along with some simulation results.

  8. Emergency Protocol and Violence Prevention in a University Setting

    Science.gov (United States)

    Rust, Dylan

    2012-01-01

    This study analyzed the emergency protocol and violence prevention methods utilized at an American university. The four research questions were: (1) What are the sources of violence at the university? a. How has the university addressed these sources? (2) What constitutes an emergency in the eyes of the university? (3) How do emergency protocols…

  9. Laser-ablation-based ion source characterization and manipulation for laser-driven ion acceleration

    Science.gov (United States)

    Sommer, P.; Metzkes-Ng, J.; Brack, F.-E.; Cowan, T. E.; Kraft, S. D.; Obst, L.; Rehwald, M.; Schlenvoigt, H.-P.; Schramm, U.; Zeil, K.

    2018-05-01

    For laser-driven ion acceleration from thin foils (∼10 μm–100 nm) in the target normal sheath acceleration regime, the hydro-carbon contaminant layer at the target surface generally serves as the ion source and hence determines the accelerated ion species, i.e. mainly protons, carbon and oxygen ions. The specific characteristics of the source layer—thickness and relevant lateral extent—as well as its manipulation have both been investigated since the first experiments on laser-driven ion acceleration using a variety of techniques from direct source imaging to knife-edge or mesh imaging. In this publication, we present an experimental study in which laser ablation in two fluence regimes (low: F ∼ 0.6 J cm‑2, high: F ∼ 4 J cm‑2) was applied to characterize and manipulate the hydro-carbon source layer. The high-fluence ablation in combination with a timed laser pulse for particle acceleration allowed for an estimation of the relevant source layer thickness for proton acceleration. Moreover, from these data and independently from the low-fluence regime, the lateral extent of the ion source layer became accessible.

  10. Optimized exosome isolation protocol for cell culture supernatant and human plasma

    Directory of Open Access Journals (Sweden)

    Richard J. Lobb

    2015-07-01

    Full Text Available Extracellular vesicles represent a rich source of novel biomarkers in the diagnosis and prognosis of disease. However, there is currently limited information elucidating the most efficient methods for obtaining high yields of pure exosomes, a subset of extracellular vesicles, from cell culture supernatant and complex biological fluids such as plasma. To this end, we comprehensively characterize a variety of exosome isolation protocols for their efficiency, yield and purity of isolated exosomes. Repeated ultracentrifugation steps can reduce the quality of exosome preparations leading to lower exosome yield. We show that concentration of cell culture conditioned media using ultrafiltration devices results in increased vesicle isolation when compared to traditional ultracentrifugation protocols. However, our data on using conditioned media isolated from the Non-Small-Cell Lung Cancer (NSCLC SK-MES-1 cell line demonstrates that the choice of concentrating device can greatly impact the yield of isolated exosomes. We find that centrifuge-based concentrating methods are more appropriate than pressure-driven concentrating devices and allow the rapid isolation of exosomes from both NSCLC cell culture conditioned media and complex biological fluids. In fact to date, no protocol detailing exosome isolation utilizing current commercial methods from both cells and patient samples has been described. Utilizing tunable resistive pulse sensing and protein analysis, we provide a comparative analysis of 4 exosome isolation techniques, indicating their efficacy and preparation purity. Our results demonstrate that current precipitation protocols for the isolation of exosomes from cell culture conditioned media and plasma provide the least pure preparations of exosomes, whereas size exclusion isolation is comparable to density gradient purification of exosomes. We have identified current shortcomings in common extracellular vesicle isolation methods and provide a

  11. Quantitative characterization of urban sources of organic aerosol by high-resolution gas chromatography

    International Nuclear Information System (INIS)

    Hildemann, L.M.; Mazurek, M.A.; Cass, G.R.; Simoneit, B.R.T.

    1991-01-01

    Fine aerosol emissions have been collected from a variety of urban combustion sources, including an industrial boiler, a fireplace, automobiles, diesel trucks, gas-fired home appliances, and meat cooking operations, by use of a dilution sampling system. Other sampling techniques have been utilized to collect fine aerosol samples of paved road dust, brake wear, tire wear, cigarette smoke, tar pot emissions, and vegetative detritus. The organic matter contained in each of these samples has been analyzed via high-resolution gas chromatography. By use of a simple computational approach, a quantitative, 50-parameter characterization of the elutable fine organic aerosol emitted from each source type has been determined. The organic mass distribution fingerprints obtained by this approach are shown to differ significantly from each other for most of the source types tested, using hierarchical cluster analysis
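
    A minimal sketch of hierarchical cluster analysis on such source fingerprints, shortened here to four invented mass fractions per source, using SciPy:

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import pdist

        # Rows: sources; columns: fractions of the elutable organic mass
        # (invented stand-ins for the 50-parameter fingerprints).
        fingerprints = np.array([
            [0.40, 0.30, 0.20, 0.10],   # diesel-truck-like
            [0.38, 0.32, 0.18, 0.12],   # automobile-like
            [0.05, 0.10, 0.45, 0.40],   # meat-cooking-like
        ])
        z = linkage(pdist(fingerprints), method="average")
        # Two clusters: the vehicular fingerprints group together, e.g. [1 1 2]
        print(fcluster(z, t=2, criterion="maxclust"))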

  12. The EADC-ADNI Harmonized Protocol for manual hippocampal segmentation on magnetic resonance

    DEFF Research Database (Denmark)

    Frisoni, Giovanni B; Jack, Clifford R; Bocchetta, Martina

    2015-01-01

    BACKGROUND: An international Delphi panel has defined a harmonized protocol (HarP) for the manual segmentation of the hippocampus on MR. The aim of this study is to study the concurrent validity of the HarP toward local protocols, and its major sources of variance. METHODS: Fourteen tracers segme...

  13. Characterization of perovskite solar cells: Towards a reliable measurement protocol

    Directory of Open Access Journals (Sweden)

    Eugen Zimmermann

    2016-09-01

    Full Text Available Lead halide perovskite solar cells have shown a tremendous rise in power conversion efficiency, with reported record efficiencies of over 20%, making this material very promising as a low-cost alternative to conventional inorganic solar cells. However, due to a “hysteretic” behaviour of varying severity during current density-voltage measurements, which strongly depends on scan rate, device and measurement history, preparation method, device architecture, etc., commonly used solar cell measurements do not give reliable or even reproducible results. For commercialization, and for the possibility to compare results of different devices among different laboratories, it is necessary to establish a measurement protocol which gives reproducible results. Therefore, we compare device characteristics derived from standard current density-voltage measurements with stabilized values obtained from adaptive tracking of the maximum power point and the open circuit voltage, as well as characteristics extracted from time-resolved current density-voltage measurements. Our results provide insight into the challenges of a correct determination of device performance and propose a measurement protocol for a reliable characterisation which is easy to implement and has been tested on varying perovskite solar cells fabricated in different laboratories.
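
    The adaptive maximum-power-point tracking mentioned above can be sketched as a simple perturb-and-observe loop; the one-diode cell model and its parameters here are invented stand-ins, not measured perovskite values:

        import math

        def cell_current(v, i_ph=0.022, i_0=1e-12, n_vt=0.039):
            """Illustrative one-diode model (current in A at voltage v)."""
            return i_ph - i_0 * (math.exp(v / n_vt) - 1.0)

        def track_mpp(v=0.5, dv=0.005, steps=300):
            """Perturb-and-observe: step the voltage, keep the direction while
            the power rises, reverse when it falls."""
            p = v * cell_current(v)
            for _ in range(steps):
                v_trial = v + dv
                p_trial = v_trial * cell_current(v_trial)
                if p_trial < p:
                    dv = -dv
                else:
                    v, p = v_trial, p_trial
            return v, p

        v_mpp, p_mpp = track_mpp()
        print(f"stabilized MPP: {v_mpp:.3f} V, {p_mpp * 1e3:.2f} mW")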

  14. Online Chemical Characterization of Food-Cooking Organic Aerosols: Implications for Source Apportionment.

    Science.gov (United States)

    Reyes-Villegas, Ernesto; Bannan, Thomas; Le Breton, Michael; Mehra, Archit; Priestley, Michael; Percival, Carl; Coe, Hugh; Allan, James D

    2018-04-11

    Food-cooking organic aerosols (COA) are one of the primary sources of submicron particulate matter in urban environments. However, there are still many questions surrounding source apportionment related to instrumentation as well as semivolatile partitioning, because COA evolve rapidly in the ambient air, making source apportionment more complex. Online measurements of emissions from cooking different types of food were performed in a laboratory to characterize particles and gases. Aerosol mass spectrometer (AMS) measurements showed that the relative ionization efficiency for OA was higher (1.56-3.06) than the typical value of 1.4, indicating that the AMS over-estimates COA and suggesting that previous studies likely over-estimated COA concentrations. Food-cooking mass spectra were generated using AMS, and gas and particle food markers were identified with filter inlets for gases and aerosols-chemical ionization mass spectrometer (CIMS) measurements to be used in future food cooking-source apportionment studies. However, there is considerable variability in both gas and particle markers, and dilution plays an important role in the particle mass budget, showing the importance of using these markers with caution during receptor modeling. These findings can be used to better understand the chemical composition of COA, and they provide useful information for future source-apportionment studies.

  15. Methodology and main results of seismic source characterization for the PEGASOS Project, Switzerland

    International Nuclear Information System (INIS)

    Coppersmith, K. J.; Youngs, R. R.; Sprecher, Ch.

    2009-01-01

    Under the direction of the National Cooperative for the Disposal of Radioactive Waste (NAGRA), a probabilistic seismic hazard analysis was conducted for the Swiss nuclear power plant sites. The study has become known under the name 'PEGASOS Project'. This is the first of a group of papers in this volume that describes the seismic source characterization methodology and the main results of the project. A formal expert elicitation process was used, including dissemination of a comprehensive database, multiple workshops for identification and discussion of alternative models and interpretations, elicitation interviews, feedback to provide the experts with the implications of their preliminary assessments, and full documentation of the assessments. A number of innovative approaches to the seismic source characterization methodology were developed by four expert groups and implemented in the study. The identification of epistemic uncertainties and treatment using logic trees were important elements of the assessments. Relative to the assessment of the seismotectonic framework, the four expert teams identified similar main seismotectonic elements: the Rhine Graben, the Jura / Molasse regions, Helvetic and crystalline subdivisions of the Alps, and the southern Germany region. In defining seismic sources, the expert teams used a variety of approaches. These range from large regional source zones having spatially-smoothed seismicity to smaller local zones, to account for spatial variations in observed seismicity. All of the teams discussed the issue of identification of feature-specific seismic sources (i.e. individual mapped faults) as well as the potential reactivation of the boundary faults of the Permo-Carboniferous grabens. Other important seismic source definition elements are the specification of earthquake rupture dimensions and the earthquake depth distribution. Maximum earthquake magnitudes were assessed for each seismic source using approaches that consider the
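
    Numerically, the logic-tree treatment of epistemic uncertainty reduces to weighted sums over branches whose node weights each sum to one. A toy example for a maximum-magnitude node (weights and magnitudes invented):

        # (weight, Mmax) alternatives for one seismic source
        branches = [(0.2, 6.5), (0.5, 7.0), (0.3, 7.5)]

        mean_mmax = sum(w * m for w, m in branches)
        p_le_7 = sum(w for w, m in branches if m <= 7.0)
        print(mean_mmax, p_le_7)  # ~7.05 and 0.7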

  16. Two Phases Authentication Level (TPAL) protocol for nodes ...

    African Journals Online (AJOL)

    ... node may contain sensitive informations such as military data and monitoring data. ... LLN is a kind of Internet of Things (IoT) network with limited power source ... Current authentication Internet protocols cannot be adopted directly into LLN ...

  17. A slotted access control protocol for metropolitan WDM ring networks

    Science.gov (United States)

    Baziana, P. A.; Pountourakis, I. E.

    2009-03-01

    In this study we focus on the serious scalability problems that many access protocols for WDM ring networks introduce due to the use of a dedicated wavelength per access node for either transmission or reception. We propose an efficient slotted MAC protocol suitable for WDM ring metropolitan area networks. The proposed network architecture employs a separate wavelength for control information exchange prior to the data packet transmission. Each access node is equipped with a pair of tunable transceivers for data communication and a pair of fixed tuned transceivers for control information exchange. Also, each access node includes a set of fixed delay lines for synchronization purposes, to hold the data packets while the control information is processed. An efficient access algorithm is applied to avoid both data-wavelength and receiver collisions. In our protocol, each access node is capable of transmitting and receiving over any of the data wavelengths, addressing the scalability issues. Two different slot reuse schemes are assumed: the source and the destination stripping schemes. For both schemes, an evaluation of performance measures is provided via an analytic model. The analytical results are validated by a discrete event simulation model that uses Poisson traffic sources. Simulation results show that the proposed protocol achieves efficient bandwidth utilization, especially under high load. Also, comparative simulation results prove that our protocol achieves significant performance improvement as compared with other WDMA protocols which restrict transmission to a dedicated data wavelength. Finally, the performance measures are explored for diverse buffer sizes and numbers of access nodes and data wavelengths.

  18. Quantum key distribution with entangled photon sources

    International Nuclear Information System (INIS)

    Ma Xiongfeng; Fung, Chi-Hang Fred; Lo, H.-K.

    2007-01-01

    A parametric down-conversion (PDC) source can be used as either a triggered single-photon source or an entangled-photon source in quantum key distribution (QKD). The triggering PDC QKD has already been studied in the literature. On the other hand, a model and a post-processing protocol for the entanglement PDC QKD are still missing. We fill in this important gap by proposing such a model and a post-processing protocol for the entanglement PDC QKD. Although the PDC model is proposed to study the entanglement-based QKD, we emphasize that our generic model may also be useful for other non-QKD experiments involving a PDC source. Since an entangled PDC source is a basis-independent source, we apply Koashi and Preskill's security analysis to the entanglement PDC QKD. We also investigate the entanglement PDC QKD with two-way classical communications. We find that the recurrence scheme increases the key rate and the Gottesman-Lo protocol helps tolerate higher channel losses. By simulating a recent 144-km open-air PDC experiment, we compare three implementations: entanglement PDC QKD, triggering PDC QKD, and coherent-state QKD. The simulation result suggests that the entanglement PDC QKD can tolerate higher channel losses than the coherent-state QKD. The coherent-state QKD with decoy states is able to achieve highest key rate in the low- and medium-loss regions. By applying the Gottesman-Lo two-way post-processing protocol, the entanglement PDC QKD can tolerate up to 70 dB combined channel losses (35 dB for each channel) provided that the PDC source is placed in between Alice and Bob. After considering statistical fluctuations, the PDC setup can tolerate up to 53 dB channel losses.

  19. Strategies for lidar characterization of particulates from point and area sources

    Science.gov (United States)

    Wojcik, Michael D.; Moore, Kori D.; Martin, Randal S.; Hatfield, Jerry

    2010-10-01

    Use of ground based remote sensing technologies such as scanning lidar systems (light detection and ranging) has gained traction in characterizing ambient aerosols due to some key advantages, such as a wide area of regard (10 km2), fast response time, and high spatial resolution. Utah State University, in conjunction with the USDA-ARS, has developed a three-wavelength scanning lidar system called Aglite that has been successfully deployed to characterize particle motion, concentration, and size distribution at both point and diffuse area sources in agricultural and industrial settings. A suite of mass-based and size distribution point sensors are used to locally calibrate the lidar. Generating meaningful particle size distribution, mass concentration, and emission rate results based on lidar data is dependent on strategic onsite deployment of these point sensors with successful local meteorological measurements. Deployment strategies learned from field use of this entire measurement system over five years include the characterization of local meteorology and its predictability prior to deployment, the placement of point sensors to prevent contamination and overloading, the positioning of the lidar and beam plane to avoid hard target interferences, and the usefulness of photographic and written observational data.

  20. QoS-aware self-adaptation of communication protocols in a pervasive service middleware

    DEFF Research Database (Denmark)

    Zhang, Weishan; Hansen, Klaus Marius; Fernandes, João

    2010-01-01

    Pervasive computing is characterized by heterogeneous devices that usually have scarce resources requiring optimized usage. These devices may use different communication protocols which can be switched at runtime. As different communication protocols have different quality of service (QoS) properties, this motivates optimized self-adaptation of protocols for devices, e.g., considering power consumption and other QoS requirements, e.g. round trip time (RTT) for service invocations, throughput, and reliability. In this paper, we present an extensible approach for self-adaptation of communication protocols for pervasive web services, where protocols are designed as reusable connectors and our middleware infrastructure can hide the complexity of using different communication protocols to upper layers. We also propose to use Genetic Algorithms (GAs) to find optimized configurations at runtime...
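
    A toy version of GA-driven protocol adaptation — assigning one of several connectors to each device so as to minimize a weighted QoS cost — might look like the following; the protocol names and QoS figures are invented:

        import random

        # Connector candidates: (power mW, mean RTT ms, reliability) -- invented
        CONFIGS = {"http": (120, 45, 0.999), "coap": (40, 30, 0.98),
                   "udp":  (60, 20, 0.95),  "bt":   (25, 70, 0.97)}
        DEVICES = 5  # each device is assigned one connector

        def cost(genome):
            """Weighted, normalized QoS cost of an assignment (lower is better)."""
            total = 0.0
            for proto in genome:
                power, rtt, rel = CONFIGS[proto]
                total += 0.4 * power / 120 + 0.4 * rtt / 70 + 0.2 * (1 - rel) / 0.05
            return total

        def evolve(pop_size=20, generations=40, mut=0.1):
            protos = list(CONFIGS)
            pop = [[random.choice(protos) for _ in range(DEVICES)]
                   for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=cost)
                parents = pop[: pop_size // 2]          # keep the fitter half
                children = []
                while len(parents) + len(children) < pop_size:
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, DEVICES)  # one-point crossover
                    child = a[:cut] + b[cut:]
                    if random.random() < mut:           # point mutation
                        child[random.randrange(DEVICES)] = random.choice(protos)
                    children.append(child)
                pop = parents + children
            return min(pop, key=cost)

        print(evolve())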

  1. Performance Improvement Based Authentication Protocol for Intervessel Traffic Service Data Exchange Format Protocol Based on U-Navigation System in WoT Environment

    Directory of Open Access Journals (Sweden)

    Byunggil Lee

    2014-01-01

    Full Text Available The International Association of Lighthouse Authorities (IALA) is developing the standard intersystem VTS exchange format (IVEF) protocol for the exchange of navigation and vessel information between VTS systems and between VTS and vessels. VTS (vessel traffic system) is an important marine traffic monitoring system which is designed to improve the safety and efficiency of navigation and the protection of the marine environment. The demand for inter-VTS networking has increased with the realization of e-Navigation as shore-side collaboration for maritime safety, and the IVEF (inter-VTS data exchange format) for inter-VTS networks has become a hot research topic in VTS systems. Currently, the IVEF developed by IALA does not include any highly trusted certification technology for the connectors. The output of the standardization is distributed as IALA recommendation V-145, and the protocol is implemented as open source. The IVEF open-source code, however, is intended only to check the functions of the standard protocol; it is too slow to be used in the field and requires a large amount of memory. Moreover, vessel traffic information requires high security since it is closely protected by the countries involved. Therefore, this paper proposes an authentication protocol to increase the security of VTS systems using a main certification server and IVEF.

  2. Experimental development of a new protocol for extraction and characterization of microplastics in fish tissues: First observations in commercial species from Adriatic Sea.

    Science.gov (United States)

    Avio, Carlo Giacomo; Gorbi, Stefania; Regoli, Francesco

    2015-10-01

    The presence of microplastics in the marine environment has raised scientific interest during the last decade. Several organisms can ingest microplastics, with potentially adverse effects on the digestive tract, respiratory system and locomotory appendages. However, clear evidence of tissue accumulation and transfer of such microparticles in wild organisms is still lacking, partly due to technical difficulties in isolation and characterization protocols for biological samples. In this work, we compared the efficacy of some existing approaches and optimized a new protocol, achieving an extraction yield of microplastics from fish tissues between 78% and 98%, depending on the polymer size. FT-IR analyses confirmed that the extraction procedure did not affect the particles' characteristics. The method was further validated on the fish mullet, Mugil cephalus, exposed under laboratory conditions to polystyrene and polyethylene; the particles were isolated and quantified in stomach and liver, and their presence in the hepatic tissue was also confirmed by histological analyses. A preliminary characterization revealed the presence and distribution of microplastics in various fish species collected along the Adriatic Sea. FT-IR analyses indicated polyethylene as the predominant polymer (65%) in the stomach of fish. The overall results confirmed the newly developed method as a reliable approach to detect and quantify microplastics in the marine biota. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Characterization of urban aerosol sources in Debrecen, Hungary

    International Nuclear Information System (INIS)

    Kertesz, Zs.; Szoboszlai, T.; Angyal, A.; Dobos, E.; Borbely-Kiss, I.

    2009-01-01

    Aerosol pollution represents a significant health hazard in urban environments. Although Debrecen does not have a heavily stressed environment, the city is highly exposed to aerosol pollution. In order to evaluate the impact of aerosol particles on health, knowledge of the particle size distribution, chemical composition, and sources, and of their change in time and space, is needed. This work presents a source apportionment study of fine (aerodynamic diameter less than 2.5 μm) and coarse (aerodynamic diameter between 2.5 and 10 μm) particulate matter in Debrecen, following the evolution of the elemental components with hourly time resolution. The variation of the elemental concentrations, their periodicity, and their correlation with other elements and meteorological parameters were studied on samples collected in different seasons. Aerosol sources were determined using the positive matrix factorization (PMF) method. Aerosol samples were collected in the garden of ATOMKI with a 2-stage sequential streaker sampler manufactured by PIXE International, which collected the fine and coarse fractions separately with a time resolution of a few hours. Between October 2007 and January 2009, five 10-day sampling campaigns were carried out. The elemental composition was determined by Particle Induced X-ray Emission (PIXE) for Z ≥ 13, and the elemental carbon (BC) content was estimated with a smoke stain reflectometer. Source apportionment was carried out with the PMF receptor model developed for aerosol source characterization, provided by the US EPA. The mass of species apportioned to each factor, the percentage of species apportioned to factors, and the average factor contributions of the campaigns, of working days and weekends, and within the days were calculated. The PMF analysis resulted in seven factors in the fine and seven factors in the coarse mode. The main sources of atmospheric aerosol in the city of Debrecen were traffic
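
    PMF factors the samples-by-elements concentration matrix into non-negative source contributions and source profiles, weighting each observation by its measurement uncertainty. As a rough stand-in that shows the shape of the computation (without the uncertainty weighting), plain non-negative matrix factorization on toy data with scikit-learn:

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        true_profiles = np.array([[5.0, 1.0, 0.1],    # soil-like profile (invented)
                                  [0.2, 0.5, 4.0]])   # traffic-like profile (invented)
        contributions = rng.random((48, 2))           # 48 hourly samples
        X = contributions @ true_profiles + 0.01 * rng.random((48, 3))

        model = NMF(n_components=2, init="nndsvda", max_iter=500)
        G = model.fit_transform(X)   # factor contributions per sample
        F = model.components_        # factor (source) profiles
        print(np.round(F, 2))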

  4. A protocol for eliciting nonmaterial values through a cultural ecosystem services frame.

    Science.gov (United States)

    Gould, Rachelle K; Klain, Sarah C; Ardoin, Nicole M; Satterfield, Terre; Woodside, Ulalia; Hannahs, Neil; Daily, Gretchen C; Chan, Kai M

    2015-04-01

    Stakeholders' nonmaterial desires, needs, and values often critically influence the success of conservation projects. These considerations are challenging to articulate and characterize, resulting in their limited uptake in management and policy. We devised an interview protocol designed to enhance understanding of cultural ecosystem services (CES). The protocol begins with discussion of ecosystem-related activities (e.g., recreation, hunting) and management and then addresses CES, prompting for values encompassing concepts identified in the Millennium Ecosystem Assessment (2005) and explored in other CES research. We piloted the protocol in Hawaii and British Columbia. In each location, we interviewed 30 individuals from diverse backgrounds. We analyzed results from the 2 locations to determine the effectiveness of the interview protocol in elucidating nonmaterial values. The qualitative and spatial components of the protocol helped characterize cultural, social, and ethical values associated with ecosystems in multiple ways. Maps and situational, or vignette-like, questions helped respondents articulate difficult-to-discuss values. Open-ended prompts allowed respondents to express a diversity of ecosystem-related values and proved sufficiently flexible for interviewees to communicate values for which the protocol did not explicitly probe. Finally, the results suggest that certain values, those mentioned frequently throughout the interview, are particularly salient for particular populations. The protocol can provide efficient, contextual, and place-based data on the importance of particular ecosystem attributes for human well-being. Qualitative data are complementary to quantitative and spatial assessments in the comprehensive representation of people's values pertaining to ecosystems, and this protocol may assist in incorporating values frequently overlooked in decision making processes. © 2014 The Authors. Conservation Biology published by Wiley Periodicals

  5. Characterization of the radon source in North-Central Florida. Final report part 1 -- Final project report; Final report part 2 -- Technical report

    International Nuclear Information System (INIS)

    1997-01-01

    This report contains two separate parts: Characterization of the Radon Source in North-Central Florida (final report part 1 -- final project report); and Characterization of the Radon Source in North-Central Florida (technical report). The objectives were to characterize the radon 222 source in a region having a demonstrated elevated indoor radon potential and having geology, lithology, and climate that are different from those in other regions of the U.S. where radon is being studied. Radon availability and transport in this region were described. Approaches for predicting the radon potential of lands in this region were developed

  6. Comprehensive low-dose imaging of carotid and coronary arteries with a single-injection dual-source CT angiography protocol.

    Science.gov (United States)

    Tognolini, A; Arellano, C S; Marfori, W; Heidari, G; Sayre, J W; Krishnam, M S; Ruehm, S G

    2014-03-01

    To assess the feasibility of a fast single-bolus combined carotid and coronary computed tomography angiography (CTA) protocol in asymptomatic patients. Thirty-three consecutive patients (18 women and 15 men) with a median age of 61 ± 14 years (range 37-87 years) with known or suspected atherosclerotic disease were enrolled in this prospective study. A single breath-hold, single biphasic injection protocol (50 ml at 3 ml/s, 50 ml at 5 ml/s, 50 ml saline flush at 5 ml/s) was used for combined CTA imaging of the supra-aortic (SAA) and coronary arteries (CA) on a 64-slice dual-source CT (DSCT) machine. Helical CTA acquisition of the SAA was followed by prospective electrocardiography (ECG)-triggered coronary CTA. Subjective (four-point scale) image quality and objective signal-to-noise (SNR) and contrast-to-noise (CNR) measurements were performed. Vascular disease was graded on a four-point scale (grade 1: absent; grade 2: mild; grade 3: moderate; grade 4: severe). The radiation dose was recorded for each patient. The average enhancement and subjective quality scores of the SAA and CA segments were 396 HU/358 HU and 1.2 ± 0.3/1.72 ± 0.4, respectively. The SNR was 27.1 ± 1.7 in the SAA and 21.6 ± 1.6 in the CA (p < ...). For the SAA, ... had grade 2 and 1/33 (3%) had grade 3 disease. CA findings were as follows: 25/33 (76%) showed no disease, 6/33 (18%) had grade 2 and 2/33 (6%) had grade 3 disease. Five patients had disease in both districts. The average radiation dose for the combined CTA angiogram was 4.3 ± 0.6 mSv. A fast, low-dose combined DSCT angiography protocol appears technically feasible for imaging carotid and coronary atherosclerotic disease. Copyright © 2013 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.
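
    Earthquake rates in such source models are commonly parameterized with a truncated Gutenberg-Richter recurrence relation; a small sketch with invented regional a- and b-values:

        def gr_annual_rate(m, a=4.0, b=1.0, m_max=7.5):
            """Annual rate of events with magnitude >= m, truncated at m_max."""
            rate = 10.0 ** (a - b * m) - 10.0 ** (a - b * m_max)
            return max(rate, 0.0) if m <= m_max else 0.0

        for m in (5.0, 6.0, 7.0):
            print(m, f"{gr_annual_rate(m):.4f} events/yr")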

  8. Comparison of Diffusion MRI Acquisition Protocols for the In Vivo Characterization of the Mouse Spinal Cord: Variability Analysis and Application to an Amyotrophic Lateral Sclerosis Model.

    Science.gov (United States)

    Figini, Matteo; Scotti, Alessandro; Marcuzzo, Stefania; Bonanno, Silvia; Padelli, Francesco; Moreno-Manzano, Victoria; García-Verdugo, José Manuel; Bernasconi, Pia; Mantegazza, Renato; Bruzzone, Maria Grazia; Zucca, Ileana

    2016-01-01

    Diffusion-weighted Magnetic Resonance Imaging (dMRI) has relevant applications in the microstructural characterization of the spinal cord, especially in neurodegenerative diseases. Animal models have a pivotal role in the study of such diseases; however, in vivo spinal dMRI of small animals entails additional challenges that require a systematic investigation of acquisition parameters. The purpose of this study is to compare three acquisition protocols and identify the scanning parameters allowing a robust estimation of the main diffusion quantities and a good sensitivity to neurodegeneration in the mouse spinal cord. For all the protocols, the signal-to-noise and contrast-to-noise ratios and the mean value and variability of Diffusion Tensor metrics were evaluated in healthy controls. For the estimation of fractional anisotropy, less variability was provided by protocols with more diffusion directions; for the estimation of mean, axial and radial diffusivity, by protocols with fewer diffusion directions and higher diffusion weighting. Intermediate features (12 directions, b = 1200 s/mm2) provided the overall minimum inter- and intra-subject variability in most cases. In order to test the diagnostic sensitivity of the protocols, 7 G93A-SOD1 mice (model of amyotrophic lateral sclerosis) at 10 and 17 weeks of age were scanned and the derived diffusion parameters compared with those estimated in age-matched healthy animals. The protocols with an intermediate or high number of diffusion directions provided the best differentiation between the two groups at week 17, whereas only a few local significant differences were highlighted at week 10. According to our results, a dMRI protocol with an intermediate number of diffusion gradient directions and a relatively high diffusion weighting is optimal for spinal cord imaging. Further work is needed to confirm these results and for a finer tuning of acquisition parameters. Nevertheless, our findings could be important for the

  9. An electronic specimen collection protocol schema (eSCPS). Document architecture for specimen management and the exchange of specimen collection protocols between biobanking information systems.

    Science.gov (United States)

    Eminaga, O; Semjonow, A; Oezguer, E; Herden, J; Akbarov, I; Tok, A; Engelmann, U; Wille, S

    2014-01-01

    The integrity of collection protocols in biobanking is essential for a high-quality sample preparation process. However, there is not currently a well-defined universal method for integrating collection protocols in the biobanking information system (BIMS). Therefore, an electronic schema of the collection protocol that is based on Extensible Markup Language (XML) is required to maintain the integrity and enable the exchange of collection protocols. The development and implementation of an electronic specimen collection protocol schema (eSCPS) was performed at two institutions (Muenster and Cologne) in three stages. First, we analyzed the infrastructure that was already established at both the biorepository and the hospital information systems of these institutions and determined the requirements for the sufficient preparation of specimens and documentation. Second, we designed an eSCPS according to these requirements. Finally, a prospective study was conducted to implement and evaluate the novel schema in the current BIMS. We designed an eSCPS that provides all of the relevant information about collection protocols. Ten electronic collection protocols were generated using the supplementary Protocol Editor tool, and these protocols were successfully implemented in the existing BIMS. Moreover, an electronic list of collection protocols for the current studies being performed at each institution was included, new collection protocols were added, and the existing protocols were redesigned to be modifiable. The documentation time was significantly reduced after implementing the eSCPS (5 ± 2 min vs. 7 ± 3 min; p = 0.0002). The eSCPS improves the integrity and facilitates the exchange of specimen collection protocols in the existing open-source BIMS.
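
    Purely as an illustration of the document-architecture idea (the real element names are defined by the eSCPS itself), a much-simplified XML collection protocol can be built and round-tripped with Python's standard library:

        import xml.etree.ElementTree as ET

        # Hypothetical, simplified element names -- not the published schema.
        proto = ET.Element("collectionProtocol", id="CP-001", version="1.0")
        ET.SubElement(proto, "specimenType").text = "serum"
        step = ET.SubElement(proto, "step", order="1")
        ET.SubElement(step, "action").text = "centrifuge"
        ET.SubElement(step, "parameter", name="speed_g").text = "2000"
        ET.SubElement(step, "parameter", name="duration_min").text = "10"
        ET.SubElement(proto, "storage", temperature_C="-80")

        xml_bytes = ET.tostring(proto, encoding="utf-8")
        parsed = ET.fromstring(xml_bytes)     # any BIMS could re-parse it
        print(parsed.attrib["id"], parsed.find("specimenType").text)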

  10. Characterization of emissions sources in the California-Mexico Border Region during Cal-Mex 2010

    Science.gov (United States)

    Zavala, M. A.; Lei, W.; Li, G.; Bei, N.; Barrera, H.; Tejeda, D.; Molina, L. T.; Cal-Mex 2010 Emissions Team

    2010-12-01

    The California-Mexico border region provides an opportunity to evaluate the characteristics of the emission processes in rapidly expanding urban areas where intensive international trade and commerce activities occur. Intense anthropogenic activities and biomass burning, as well as biological and geological sources, significantly contribute to the high concentration levels of particulate matter (PM), polycyclic aromatic hydrocarbons (PAHs), nitrogen oxides (NOx), volatile organic compounds (VOCs), air toxics, and ozone observed in the California (US)-Baja California (Mexico) border region. The continued efforts by Mexico and the US to improve and update the emissions inventories in the sister cities of San Diego-Tijuana and Calexico-Mexicali have helped in understanding the emission processes in the border region. In addition, the recent Cal-Mex 2010 field campaign included a series of measurements aimed at characterizing the emissions from major sources in the California-Mexico border region. In this work we present our analyses of the data obtained during Cal-Mex 2010 for the characterization of the emission sources and their use in the evaluation of the recent emissions inventories for the Mexican cities of Tijuana and Mexicali. The developed emissions inventories will be implemented in concurrent air quality modeling efforts for understanding the physical and chemical transformations of air pollutants in the California-Mexico border region and their impacts.

  11. Characterizing the Performance of the Princeton Advanced Test Stand Ion Source

    Science.gov (United States)

    Stepanov, A.; Gilson, E. P.; Grisham, L.; Kaganovich, I.; Davidson, R. C.

    2012-10-01

    The Princeton Advanced Test Stand (PATS) is a compact experimental facility for studying the physics of intense beam-plasma interactions relevant to the Neutralized Drift Compression Experiment - II (NDCX-II). The PATS facility consists of a multicusp RF ion source mounted on a 2 m-long vacuum chamber with numerous ports for diagnostic access. Ar+ beams are extracted from the source plasma with three-electrode (accel-decel) extraction optics. The RF power and extraction voltage (30 - 100 kV) are pulsed to produce 100 μs duration beams at 0.5 Hz with excellent shot-to-shot repeatability. Diagnostics include Faraday cups, a double-slit emittance scanner, and scintillator imaging. This work reports measurements of beam parameters for a range of beam energies (30 - 50 keV) and currents to characterize the behavior of the ion source and extraction optics. Emittance scanner data are used to calculate the beam trace-space distribution and the corresponding transverse emittance. When the plasma density changes during a beam pulse, time-resolved emittance scanner data are taken to study the corresponding evolution of the beam trace-space distribution.
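
    As an illustration of the final step, the unnormalized RMS transverse emittance follows from the second moments of the trace-space distribution; a minimal sketch, assuming the scanner yields (optionally intensity-weighted) samples of position x and angle x':

      import numpy as np

      def rms_emittance(x, xp, w=None):
          """Unnormalized RMS emittance from trace-space samples.
          x: positions (m); xp: angles (rad); w: optional intensity weights."""
          x, xp = np.asarray(x, float), np.asarray(xp, float)
          dx = x - np.average(x, weights=w)
          dxp = xp - np.average(xp, weights=w)
          sxx = np.average(dx * dx, weights=w)
          spp = np.average(dxp * dxp, weights=w)
          sxp = np.average(dx * dxp, weights=w)
          return np.sqrt(sxx * spp - sxp**2)   # m*rad

      # illustrative correlated Gaussian beam (not PATS data)
      rng = np.random.default_rng(0)
      x = rng.normal(0.0, 1e-3, 10000)
      xp = 0.5 * x + rng.normal(0.0, 1e-4, 10000)
      print(rms_emittance(x, xp))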

  12. A new testing protocol for zirconia dental implants.

    Science.gov (United States)

    Sanon, Clarisse; Chevalier, Jérôme; Douillard, Thierry; Cattani-Lorente, Maria; Scherrer, Susanne S; Gremillard, Laurent

    2015-01-01

    Based on the current lack of standards concerning zirconia dental implants, we aim at developing a protocol to validate their functionality and safety prior to their clinical use. The protocol is designed to account for the specific brittle nature of ceramics and the specific behavior of zirconia in terms of phase transformation. Several types of zirconia dental implants with different surface textures (porous, alveolar, rough) were assessed. The implants were first characterized in their as-received state by Scanning Electron Microscopy (SEM), Focused Ion Beam (FIB), and X-Ray Diffraction (XRD). Fracture tests following a method adapted from ISO 14801 were conducted to evaluate their initial mechanical properties. Accelerated aging was performed on the implants, and the XRD monoclinic content was measured directly at their surface instead of using polished samples as in ISO 13356. The implants were then characterized again after aging. Implants with an alveolar surface presented large defects. The protocol shows that such defects compromise the long-term mechanical properties. Implants with a porous surface exhibited sufficient strength but a significant sensitivity to aging. Even though it was associated with micro-cracking clearly observed by FIB, aging did not decrease the mechanical strength of the implants. As each dental implant company has its own process, all zirconia implants may behave differently, even if the starting powder is the same. In particular, surface modifications have a large influence on strength and aging resistance, which is not taken into account by the current standards. Protocols adapted from this work could be useful. Copyright © 2014 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
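
    The record does not name the relation used to convert XRD intensities to a monoclinic content; a common choice for zirconia is the Garvie-Nicholson intensity ratio with the Toraya volume-fraction correction, sketched below under that assumption (the peak intensities are illustrative).

      def monoclinic_fraction(i_m_m111, i_m_111, i_t_101):
          """Garvie-Nicholson integrated-intensity ratio from the XRD peaks:
          monoclinic (-111) and (111), tetragonal (101)."""
          xm = (i_m_m111 + i_m_111) / (i_m_m111 + i_m_111 + i_t_101)
          # Toraya correction: intensity ratio -> monoclinic volume fraction
          vm = 1.311 * xm / (1.0 + 0.311 * xm)
          return xm, vm

      print(monoclinic_fraction(120.0, 80.0, 400.0))  # illustrative intensities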

  13. A Lightweight Protocol for Secure Video Streaming.

    Science.gov (United States)

    Venčkauskas, Algimantas; Morkevicius, Nerijus; Bagdonas, Kazimieras; Damaševičius, Robertas; Maskeliūnas, Rytis

    2018-05-14

    The Internet of Things (IoT) introduces many new challenges which cannot be solved using traditional cloud and host computing models. A new architecture known as fog computing is emerging to address these technological and security gaps. Traditional security paradigms focused on providing perimeter-based protections and client/server point-to-point protocols (e.g., Transport Layer Security (TLS)) are no longer the best choices for addressing new security challenges in fog computing end devices, where energy and computational resources are limited. In this paper, we present a lightweight secure streaming protocol for the fog computing "Fog Node-End Device" layer. This protocol is lightweight, connectionless, supports broadcast and multicast operations, and is able to provide data source authentication, data integrity, and confidentiality. The protocol is based on simple and energy-efficient cryptographic methods, such as Hash Message Authentication Codes (HMAC) and symmetrical ciphers, and uses modified User Datagram Protocol (UDP) packets to embed authentication data into streaming data. Data redundancy can be added to improve reliability in lossy networks. The experimental results summarized in this paper confirm that the proposed method efficiently uses energy and computational resources and at the same time provides security properties on par with the Datagram TLS (DTLS) standard.
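
    The packet layout is defined in the paper itself; the sketch below only illustrates the general approach of embedding an HMAC tag in a UDP payload with standard-library primitives. The field sizes, tag truncation and key handling here are assumptions, not the published format, and the symmetric encryption used for confidentiality is omitted.

      import hmac, hashlib, struct

      KEY = b"pre-shared-key"   # assumed shared between fog node and end device

      def make_packet(seq: int, payload: bytes) -> bytes:
          """Prepend a sequence number, append a truncated HMAC-SHA256 tag."""
          header = struct.pack("!I", seq)               # 4-byte sequence number
          tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:16]
          return header + payload + tag

      def verify_packet(packet: bytes):
          """Check the tag in constant time; return (seq, payload) if valid."""
          header, body, tag = packet[:4], packet[4:-16], packet[-16:]
          expected = hmac.new(KEY, header + body, hashlib.sha256).digest()[:16]
          if not hmac.compare_digest(tag, expected):
              raise ValueError("authentication failed")
          return struct.unpack("!I", header)[0], body

      pkt = make_packet(1, b"frame-data")
      print(verify_packet(pkt))
      # the datagram itself would be sent over plain UDP, e.g.
      # socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(pkt, addr)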

  14. Characterizing the hypersiliceous rocks of Belgium used in (pre-)history: a case study on sourcing sedimentary quartzites

    International Nuclear Information System (INIS)

    Veldeman, Isis; Baele, Jean-Marc; De Doncker, H W J A; Goemaere, Eric; Deceukelaire, Marleen; Dusar, Michiel

    2012-01-01

    Tracking raw material back to its extraction source is a crucial step for archaeologists when trying to deduce migration patterns and trade contacts in (pre-)history. Regarding stone artefacts, the main rock types encountered in the archaeological record of Belgium are hypersiliceous rocks. This is a newly introduced category of rock types comprising those rocks made of at least 90% silica. These are strongly silicified quartz sands or sedimentary quartzites, siliceous rocks of chemical and biochemical origin (e.g. flint), very pure metamorphic quartzites and siliceous volcanic rocks (e.g. obsidian). To be able to distinguish between different extraction sources, ongoing research was started to locate possible extraction sources of hypersiliceous rocks and to characterize rocks collected from these sources. Characterization of these hypersiliceous rocks is executed with the aid of optical polarizing microscopy, optical cold cathodoluminescence and scanning-electron microscopy combined with energy-dispersive x-ray spectrometry and with back-scatter electron imaging. In this paper, we focus on various sedimentary quartzites of Paleogene stratigraphical level. (paper)

  15. BEAMLINE-CONTROLLED STEERING OF SOURCE-POINT ANGLE AT THE ADVANCED PHOTON SOURCE

    Energy Technology Data Exchange (ETDEWEB)

    Emery, L.; Fystro, G.; Shang, H.; Smith, M.

    2017-06-25

    An EPICS-based steering software system has been implemented for beamline personnel to directly steer the angle of the synchrotron radiation sources at the Advanced Photon Source. A script running on a workstation monitors "start steering" beamline EPICS records and applies the steering given by the value of the "angle request" EPICS record. The new system makes the steering process much faster than before, although the older steering protocols can still be used. The robustness features of the original steering remain. Feedback messages are provided to the beamlines and the accelerator operators. Underpinning this new steering protocol is the recent refinement of the global orbit feedback process, whereby feedforward of dipole corrector set points and orbit set points is used to create a local steering bump in a rapid and seamless way.
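
    From the beamline side, interaction with such a system reduces to writing and reading a pair of EPICS process variables; a minimal pyepics sketch, with placeholder PV names rather than the actual APS record names:

      import time
      import epics   # pyepics

      # Placeholder PV names; the real APS record names differ.
      ANGLE_REQUEST = "BL:SteeringAngleRequest"
      START_STEERING = "BL:StartSteering"

      epics.caput(ANGLE_REQUEST, 2.0)   # requested source-point angle change
      epics.caput(START_STEERING, 1)    # ask the workstation script to steer

      # The script applies the orbit bump and clears the flag when done.
      while epics.caget(START_STEERING) == 1:
          time.sleep(0.5)
      print("steering complete")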

  16. Detector-device-independent quantum secret sharing with source flaws.

    Science.gov (United States)

    Yang, Xiuqing; Wei, Kejin; Ma, Haiqiang; Liu, Hongwei; Yin, Zhenqiang; Cao, Zhu; Wu, Lingan

    2018-04-10

    Measurement-device-independent entanglement witness (MDI-EW) plays an important role in detecting entanglement with an untrusted measurement device. We present a double blinding-attack on a quantum secret sharing (QSS) protocol based on the GHZ state. Using the MDI-EW method, we propose a QSS protocol secure against all detector side-channels. We allow source flaws in the practical QSS system, so that Charlie can securely distribute a key between the two agents Alice and Bob over long distances. Our protocol provides a condition on the extracted key rate for the secret to be secure against both an external eavesdropper and arbitrary dishonest participants. A tight bound for collective attacks can provide good bounds on practical QSS with source flaws. We then show through numerical simulations that, using a single-photon source, secure QSS over 136 km can be achieved.

  17. Characterization of DNAPL Source Zone Architecture and Prediction of Associated Plume Response: Progress and Perspectives

    Science.gov (United States)

    Abriola, L. M.; Pennell, K. D.; Ramsburg, C. A.; Miller, E. L.; Christ, J.; Capiro, N. L.; Mendoza-Sanchez, I.; Boroumand, A.; Ervin, R. E.; Walker, D. I.; Zhang, H.

    2012-12-01

    It is now widely recognized that the distribution of contaminant mass will control both the evolution of aqueous phase plumes and the effectiveness of many source zone remediation technologies at sites contaminated by dense nonaqueous phase liquids (DNAPLs). Advances in the management of sites containing DNAPL source zones, however, are currently hampered by the difficulty associated with characterizing subsurface DNAPL 'architecture'. This presentation provides an overview of recent research, integrating experimental and mathematical modeling studies, designed to improve our ability to characterize DNAPL distributions and predict associated plume response. Here emphasis is placed on estimation of the most information-rich DNAPL architecture metrics, through a combination of localized in situ tests and more readily available plume transect concentration observations. Estimated metrics will then serve as inputs to an upscaled screening model for prediction of long term plume response. Machine learning techniques were developed and refined to identify a variety of source zone metrics and associated confidence intervals through the processing of down gradient concentration data. Estimated metrics include the volumes and volume percentages of DNAPL in pools and ganglia, as well as their ratio (pool fraction). Multiphase flow and transport simulations provided training data for model development and assessment that are representative of field-scale DNAPL source zones and their evolving plumes. Here, a variety of release and site heterogeneity (sequential Gaussian permeability) conditions were investigated. Push-pull tracer tests were also explored as a means to provide localized in situ observations to refine these metric estimates. Here, two-dimensional aquifer cell experiments and mathematical modeling were used to quantify upscaled interphase mass transfer rates and the interplay between injection and extraction rates, local source zone architecture, and tracer

  18. Cluster Based Hierarchical Routing Protocol for Wireless Sensor Network

    OpenAIRE

    Rashed, Md. Golam; Kabir, M. Hasnat; Rahim, Muhammad Sajjadur; Ullah, Shaikh Enayet

    2012-01-01

    The efficient use of the energy source in a sensor node is a most desirable criterion for prolonging the lifetime of a wireless sensor network. In this paper, we propose a two-layer hierarchical routing protocol called the Cluster Based Hierarchical Routing Protocol (CBHRP). We introduce a new concept called a head-set, consisting of one active cluster head and some other associate cluster heads within a cluster. The head-set members are responsible for control and management of the network. Results show that t...
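
    The protocol details are in the paper (and the abstract is truncated above); the sketch below only illustrates the head-set idea, a small set of associates sharing the cluster-head role in rotation. Everything beyond that, including how clusters form, is an assumption.

      import random

      def elect_head_set(cluster_nodes, head_set_size):
          """Pick a head-set for one cluster; the rest are ordinary members."""
          return random.sample(cluster_nodes, head_set_size)

      def active_head(head_set, round_no):
          """Associates take turns as the active head, spreading energy cost."""
          return head_set[round_no % len(head_set)]

      nodes = [f"n{i}" for i in range(20)]
      head_set = elect_head_set(nodes, 4)
      for r in range(6):
          print(r, active_head(head_set, r))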

  19. Characterization of mechanical behavior of a porcine pulmonary artery strip using a randomized uniaxial stretch and stretch-rate protocol

    Directory of Open Access Journals (Sweden)

    Criscione John C

    2008-01-01

    Full Text Available Background: Much of the experimental work in soft tissue mechanics has been focused on fitting approximate relations for specific tissue types from aggregate data on multiple samples of the tissue. Such relations are needed for modeling applications and have reasonable predictability – especially given the natural variance in specimens. There is, however, much theoretical and experimental work to be done in determining constitutive behaviors for particular specimens and tissues. In so doing, it may be possible to exploit the natural variation in tissue ultrastructure – so as to relate ultrastructure composition to tissue behavior. Thus, this study focuses on an experimental method for determining constitutive behaviors and illustrates the method with the analysis of a porcine pulmonary artery strip. The method characterizes the elastic part of the response implicitly in terms of stretch, and the inelastic part in terms of short-term stretch history (i.e., stretch-rate) Ht2, longer-term stretch history Ht1, and time since the start of testing T. Methods: A uniaxial testing protocol with a random stretch and random stretch-rate was developed. The average stress at a particular stretch was chosen as the hyperelastic stress response, and deviation from the mean at this particular stretch was chosen as the inelastic deviation. Multivariable Linear Regression Analysis (MLRA) was utilized to verify whether Ht2, Ht1, and T are important factors for characterizing the inelastic deviation. For acquiring Ht2 and Ht1, an integral-function type of stretch history was employed, with time constants chosen from the relaxation spectrum of an identically sized strip from the same tissue with the same orientation. Finally, statistical models that characterize the inelasticity were developed at various nominal values of stretch, and their predictive capability was examined. Results: Inelastic deviation from hyperelasticity was high (31%) for low stretch and declined
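
    As an illustration of the regression step, a minimal numpy sketch of fitting the inelastic deviation against the three history variables; the data here are synthetic stand-ins for the measured quantities, with made-up coefficients:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 200
      Ht2 = rng.uniform(-1.0, 1.0, n)    # short-term (stretch-rate) history
      Ht1 = rng.uniform(-1.0, 1.0, n)    # longer-term stretch history
      T = rng.uniform(0.0, 3600.0, n)    # time since start of testing (s)
      # synthetic inelastic deviation with noise (illustrative coefficients)
      dev = 0.8 * Ht2 + 0.3 * Ht1 - 1.0e-4 * T + rng.normal(0.0, 0.05, n)

      # multivariable linear regression via least squares
      X = np.column_stack([np.ones(n), Ht2, Ht1, T])
      coef, *_ = np.linalg.lstsq(X, dev, rcond=None)
      print("intercept, b_Ht2, b_Ht1, b_T:", coef)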

  20. Implementation of two-party protocols in the noisy-storage model

    International Nuclear Information System (INIS)

    Wehner, Stephanie; Curty, Marcos; Schaffner, Christian; Lo, Hoi-Kwong

    2010-01-01

    The noisy-storage model allows the implementation of secure two-party protocols under the sole assumption that no large-scale reliable quantum storage is available to the cheating party. No quantum storage is thereby required for the honest parties. Examples of such protocols include bit commitment, oblivious transfer, and secure identification. Here, we provide a guideline for the practical implementation of such protocols. In particular, we analyze security in a practical setting where the honest parties themselves are unable to perform perfect operations and need to deal with practical problems such as errors during transmission and detector inefficiencies. We provide explicit security parameters for two different experimental setups using weak coherent and parametric down-conversion sources. In addition, we analyze a modification of the protocols based on decoy states.

  1. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986) and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work, and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  2. Characterization of a novel x-ray source: The MIRRORCLE-6X system

    International Nuclear Information System (INIS)

    Gambaccini, M.; Marziani, M.; Taibi, A.; Cardarelli, P.; Di Domenico, G.; Mastella, E.

    2012-01-01

    MIRRORCLE is a tabletop synchrotron light source being investigated within an EC funded project named LABSYNC. To evaluate the potential of this novel x-ray source for medical imaging applications, a set of measurements was performed at the MIRRORCLE factory in Japan. In particular, the aim of this work was to characterize the proposed compact x-ray source by determining different parameters, such as the intensity of the broad spectra produced with thin wire targets, the size of the focal spot and its distribution. The average electron-beam impact current on wire targets was calculated by several methods and it was demonstrated to be in the range 0.5-1.0 μA. By comparing these values with data available for conventional x-ray tubes, the current needed to achieve the same fluence as in a standard diagnostic examination was estimated to be about 0.1-0.5 mA. Finally, results from the measurements of the electron-beam impact cross-section on the target suggested that the diameter of the electron beam circulating in the storage ring is about 6 mm.

  3. Biotechnological Production of Docosahexaenoic Acid Using Aurantiochytrium limacinum: Carbon Sources Comparison And Growth Characterization

    Directory of Open Access Journals (Sweden)

    Sergi Abad

    2015-12-01

    Full Text Available Aurantiochytrium limacinum, a marine heterotrophic protist/microalga, has shown interesting yields of docosahexaenoic acid (DHA) when cultured with different carbon sources: glucose, and pure and crude glycerol. A complete study in a lab-scale fermenter allowed for the characterization and comparison of the growth kinetic parameters corresponding to each carbon source. Artificial Marine Medium (AMM) with glucose, pure glycerol and crude glycerol offered similar biomass yields. The net growth rates (0.10–0.12 h−1), biomass yields (0.7–0.8 g cells/g substrate) and product yields (0.14–0.15 g DHA/g cells), as well as DHA productivity, were similar using the three carbon sources. Viable potential applications to valorize crude glycerol are envisioned, avoiding an environmental problem due to the excess of this byproduct.

  4. Feasibility and radiation dose of high-pitch acquisition protocols in patients undergoing dual-source cardiac CT.

    Science.gov (United States)

    Sommer, Wieland H; Albrecht, Edda; Bamberg, Fabian; Schenzle, Jan C; Johnson, Thorsten R; Neumaier, Klement; Reiser, Maximilian F; Nikolaou, Konstantin

    2010-12-01

    The objective of this study was to compare image quality and radiation dose between high-pitch and established retrospectively and prospectively gated cardiac CT protocols using an Alderson-Rando phantom and a set of patients. An anthropomorphic Alderson-Rando phantom equipped with thermoluminescent detectors and a set of clinical patients underwent the following cardiac CT protocols: high-pitch acquisition (pitch 3.4), prospectively triggered acquisition, and retrospectively gated acquisition (pitch 0.2). For patients with sinus rhythm below 65 beats per minute (bpm), the high-pitch protocol was used, whereas for patients in sinus rhythm between 65 and 100 bpm, prospective triggering was used. Patients with irregular heart rates or heart rates of ≥ 100 bpm were examined using retrospectively gated acquisition. Evaluability of coronary artery segments was determined, and effective radiation dose was derived from the phantom study. In the phantom study, the effective radiation dose as determined with thermoluminescent detector (TLD) measurements was lowest for the high-pitch acquisition (1.21, 3.12, and 11.81 mSv for the high-pitch, prospectively triggered, and retrospectively gated acquisitions, respectively). There was a significant difference with respect to the percentage of motion-free coronary artery segments (99%, 87%, and 92% for high-pitch, prospectively triggered, and retrospectively gated acquisition, respectively), favoring the high-pitch protocol. High-pitch scans have the potential to reduce radiation dose by up to 61.2% and 89.8% compared with prospectively triggered and retrospectively gated scans, respectively. High-pitch protocols lead to excellent image quality when used in patients with stable heart rates below 65 bpm.

  5. Progress in characterizing the multidimensional color quality properties of white LED light sources

    Science.gov (United States)

    Teunissen, Kees; Hoelen, Christoph

    2016-03-01

    With the introduction of solid state light sources, the variety in emission spectra is almost unlimited. However, the set of standardized parameters to characterize a white LED light source, such as correlated color temperature (CCT) and CIE general color rendering index (Ra), is known to be limited and insufficient for describing perceived differences between light sources. Several characterization methods have been proposed over the past decades, but their contribution to perceived color quality has not always been validated. To gain more insight in the relevant characteristics of the emission spectra for specific applications, we have conducted a perception experiment to rate the attractiveness of three sets of objects, including fresh food, packaging materials and skin tones. The objects were illuminated with seven different combinations of Red, Green, Blue, Amber and White LEDs, all with the same CCT and illumination level, but with differences in Ra and color saturation. The results show that, in general, object attractiveness does not correlate well with Ra, but shows a positive correlation with saturation increase for two out of three applications. There is no clear relation between saturation and skin tone attractiveness, partly due to differences in preference between males and females. A relative gamut area index (Ga) represents the average change in saturation and a complementary color vector graphic shows the direction and magnitude of chromatic differences for the eight CIE-1974 test-color samples. Together with the CIE general color rendering index (Ra) they provide useful information for designing and optimizing application specific emission spectra.

  6. The Nagoya Protocol: Fragmentation or Consolidation?

    Directory of Open Access Journals (Sweden)

    Carmen Richerzhagen

    2014-02-01

    Full Text Available In October 2010, a protocol on access and benefit-sharing (ABS) of genetic resources was adopted: the so-called Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity. Before the adoption of the Nagoya Protocol, the governance architecture of ABS was already characterized by a multifaceted institutional environment. The use of genetic resources is confronted with many issues (conservation, research and development, intellectual property rights, food security, health issues, climate change) that are governed by different institutions and agreements. The Nagoya Protocol contributes to increased fragmentation. However, the question arises whether this new regulatory framework can help to advance the implementation of the ABS provisions of the Convention on Biological Diversity (CBD). This paper attempts to find an answer to that question by following three analytical steps. First, it analyzes the causes of change against the background of theories of institutional change. Second, it aims to assess the typology of the architecture in order to find out if this new set of rules will contribute to a more synergistic, cooperative or conflictive architecture of ABS governance. Third, the paper looks at the problem of "fit" and identifies criteria that can be used to assess the new ABS governance architecture with regard to its effectiveness.

  7. Optimal teleportation with a noisy source

    Energy Technology Data Exchange (ETDEWEB)

    Taketani, Bruno G. [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil); Physikalisches Institut der Albert-Ludwigs-Universitaet, Freiburg im Breisgau (Germany); Melo, Fernando de [Instituut voor Theoretische Fysica, Katholieke Universiteit Leuven, Leuven, Belgie (Belgium); Physikalisches Institut der Albert-Ludwigs-Universitaet, Freiburg im Breisgau (Germany); Matos Filho, Ruynet L. de [Instituto de Fisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil)

    2012-07-01

    In this work we discuss the role of decoherence in quantum information protocols. Particularly, we study quantum teleportation in the realistic situation where not only the transmission channel is imperfect, but also the preparation of the state to be teleported. The optimal protocol to be applied in this situation is found and we show that taking into account the input state noise leads to sizable gains in teleportation fidelity. It is then evident that sources of noise in the input state preparation must be taken into consideration in order to maximize the teleportation fidelity. The optimization of the protocol can be defined for specific experimental realizations and accessible operations, giving a trade-off between protocol quality and experiment complexity.

  8. An electromagnetic field measurement protocol for monitoring power lines

    International Nuclear Information System (INIS)

    Lubritto, C.; Iavazzo, A.; D'Onofrio, A.; Palmieri, A.; Sabbarese, C.; Terrasi, F.

    2002-01-01

    In actions aiming to prevent risks related to exposure to low frequency non-ionising electromagnetic radiations (ELF-NIR), the need always arises to perform measurements in order to assess the field levels existing in the sites considered. As a matter of fact, it often turns out to be difficult to predict the field levels with sufficient approximation on the basis of calculations, due to the wide variability of environmental conditions (e.g., the coexistence of several sources, ground and building conformation, etc.). The measurement procedures must follow a methodology that minimises interference with the measurement set-up as well as systematic and accidental errors. Risks for the operator and damage to the instrument should also be taken into account. One of the goals set for this research program was therefore the definition of a measurement protocol for the electromagnetic fields generated by low-frequency non-ionising radiation sources. In particular, sources such as power lines will be considered, in order to validate the protocol by means of in-field measurements.

  9. Characterizing the source of radon indoors

    International Nuclear Information System (INIS)

    Nero, A.V.; Nazaroff, W.W.

    1983-09-01

    Average indoor radon concentrations range over more than two orders of magnitude, largely because of variability in the rate at which radon enters from building materials, soil, and water supplies. Determining the indoor source magnitude requires knowledge of the generation of radon in source materials, its movement within materials by diffusion and convection, and the means of its entry into buildings. This paper reviews the state of understanding of indoor radon sources and transport. Our understanding of generation rates in and movement through building materials is relatively complete and indicates that, except for materials with unusually high radionuclide contents, these sources can account for observed indoor radon concentrations only at the low end of the range observed. Our understanding of how radon enters buildings from surrounding soil is poorer; however, recent experimental and theoretical studies suggest that soil may be the predominant source in many cases where the indoor radon concentration is high. 73 references, 3 figures, 1 table

  10. Characterization of selenium in ambient aerosols and primary emission sources.

    Science.gov (United States)

    De Santiago, Arlette; Longo, Amelia F; Ingall, Ellery D; Diaz, Julia M; King, Laura E; Lai, Barry; Weber, Rodney J; Russell, Armistead G; Oakes, Michelle

    2014-08-19

    Atmospheric selenium (Se) in aerosols was investigated using X-ray absorption near-edge structure (XANES) spectroscopy and X-ray fluorescence (XRF) microscopy. These techniques were used to determine the oxidation state and elemental associations of Se in common primary emission sources and ambient aerosols collected from the greater Atlanta area. In the majority of ambient aerosol and primary emission source samples, the spectroscopic patterns as well as the absence of elemental correlations suggest Se is in an elemental, organic, or oxide form. XRF microscopy revealed numerous Se-rich particles, or hotspots, accounting on average for ∼16% of the total Se in ambient aerosols. Hotspots contained primarily Se(0)/Se(-II). However, larger, bulk spectroscopic characterizations revealed Se(IV) as the dominant oxidation state in ambient aerosol, followed by Se(0)/Se(-II) and Se(VI). Se(IV) was the only observed oxidation state in gasoline, diesel, and coal fly ash, while biomass burning contained a combination of Se(0)/Se(-II) and Se(IV). Although the majority of Se in aerosols was in the most toxic form, the Se concentration is well below the California Environmental Protection Agency chronic exposure limit (∼20000 ng/m3).

  11. Control room envelope unfiltered air inleakage test protocols

    International Nuclear Information System (INIS)

    Lagus, P.L.; Grot, R.A.

    1997-01-01

    In 1983, the Advisory Committee on Reactor Safeguards (ACRS) recommended that the US NRC develop a control room HVAC performance testing protocol. To date no such protocol has been forthcoming. Beginning in mid-1994, an effort was funded by NRC under a Small Business Innovation Research (SBIR) grant to develop several simplified test protocols based on the principles of tracer gas testing in order to measure the total unfiltered inleakage entering a CRE during emergency mode operation of the control room ventilation system. These would allow accurate assessment of unfiltered air inleakage as required in SRP 6.4. The continuing lack of a standard protocol is unfortunate since one of the significant parameters required to calculate operator dose is the amount of unfiltered air inleakage into the control room. Often it is assumed that, if the Control Room Envelope (CRE) is maintained at +1/8 in. w.g. differential pressure relative to the surroundings, no significant unfiltered inleakage can occur; it is further assumed that inleakage due to door openings is the only source of unfiltered air. 23 refs., 13 figs., 2 tabs

  12. Control room envelope unfiltered air inleakage test protocols

    Energy Technology Data Exchange (ETDEWEB)

    Lagus, P.L. [Lagus Applied Technology, San Diego, CA (United States); Grot, R.A. [Lagus Applied Technology, Olney, MD (United States)

    1997-08-01

    In 1983, the Advisory Committee on Reactor Safeguards (ACRS) recommended that the US NRC develop a control room HVAC performance testing protocol. To date no such protocol has been forthcoming. Beginning in mid-1994, an effort was funded by NRC under a Small Business Innovation Research (SBIR) grant to develop several simplified test protocols based on the principles of tracer gas testing in order to measure the total unfiltered inleakage entering a CRE during emergency mode operation of the control room ventilation system. These would allow accurate assessment of unfiltered air inleakage as required in SRP 6.4. The continuing lack of a standard protocol is unfortunate since one of the significant parameters required to calculate operator dose is the amount of unfiltered air inleakage into the control room. Often it is assumed that, if the Control Room Envelope (CRE) is maintained at +1/8 in. w.g. differential pressure relative to the surroundings, no significant unfiltered inleakage can occur; it is further assumed that inleakage due to door openings is the only source of unfiltered air. 23 refs., 13 figs., 2 tabs.

  13. Survey of residential magnetic field sources

    International Nuclear Information System (INIS)

    Zaffanella, L.E.

    1992-09-01

    A nationwide survey of 1000 residences is underway to determine the sources and characteristics of magnetic fields in the home. This report describes the goals, statistical sampling methods, measurement protocols, and experiences in measuring the first 707 residences of the survey. Some preliminary analysis of the data is also included. Investigators designed a sampling method to randomly select the participating utilities as well as the residential customers for the study. As a first step in the project, 18 utility employee residences were chosen to validate a relatively simple measurement protocol against the results of a more complete and intrusive method. Using the less intrusive measurement protocol, researchers worked closely with representatives from EPRI member utilities to enter customer residences and measure the magnetic fields found there. Magnetic field data were collected in different locations inside and around the residences. Twenty-four-hour recorders were left in the homes overnight. Tests showed that the simplified measurement protocol is adequate for achieving the goals of the study. Methods were developed for analyzing the field caused by a residence's ground current, the lateral field profiles of field lines, and the field measured around the periphery of the residences. Methods of residential source detection were developed that allow identification of sources such as ground connections at an electrical subpanel, two-wire multiple-way switches, and underground or overhead net currents exiting the periphery of a residence.

  14. Characterization and optimization of laser-driven electron and photon sources in keV and MeV energy ranges

    International Nuclear Information System (INIS)

    Bonnet, Thomas

    2013-01-01

    This work takes place in the framework of the characterization and optimization of laser-driven electron and photon sources. With the goal of using these sources for nuclear physics experiments, we focused on two energy ranges: one around a few MeV and the other around a few tens of keV. The first part of this work is thus dedicated to the study of detectors routinely used for the characterization of laser-driven particle sources: Imaging Plates. A model has been developed and is fitted to experimental data. Response functions to electrons, photons, protons and alpha particles are established for SR, MS and TR Fuji Imaging Plates for energies ranging from a few keV to several MeV. The second part of this work presents a study of ultrashort and intense electron and photon sources produced in the interaction of a laser with a solid or liquid target. An experiment was conducted at the ELFIE facility at LULI, where beams of electrons and photons were accelerated up to several MeV. The energy and angular distributions of the electron and photon beams were characterized. The sources were optimized by varying the spatial extension of the plasma at both the front and the back end of the initial target position. In the optimal configuration of the laser-plasma coupling, more than 10^11 electrons were accelerated. In the case of the liquid target, a photon source in the tens-of-keV energy range was produced at a high repetition rate by the interaction of the AURORE laser at CELIA (10^16 W cm^-2) with a molten gallium target. It was shown that both the mean energy and the photon number can be increased by creating gallium jets at the surface of the liquid target with a pre-pulse. A physical interpretation supported by numerical simulations is proposed. (author)

  15. Comprehensive low-dose imaging of carotid and coronary arteries with a single-injection dual-source CT angiography protocol

    International Nuclear Information System (INIS)

    Tognolini, A.; Arellano, C.S.; Marfori, W.; Heidari, G.; Sayre, J.W.; Krishnam, M.S.; Ruehm, S.G.

    2014-01-01

    Aim: To assess the feasibility of a fast single-bolus combined carotid and coronary computed tomography angiography (CTA) protocol in asymptomatic patients. Materials and methods: Thirty-three consecutive patients (18 women and 15 men) with a median age of 61 ± 14 years (range 37–87 years) with known or suspected atherosclerotic disease were enrolled in this prospective study. A single breath-hold, single biphasic injection protocol (50 ml at 3 ml/s, 50 ml at 5 ml/s, 50 ml saline flush at 5 ml/s) was used for combined CTA imaging of the supra-aortic (SAA) and coronary arteries (CA) on a 64-slice dual-source CT (DSCT) machine. Helical CTA acquisition of the SAA was followed by prospective electrocardiography (ECG)-triggered coronary CTA. Subjective (four-point scale) image quality and objective signal-to-noise (SNR) and contrast-to-noise (CNR) measurements were performed. Vascular disease was graded on a four-point scale (grade 1: absent; grade 2: mild; grade 3: moderate; grade 4: severe). The radiation dose was recorded for each patient. Results: The average enhancement and subjective quality scores of the SAA and CA segments were 396 HU/358 HU and 1.2 ± 0.3/1.72 ± 0.4, respectively. The SNR was 27.1 ± 1.7 in the SAA and 21.6 ± 1.6 in the CA (p < 0.0001). The CNR was 18.1 ± 1.2 and 15.9 ± 1.8, respectively (p = 0.4). Four percent of SAA and 14% of CA segments produced non-diagnostic images (mostly due to peri-venous streak artefacts and small calibre, respectively). SAA findings were as follows: 26/33 (79%) patients showed no disease, 6/33 (18%) had grade 2 disease, and 1/33 (3%) had grade 3 disease. CA findings were as follows: 25/33 (76%) showed no disease, 6/33 (18%) had grade 2 disease, and 2/33 (6%) had grade 3 disease. Five patients had disease in both territories. The average radiation dose for the combined CTA angiogram was 4.3 ± 0.6 mSv. Conclusion: A fast, low-dose combined DSCT angiography protocol appears technically feasible for imaging carotid and

  16. Impossibility criterion for obtaining pure entangled states from mixed states by purifying protocols

    International Nuclear Information System (INIS)

    Chen Pingxing; Liang Linmei; Li Chengzu; Huang Mingqiu

    2002-01-01

    Purifying noisy entanglement is a protocol that can increase the entanglement of a mixed state (as a source) at the expense of the entanglement of others (such as an ancilla) by collective measurement. A protocol with which one can obtain a pure entangled state from a mixed state is defined as purifying mixed states. We address a basic question: can one obtain a pure entangled state from a mixed state? We give a necessary and sufficient condition for purifying a mixed state by suitable local operations and classical communication, and show that for a class of source states and ancilla states in arbitrary bipartite systems purifying mixed states is impossible by finite rounds of purifying protocols. For 2x2 systems, it is proved that arbitrary states cannot be purified by individual measurement. The possible application and meaning of the conclusion are discussed.

  17. Production and characterization of cellulolytic enzymes from Trichoderma reesei grown on various carbon sources

    Energy Technology Data Exchange (ETDEWEB)

    Warzywoda, Michel; Labre, Elisabeth; Pourquie, Jacques [Institut Francais du Petrole (IFP), 92 - Rueil-Malmaison (France)

    1992-01-01

    Ethanol production from lignocellulosics is considered, using a process in which biomass is first pretreated by steam explosion, yielding freely water-extractible pentoses and a cellulose-rich residue which can be further hydrolyzed by cellulases into glucose to be fermented into ethanol. The results reported here show that both the pentose extracts and the glucose-rich hydrolyzates can be used as carbon sources for cellulase production by Trichoderma reesei. When compared with lactose as the main carbon source, pentose extracts support lower but satisfactory protein production, characterized by an increase in hemicellulolytic activities that significantly improves the saccharifying potential of these enzyme preparations. (author).

  18. Geochemical characterization of critical dust source regions in the American West

    Science.gov (United States)

    Aarons, Sarah M.; Blakowski, Molly A.; Aciego, Sarah M.; Stevenson, Emily I.; Sims, Kenneth W. W.; Scott, Sean R.; Aarons, Charles

    2017-10-01

    The generation, transport, and deposition of mineral dust are detectable in paleoclimate records from land, ocean, and ice, providing valuable insight into earth surface conditions and cycles on a range of timescales. Dust deposited in marine and terrestrial ecosystems can provide critical nutrients to nutrient-limited ecosystems, and variations in dust provenance can indicate changes in dust production, sources and transport pathways as a function of climate variability and land use change. Thus, temporal changes in locations of dust source areas and transport pathways have implications for understanding interactions between mineral dust, global climate, and biogeochemical cycles. This work characterizes dust from areas in the American West known for dust events and/or affected by increasing human settlement and livestock grazing during the last 150 years. Dust generation and uplift from these dust source areas depends on climate and land use practices, and the relative contribution of dust has likely changed since the expansion of industrialization and agriculture into the western United States. We present elemental and isotopic analysis of 28 potential dust source area samples analyzed using Thermal Ionization Mass Spectrometry (TIMS) for 87Sr/86Sr and 143Nd/144Nd composition and Multi-Collector Inductively Coupled Plasma Mass Spectrometer (MC-ICPMS) for 176Hf/177Hf composition, and ICPMS for major and trace element concentrations. We find significant variability in the Sr, Nd, and Hf isotope compositions of potential source areas of dust throughout western North America, ranging from 87Sr/86Sr = 0.703699 to 0.740236, εNd = -26.6 to 2.4, and εHf = -21.7 to -0.1. We also report differences in the trace metal and phosphorus concentrations in the geologic provinces sampled. This research provides an important resource for the geochemical tracing of dust sources and sinks in western North America, and will aid in modeling the biogeochemical impacts of increased
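
    For reference, the εNd and εHf values quoted above are parts-per-ten-thousand deviations of the measured ratios from a chondritic (CHUR) reference; a minimal sketch for Nd, using one commonly adopted CHUR value (the sample ratio shown is back-calculated for illustration, not a measured datum):

      CHUR_ND = 0.512638   # a commonly used chondritic 143Nd/144Nd value

      def epsilon_nd(ratio_143_144):
          """Epsilon notation: deviation from CHUR in parts per 10,000."""
          return (ratio_143_144 / CHUR_ND - 1.0) * 1.0e4

      # back-calculated illustrative ratio, giving roughly the most
      # unradiogenic value reported above (about -26.6)
      print(epsilon_nd(0.511275))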

  19. Comparison and uncertainty evaluation of different calibration protocols and ionization chambers for low-energy surface brachytherapy dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026 (Spain); Vijande, J. [Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, Burjassot 46100, Spain and Instituto de Física Corpuscular (UV-CSIC), Paterna 46980 (Spain); García-Martínez, T. [Radiation Oncology Department, Hospital La Ribera, Alzira 46600 (Spain); Niatsetski, Y.; Nauta, G.; Schuurman, J. [Elekta Brachytherapy, Veenendaal 3905 TH (Netherlands); Ouhib, Z. [Radiation Oncology Department, Lynn Regional Cancer Center, Boca Raton Community Hospital, Boca Raton, Florida 33486 (United States); Ballester, F. [Department of Atomic, Molecular, and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Perez-Calatayud, J. [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026, Spain and Department of Radiotherapy, Clínica Benidorm, Benidorm 03501 (Spain)

    2015-08-15

    Purpose: A surface electronic brachytherapy (EBT) device is in fact an x-ray source collimated with specific applicators. Low-energy (<100 kVp) x-ray beam dosimetry faces several challenges that need to be addressed. A number of calibration protocols have been published for x-ray beam dosimetry. The media in which measurements are performed are the fundamental difference between them. The aim of this study was to evaluate the surface dose rate of a low-energy x-ray source with small field applicators using different calibration standards and different small-volume ionization chambers, comparing the values and uncertainties of each methodology. Methods: The surface dose rate of the EBT unit Esteya (Elekta Brachytherapy, The Netherlands), a 69.5 kVp x-ray source with applicators of 10, 15, 20, 25, and 30 mm diameter, was evaluated using the AAPM TG-61 (based on air kerma) and International Atomic Energy Agency (IAEA) TRS-398 (based on absorbed dose to water) dosimetry protocols for low-energy photon beams. A plane parallel T34013 ionization chamber (PTW Freiburg, Germany) calibrated in terms of both absorbed dose to water and air kerma was used to compare the two dosimetry protocols. Another PTW chamber of the same model was used to evaluate the reproducibility between these chambers. Measurements were also performed with two different Exradin A20 (Standard Imaging, Inc., Middleton, WI) chambers calibrated in terms of air kerma. Results: Differences between surface dose rates measured in air and in water using the T34013 chamber range from 1.6% to 3.3%. No field size dependence has been observed. Differences are below 3.7% when measurements with the A20 and the T34013 chambers calibrated in air are compared. Estimated uncertainty (with coverage factor k = 1) for the T34013 chamber calibrated in water is 2.2%–2.4%, whereas it increases to 2.5% and 2.7% for the A20 and T34013 chambers calibrated in air, respectively. The output factors, measured with the PTW chambers
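
    The fundamental difference between the two formalisms can be summarized by their dose equations (notation simplified from the respective reports): TG-61 starts from an air-kerma calibration coefficient, whereas TRS-398 starts from a dose-to-water calibration coefficient.

      \dot{D}_{w,z=0} = M \, N_K \, B_w \, P_{\mathrm{stem,air}}
          \left[ \left( \bar{\mu}_{en}/\rho \right)^{w}_{\mathrm{air}} \right]_{\mathrm{air}}
          \qquad \text{(TG-61, in-air method)}

      D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0}
          \qquad \text{(TRS-398)}

    Here M is the corrected chamber reading, N_K and N_{D,w,Q_0} are the air-kerma and dose-to-water calibration coefficients, B_w is the backscatter factor, P_stem,air the stem correction, and k_{Q,Q_0} the beam-quality correction factor.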

  20. Characterize kinematic rupture history of large earthquakes with Multiple Haskell sources

    Science.gov (United States)

    Jia, Z.; Zhan, Z.

    2017-12-01

    Earthquakes are often regarded as continuous rupture along a single fault, but the occurrence of complex large events involving multiple faults and dynamic triggering challenges this view. Such rupture complexities cause difficulties in existing finite fault inversion algorithms, because they rely on specific parameterizations and regularizations to obtain physically meaningful solutions. Furthermore, it is difficult to assess the reliability and uncertainty of the obtained rupture models. Here we develop a Multi-Haskell Source (MHS) method to estimate the rupture process of large earthquakes as a series of sub-events of varying location, timing and directivity. Each sub-event is characterized by a Haskell rupture model with uniform dislocation and constant unilateral rupture velocity. This flexible yet simple source parameterization allows us to constrain the first-order rupture complexity of large earthquakes robustly. Additionally, the relatively small number of parameters in the inverse problem yields improved uncertainty analysis based on Markov chain Monte Carlo sampling in a Bayesian framework. Synthetic tests and application of the MHS method to real earthquakes show that our method can capture the major features of large earthquake rupture processes, and provide information for more detailed rupture history analysis.
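
    A minimal sketch of the sub-event parameterization described above; the parameter set and the boxcar source time function are illustrative assumptions, not the paper's implementation (directivity terms are omitted):

      from dataclasses import dataclass
      import numpy as np

      @dataclass
      class HaskellSubEvent:
          """One sub-event: uniform dislocation, constant rupture velocity."""
          t0: float        # origin time (s)
          lat: float       # sub-event location
          lon: float
          moment: float    # scalar moment (N*m)
          duration: float  # rupture duration (s)

      def moment_rate(sub_events, t):
          """Summed source time function, each sub-event as a boxcar."""
          stf = np.zeros_like(t)
          for ev in sub_events:
              inside = (t >= ev.t0) & (t < ev.t0 + ev.duration)
              stf[inside] += ev.moment / ev.duration
          return stf

      t = np.linspace(0.0, 60.0, 601)
      events = [HaskellSubEvent(0.0, 0.0, 0.0, 1.0e20, 10.0),
                HaskellSubEvent(15.0, 0.2, 0.1, 5.0e19, 8.0)]
      print(moment_rate(events, t).max())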

  1. BioBlocks: Programming Protocols in Biology Made Easier.

    Science.gov (United States)

    Gupta, Vishal; Irimia, Jesús; Pau, Iván; Rodríguez-Patón, Alfonso

    2017-07-21

    The methods to execute biological experiments are evolving. Affordable fluid handling robots and on-demand biology enterprises are making automating entire experiments a reality. Automation offers the benefit of high-throughput experimentation, rapid prototyping, and improved reproducibility of results. However, learning to automate and codify experiments is a difficult task as it requires programming expertise. Here, we present a web-based visual development environment called BioBlocks for describing experimental protocols in biology. It is based on Google's Blockly and Scratch, and requires little or no experience in computer programming to automate the execution of experiments. The experiments can be specified, saved, modified, and shared between multiple users in an easy manner. BioBlocks is open-source and can be customized to execute protocols on local robotic platforms or remotely, that is, in the cloud. It aims to serve as a de facto open standard for programming protocols in Biology.

  2. Using generalizability theory to develop clinical assessment protocols.

    Science.gov (United States)

    Preuss, Richard A

    2013-04-01

    Clinical assessment protocols must produce data that are reliable, with a clinically attainable minimal detectable change (MDC). In a reliability study, generalizability theory has 2 advantages over classical test theory. These advantages provide information that allows assessment protocols to be adjusted to match individual patient profiles. First, generalizability theory allows the user to simultaneously consider multiple sources of measurement error variance (facets). Second, it allows the user to generalize the findings of the main study across the different study facets and to recalculate the reliability and MDC based on different combinations of facet conditions. In doing so, clinical assessment protocols can be chosen based on minimizing the number of measures that must be taken to achieve a realistic MDC, using repeated measures to minimize the MDC, or simply based on the combination that best allows the clinician to monitor an individual patient's progress over a specified period of time.
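
    As a concrete illustration of the second advantage, a D-study style recalculation: once variance components have been estimated, the SEM, MDC and dependability coefficient can be recomputed for any combination of facet conditions. The sketch below uses made-up variance components for a two-facet (rater by trial) absolute-error model.

      import math

      def d_study(var_patient, var_rater, var_residual, n_raters, n_trials):
          """Recompute SEM, MDC95 and dependability for a chosen design."""
          abs_err = var_rater / n_raters + var_residual / (n_raters * n_trials)
          sem = math.sqrt(abs_err)
          mdc95 = 1.96 * sem * math.sqrt(2)             # 95% minimal detectable change
          phi = var_patient / (var_patient + abs_err)   # dependability coefficient
          return sem, mdc95, phi

      # made-up variance components; vary the number of repeated trials
      for n_trials in (1, 2, 3, 5):
          print(n_trials, d_study(4.0, 0.5, 1.5, 1, n_trials))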

  3. Experimental Evaluation of Simulation Abstractions for Wireless Sensor Network MAC Protocols

    Directory of Open Access Journals (Sweden)

    G. P. Halkes

    2010-01-01

    Full Text Available The evaluation of MAC protocols for Wireless Sensor Networks (WSNs) is often performed through simulation. These simulations necessarily abstract away from reality in many ways. However, the impact of these abstractions on the results of the simulations has received only limited attention. Moreover, many studies on the accuracy of simulation have studied either the physical layer and per-link effects or routing protocol effects. To the best of our knowledge, no other work has focused on the study of simulation abstractions with respect to MAC protocol performance. In this paper, we present the results of an experimental study of two often-used abstractions in the simulation of WSN MAC protocols. We show that a simple SNR-based reception model can provide quite accurate results for metrics commonly used to evaluate MAC protocols. Furthermore, we provide an analysis of what the main sources of deviation are and thereby how the simulations can be improved to provide even better results.
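
    The SNR-based reception abstraction discussed above amounts to a threshold test on received power over noise; a minimal sketch, with illustrative log-distance path-loss parameters (not values from the paper):

      import math

      def snr_reception(tx_power_dbm, distance_m, noise_dbm=-100.0,
                        pl0_db=55.0, exponent=3.0, threshold_db=10.0):
          """Packet is received iff SNR exceeds a fixed threshold.
          Log-distance path loss: PL(d) = PL0 + 10*n*log10(d / 1 m)."""
          path_loss = pl0_db + 10.0 * exponent * math.log10(max(distance_m, 1.0))
          snr = (tx_power_dbm - path_loss) - noise_dbm
          return snr >= threshold_db

      print(snr_reception(0.0, 20.0))   # 0 dBm transmitter at 20 m -> False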

  4. Development and characterization of a laser-based hard x-ray source

    International Nuclear Information System (INIS)

    Tillman, C.

    1996-11-01

    A laser-produced plasma was generated by focusing 100 fs laser pulses, with an energy of 150 mJ, onto metal targets. The laser intensity was expected to reach 10^17 W/cm^2. Radiation was emitted from the created plasma, with photon energies up to the MeV region. The laser-based X-ray source was optimized, with the purpose of making it a realistic source of hard X-rays (>10 keV). Dedicated equipment was developed for efficient generation and utilization of the hard X-rays. The X-ray source was characterized with respect to its spatial extent and the X-ray yield. Measurements were made of the spectral distribution, by the use of single-photon-counting detectors in different geometries, crystal spectrometers and dose measurements in combination with absorption filters. Ablation of the target material in the laser-produced plasma was investigated. Imaging applications have been demonstrated, including ultrafast (picosecond) X-ray imaging, magnification imaging of up to x80, differential imaging in the spectral domain, and imaging of various biological and technical objects. The biological response to ultra-intense X-ray pulses was assessed in cell-culture exposures. The results indicate that the biological response from ultra-intense X-ray exposures is similar to the response with conventional X-ray tubes. 82 refs., 14 figs

  5. Development and characterization of a laser-based hard x-ray source

    Energy Technology Data Exchange (ETDEWEB)

    Tillman, C.

    1996-11-01

    A laser-produced plasma was generated by focusing 100 fs laser pulses, with an energy of 150 mJ, onto metal targets. The laser intensity was expected to reach 10^17 W/cm^2. Radiation was emitted from the created plasma, with photon energies up to the MeV region. The laser-based X-ray source was optimized, with the purpose of making it a realistic source of hard X-rays (>10 keV). Dedicated equipment was developed for efficient generation and utilization of the hard X-rays. The X-ray source was characterized with respect to its spatial extent and the X-ray yield. Measurements were made of the spectral distribution, by the use of single-photon-counting detectors in different geometries, crystal spectrometers and dose measurements in combination with absorption filters. Ablation of the target material in the laser-produced plasma was investigated. Imaging applications have been demonstrated, including ultrafast (picosecond) X-ray imaging, magnification imaging of up to x80, differential imaging in the spectral domain, and imaging of various biological and technical objects. The biological response to ultra-intense X-ray pulses was assessed in cell-culture exposures. The results indicate that the biological response from ultra-intense X-ray exposures is similar to the response with conventional X-ray tubes. 82 refs., 14 figs.

  6. OpenADR Open Source Toolkit: Developing Open Source Software for the Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    McParland, Charles

    2011-02-01

    Demand response (DR) is becoming an increasingly important part of power grid planning and operation. The advent of the Smart Grid, which mandates its use, further motivates the selection and development of suitable software protocols to enable DR functionality. The OpenADR protocol has been developed and is being standardized to serve this goal. We believe that the development of a distributable, open source implementation of OpenADR will benefit this effort and motivate critical evaluation of its capabilities by the wider community for providing wide-scale DR services.

  7. Evaluation and characterization of General Purpose Heat Source girth welds for the Cassini mission

    International Nuclear Information System (INIS)

    Lynch, C.M.; Moniz, P.F.; Reimus, M.A.H.

    1998-01-01

    General Purpose Heat Sources (GPHSs) are components of Radioisotope Thermoelectric Generators (RTGs), which provide electric power for deep space missions. Each GPHS consists of a 238 Pu oxide ceramic pellet encapsulated in a welded iridium alloy shell, which forms a protective barrier against the release of plutonia in the unlikely event of a launch-pad failure or reentry incident. Flaw detection in GPHS fueled-clad girth welds was paramount to ensuring this safety function and was accomplished using both destructive and non-destructive evaluation techniques. The first girth weld produced in each welding campaign was metallographically examined for flaws such as incomplete weld penetration, cracks, or porosity, which would render a GPHS unacceptable for flight applications. After an acceptable example weld was produced, the subsequently welded heat sources were evaluated non-destructively for flaws using ultrasonic immersion testing. Selected heat sources that failed ultrasonic testing were radiographed and/or destructively evaluated to further characterize and document anomalous indications. Metallography was also performed on impacted heat sources to determine the condition of the welds.

  8. Integrated spatiotemporal characterization of dust sources and outbreaks in Central and East Asia

    Science.gov (United States)

    Darmenova, Kremena T.

    The potential of atmospheric dust aerosols to modify the Earth's environment and climate has been recognized for some time. However, predicting the diverse impacts of dust presents several significant challenges. One is to quantify the complex spatial and temporal variability of the dust burden in the atmosphere. Another is to quantify the fraction of dust originating from human-made sources. This thesis focuses on the spatiotemporal characterization of sources and dust outbreaks in Central and East Asia by integrating ground-based data, satellite multisensor observations, and modeling. A new regional dust modeling system capable of operating over a span of scales was developed. The modeling system consists of a dust module, DuMo, which incorporates several dust emission schemes of different complexity, and the PSU/NCAR mesoscale model MM5, which offers a variety of physical parameterizations and flexible nesting capability. The modeling system was used to perform for the first time a comprehensive study of the timing, duration, and intensity of individual dust events in Central and East Asia. Determining the uncertainties caused by the choice of model physics, especially the boundary layer parameterization, and the dust production scheme was the focus of our study. Implications for assessments of the anthropogenic dust fraction in these regions were also addressed. Focusing on Spring 2001, an analysis of routine surface meteorological observations and satellite multi-sensor data was carried out in conjunction with modeling to determine the extent to which an integrated data set can be used to characterize the spatiotemporal distribution of dust plumes at a range of temporal scales, addressing the active dust sources in China and Mongolia, mid-range transport, and trans-Pacific long-range transport of dust outbreaks on a case-by-case basis. This work demonstrates that adequate and consistent characterization of individual dust events is central to establishing a reliable

  9. A practical two-way system of quantum key distribution with untrusted source

    International Nuclear Information System (INIS)

    Chen Ming-Juan; Liu Xiang

    2011-01-01

    The most severe problem of a two-way 'plug-and-play' (p and p) quantum key distribution system is that the source can be controlled by the eavesdropper. This kind of source is defined as an "untrusted source". This paper discusses the effects of the fluctuation of the internal transmittance on the final key generation rate and the transmission distance. The security of the standard BB84 protocol, the one-decoy state protocol, and the weak+vacuum decoy state protocol, with untrusted sources and fluctuating internal transmittance, is studied. It is shown that the one-decoy state protocol is sensitive to the statistical fluctuation, whereas the weak+vacuum decoy state protocol is only slightly affected by it. It is also shown that both the maximum secure transmission distance and the final key generation rate are reduced when the fluctuation of the transmittance of Alice's laboratory is considered. (general)
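
    For context, decoy-state analyses of this kind typically lower-bound the secure key generation rate with a GLLP-style formula (standard in the decoy-state literature; the paper's exact variant, including its untrusted-source corrections, may differ):

```latex
R \;\ge\; q \left\{ -\,Q_{\mu}\, f(E_{\mu})\, H_{2}(E_{\mu}) \;+\; Q_{1}\left[\,1 - H_{2}(e_{1})\,\right] \right\}
```

    where Q_mu and E_mu are the measured gain and quantum bit error rate of the signal states, Q_1 and e_1 are the single-photon gain and error rate bounded from the decoy statistics, f(E_mu) is the error-correction inefficiency, and H_2 is the binary entropy function. Fluctuations of an untrusted source and of the internal transmittance propagate to the key rate and maximum distance through the bounds on Q_1 and e_1.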

  10. Simple Models for the Performance Evaluation of a Class of Two-Hop Relay Protocols

    NARCIS (Netherlands)

    Al Hanbali, Ahmad; Kherani, Arzad A.; Nain, Philippe

    2007-01-01

    We evaluate the performance of a class of two-hop relay protocols for mobile ad hoc networks. The interest is on the multicopy two-hop relay (MTR) protocol, where the source may generate multiple copies of a packet and use relay nodes to deliver the packet (or a copy) to its destination, and on the
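
    The MTR mechanism lends itself to a compact Monte Carlo sketch under the common modeling assumption of i.i.d. exponential inter-meeting times between node pairs (an illustrative simulation, not the paper's analytical model):

```python
import random

# Monte Carlo sketch of multicopy two-hop relay (MTR) delivery delay.
# Assumption (common in this literature): pairwise inter-meeting times are
# i.i.d. exponential with rate lam; parameters are illustrative.

def mtr_delay(n_copies, lam=1.0):
    """Delay until the destination is reached, when the source hands a copy to
    each of the first n_copies relays it meets (two-hop routing only)."""
    best = random.expovariate(lam)                     # source meets destination directly
    t = 0.0
    for _ in range(n_copies):
        t += random.expovariate(lam)                   # source meets the next relay
        best = min(best, t + random.expovariate(lam))  # that relay later meets destination
    return best

for m in (1, 2, 4, 8):
    mean = sum(mtr_delay(m) for _ in range(20_000)) / 20_000
    print(f"{m} copies: mean delivery delay ≈ {mean:.2f} (units of 1/lam)")
```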

  11. Simple models for the performance evaluation of a class of two-hop relay protocols

    NARCIS (Netherlands)

    Al Hanbali, A.; Kherani, A.A.; Nain, P.; Akyildiz, I.F.; Sivakumar, R.; Ekici, E.; Cavalcante de Oliveira, J.; McNair, J.

    2007-01-01

    We evaluate the performance of a class of two-hop relay protocols for mobile ad hoc networks. The interest is on the multicopy two-hop relay (MTR) protocol, where the source may generate multiple copies of a packet and use relay nodes to deliver the packet (or a copy) to its destination, and on the

  12. Automated Planning Enables Complex Protocols on Liquid-Handling Robots.

    Science.gov (United States)

    Whitehead, Ellis; Rudolf, Fabian; Kaltenbach, Hans-Michael; Stelling, Jörg

    2018-03-16

    Robotic automation in synthetic biology is especially relevant for liquid handling to facilitate complex experiments. However, research tasks that are not highly standardized are still rarely automated in practice. Two main reasons for this are the substantial investments required to translate molecular biological protocols into robot programs, and the fact that the resulting programs are often too specific to be easily reused and shared. Recent developments of standardized protocols and dedicated programming languages for liquid-handling operations addressed some aspects of ease-of-use and portability of protocols. However, either they focus on simplicity, at the expense of enabling complex protocols, or they entail detailed programming, with corresponding skills and efforts required from the users. To reconcile these trade-offs, we developed Roboliq, a software system that uses artificial intelligence (AI) methods to integrate (i) generic formal, yet intuitive, protocol descriptions, (ii) complete, but usually hidden, programming capabilities, and (iii) user-system interactions to automatically generate executable, optimized robot programs. Roboliq also enables high-level specifications of complex tasks with conditional execution. To demonstrate the system's benefits for experiments that are difficult to perform manually because of their complexity, duration, or time-critical nature, we present three proof-of-principle applications for the reproducible, quantitative characterization of GFP variants.

  13. Final report of the IAEA advisory group meeting on accelerator-based nuclear analytical techniques for characterization and source identification of aerosol particles

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    The field of aerosol characterization and source identification covers a wide range of scientific and technical activities in many institutions, in both developed and developing countries. This field includes research and applications on urban air pollution, source apportionment of suspended particulate matter, radioactive aerosol particles, organic compounds carried on particulate matter, elemental characterization of particles, and other areas. The subject of this AGM focused on the use of accelerator-based nuclear analytical techniques for determination of elemental composition of particles (by either bulk or single particle analysis) and the use of accumulated knowledge for source identification.

  14. Final report of the IAEA advisory group meeting on accelerator-based nuclear analytical techniques for characterization and source identification of aerosol particles

    International Nuclear Information System (INIS)

    1995-01-01

    The field of aerosol characterization and source identification covers a wide range of scientific and technical activities in many institutions, in both developed and developing countries. This field includes research and applications on urban air pollution, source apportionment of suspended particulate matter, radioactive aerosol particles, organic compounds carried on particulate matter, elemental characterization of particles, and other areas. The subject of this AGM focused on the use of accelerator-based nuclear analytical techniques for determination of elemental composition of particles (by either bulk or single particle analysis) and the use of accumulated knowledge for source identification

  15. Characterization of a Distributed Plasma Ionization Source (DPIS) for Ion Mobility Spectrometry and Mass Spectrometry

    International Nuclear Information System (INIS)

    Waltman, Melanie J.; Dwivedi, Prabha; Hill, Herbert; Blanchard, William C.; Ewing, Robert G.

    2008-01-01

    A recently developed atmospheric pressure ionization source, a distributed plasma ionization source (DPIS), was characterized and compared to commonly used atmospheric pressure ionization sources with both mass spectrometry and ion mobility spectrometry. The source consisted of two electrodes of different sizes separated by a thin dielectric. Application of a high RF voltage across the electrodes generated plasma in air, yielding both positive and negative ions depending on the polarity of the applied potential. These reactant ions subsequently ionized the analyte vapors. The reactant ions generated were similar to those created in a conventional point-to-plane corona discharge ion source. The positive reactant ions generated by the source were mass identified as being solvated protons of general formula (H2O)nH+ with (H2O)2H+ as the most abundant reactant ion. The negative reactant ions produced were mass identified primarily as CO3-, NO3-, NO2-, O3- and O2- of various relative intensities. The predominant ion and relative ion ratios varied depending upon source construction and supporting gas flow rates. A few compounds including drugs, explosives and environmental pollutants were selected to evaluate the new ionization source. The source was operated continuously for several months and, although deterioration was observed visually, the source continued to produce ions at a rate similar to that of the initial conditions. The results indicated that the DPIS may have a longer operating life than a conventional corona discharge.

  16. Desensitization protocols and their outcome.

    Science.gov (United States)

    Marfo, Kwaku; Lu, Amy; Ling, Min; Akalin, Enver

    2011-04-01

    In the last decade, transplantation across previously incompatible barriers has increasingly become popular because of organ donor shortage, availability of better methods of detecting and characterizing anti-HLA antibodies, ease of diagnosis, better understanding of antibody-mediated rejection, and the availability of effective regimens. This review summarizes all manuscripts published since the first publication in 2000 on desensitized patients and discusses clinical outcomes including acute and chronic antibody-mediated rejection rate, the new agents available, kidney paired exchange programs, and the future directions in sensitized patients. There were 21 studies published between 2000 and 2010, involving 725 patients with donor-specific anti-HLA antibodies (DSAs) who underwent kidney transplantation with different desensitization protocols. All studies were single center and retrospective. The patient and graft survival were 95% and 86%, respectively, at a 2-year median follow-up. Despite acceptable short-term patient and graft survivals, acute rejection rate was 36% and acute antibody-mediated rejection rate was 28%, which is significantly higher than in nonsensitized patients. Recent studies with longer follow-up of those patients raised concerns about long-term success of desensitization protocols. The studies utilizing protocol biopsies in desensitized patients also reported higher subclinical and chronic antibody-mediated rejection. An association between the strength of DSAs determined by median fluorescence intensity values of Luminex single-antigen beads and risk of rejection was observed. Two new agents, bortezomib, a proteasome inhibitor, and eculizumab, an anti-complement C5 antibody, were recently introduced to desensitization protocols. An alternative intervention is kidney paired exchange, which should be considered first for sensitized patients. © 2011 by the American Society of Nephrology

  17. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    Science.gov (United States)

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
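
    The Rayleigh integral underlying the holography formulation is, in its standard first-kind form for a planar source in a rigid baffle (conventions vary; the paper's formulation may differ in sign or time convention):

```latex
p(\mathbf{r}) \;=\; -\,\frac{i\,\omega\,\rho_{0}}{2\pi} \int_{S} v_{n}(\mathbf{r}')\, \frac{e^{\,ik\,|\mathbf{r}-\mathbf{r}'|}}{|\mathbf{r}-\mathbf{r}'|}\; \mathrm{d}S'
```

    where v_n is the normal velocity on the source plane S, rho_0 the density of the medium, and k the wavenumber. Holographic characterization amounts to inverting this relation from pressure measurements on a hologram surface back to v_n, after which the field can be forward-projected anywhere.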

  18. Characterization of a 137Cs standard source for calibration purposes at CRCN-NE

    International Nuclear Information System (INIS)

    Oliveira, Mercia L.; Santos, Marcus A.P. dos; Benvides, Clayton A.

    2008-01-01

    Radiation protection monitoring instruments should be calibrated by accredited calibration laboratories. To offer calibration services, a laboratory must fulfil all requirements established by the national regulatory agency. The Calibration Service of the Centro Regional de Ciencias Nucleares (CRCN-NE), Comissao Nacional de Energia Nuclear, Recife, Brazil, is working to achieve this accreditation. In the present work, a 137 Cs standard source was characterized following national and international recommendations, and the results are presented. This source is a commercially available single-source irradiator, model 28-8A, manufactured by J.L. Shepherd and Associates, with an initial activity of 444 GBq (05/13/03). To provide the different air kerma rates required for the calibration of portable radiation monitors, this irradiator has a set of four lead attenuators of different thicknesses, providing nominal attenuation factors of 2, 4, 10 and 100. The tests performed included: size and uniformity of the standard radiation field at the calibration reference position, variation of the air kerma rate for the different lead attenuators, determination of the attenuation factor for each lead attenuator configuration, and determination of the radiation scattering at the calibration reference position. The results demonstrate the usefulness of the 137 Cs standard source for the calibration of radiation protection monitoring detectors. (author)
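
    Because calibration fields from a sealed source decay with the nuclide, the reference air kerma rate must be decay-corrected at each use; a minimal sketch follows (the ~30.1 y half-life is the commonly tabulated approximate value; the activity and attenuation factors are the nominal figures quoted above):

```python
import math
from datetime import date

# Decay correction for the 137Cs irradiator described above.
T_HALF_Y = 30.1                 # 137Cs half-life in years (approximate tabulated value)
A0_GBQ = 444.0                  # nominal initial activity
REF_DATE = date(2003, 5, 13)    # reference date of the initial activity

def activity_gbq(on: date) -> float:
    """Activity on a given date, corrected for radioactive decay."""
    dt_y = (on - REF_DATE).days / 365.25
    return A0_GBQ * math.exp(-math.log(2.0) * dt_y / T_HALF_Y)

# The air kerma rate at the reference position scales with activity and is
# reduced by the nominal factor of the selected lead attenuator.
a = activity_gbq(date(2008, 1, 1))
print(f"activity on 2008-01-01: {a:.0f} GBq")
for factor in (2, 4, 10, 100):
    print(f"  attenuator x{factor}: field scaled by {a / A0_GBQ / factor:.4f} vs. the bare 2003 field")
```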

  19. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields.

    Science.gov (United States)

    Sapozhnikov, Oleg A; Tsysar, Sergey A; Khokhlova, Vera A; Kreider, Wayne

    2015-09-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors.

  20. Chemical and isotopic methods for characterization of pollutant sources in rain water

    International Nuclear Information System (INIS)

    Verma, M.P.

    1996-01-01

    The formation of acid rain is related to industrial pollution. An isotopic and chemical study of the spatial and temporal distribution of the acidity in rain gives information about the acidity source. The predominant species in acid rain are nitrates and sulfates. Rain monitoring requires the determination of anionic species such as HCO3^-, Cl^-, SO4^2- and NO3^-, as well as pH. The cations Na^+, K^+, Ca^2+ and Mg^2+ were also analyzed to verify the quality of the analysis. All of these species can be determined with sufficient accuracy by modern equipment such as liquid chromatographs and atomic absorption spectrometers, except HCO3^-, whose concentration is determined by traditional methods such as acid-base titration. This work presents the fundamental concepts of the titration method for samples with low alkalinity (carbonic species), i.e. rain water. A general overview of isotopic methods for characterizing the origin of pollutant sources in rain is also presented. (Author)
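
    The analytical quality check mentioned above is conventionally the ion (charge) balance: summed cation and anion equivalents should agree within a few percent. A minimal sketch follows (equivalent weights are standard chemistry; the acceptance tolerance is a typical convention, not from the paper):

```python
# Charge-balance QA check for a rain-water analysis (concentrations in mg/L).
# Equivalent weight = molar mass / |charge|; values are standard chemistry.

EQ_WEIGHT = {  # grams per equivalent
    "Na+": 22.99, "K+": 39.10, "Ca2+": 40.08 / 2, "Mg2+": 24.31 / 2,
    "Cl-": 35.45, "SO4^2-": 96.06 / 2, "NO3-": 62.00, "HCO3-": 61.02,
}
CATIONS = {"Na+", "K+", "Ca2+", "Mg2+"}

def ion_balance_error_pct(sample_mg_per_l: dict) -> float:
    """Charge-balance error in percent: 100 * (cations - anions) / (cations + anions)."""
    meq = {ion: c / EQ_WEIGHT[ion] for ion, c in sample_mg_per_l.items()}
    cat = sum(v for k, v in meq.items() if k in CATIONS)
    an = sum(v for k, v in meq.items() if k not in CATIONS)
    return 100.0 * (cat - an) / (cat + an)

sample = {"Na+": 2.3, "K+": 0.4, "Ca2+": 1.8, "Mg2+": 0.5,
          "Cl-": 3.1, "SO4^2-": 4.2, "NO3-": 2.0, "HCO3-": 1.5}
err = ion_balance_error_pct(sample)
print(f"charge-balance error: {err:+.1f}% ({'OK' if abs(err) <= 10 else 'reject'})")
```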

  1. Protocol Implementation Generator

    DEFF Research Database (Denmark)

    Carvalho Quaresma, Jose Nuno; Probst, Christian W.

    2010-01-01

    Users expect communication systems to guarantee, amongst others, privacy and integrity of their data. These can be ensured by using well-established protocols; the best protocol, however, is useless if not all parties involved in a communication have a correct implementation of the protocol and the necessary tools. In this paper, we present the Protocol Implementation Generator (PiG), a framework that can be used to add protocol generation to protocol negotiation, or to easily share and implement new protocols throughout a network. PiG enables the sharing, verification, and translation of protocols through a Protocol Implementation Generator framework based on the LySatool and a translator from the LySa language into C or Java.

  2. A lightweight neighbor-info-based routing protocol for no-base-station taxi-call system.

    Science.gov (United States)

    Zhu, Xudong; Wang, Jinhang; Chen, Yunchao

    2014-01-01

    Because of rapid topology changes and short connection durations, VANETs suffer from unstable routing and variable wireless signal quality. This paper proposes a lightweight routing protocol, LNIB, for a taxi-call system without base stations, applicable to urban taxis. LNIB maintains and predicts neighbor information dynamically, thus finding reliable paths between source and target. This paper describes the protocol in detail and evaluates its performance by simulation under different node densities and speeds. The evaluation results show that the performance of LNIB is better than that of AODV, a classic protocol, in the taxi-call scenario.

  3. Palynofacies characterization for hydrocarbon source rock ...

    Indian Academy of Sciences (India)

    source rock potential of the Subathu Formation in the area. Petroleum geologists are well aware of the fact that dispersed organic matter, derived either from marine or non-marine sediments, on reaching its maturation level over an extended period of time contributes as source material for the production of hydrocarbons.

  4. Characterization and variability of the main oceanic sources of moisture

    Science.gov (United States)

    Castillo Rodriguez, R.; Nieto, R.; Gimeno, L.; Drumond, A.

    2012-04-01

    Transport of water vapor in the atmosphere from regions of net evaporation to regions of net precipitation is an important part of the hydrological cycle. The aim of this study is to track variations of atmospheric moisture along 10-day trajectories of air masses to identify which continental regions are affected by precipitation originating from specific oceanic regions. The procedure is based on the method developed by Stohl and James (2004, 2005), which uses the Lagrangian particle dispersion model FLEXPART v8.0 and ERA-40 reanalysis data from the European Centre for Medium-Range Weather Forecasts (ECMWF). The source regions, selected according to the largest values of the divergence of the vertically integrated moisture flux and defined using a threshold of 750 mm/yr, are: India, the North and South Pacific, the North and South Atlantic, Mexico-Caribbean, the Mediterranean, the Arabian, the Coral and the Red seas, as well as the Agulhas (in the waters surrounding South Africa) and the Zanzibar Current regions. We investigated the moisture sinks associated with each of these evaporative sources over a period of 21 years (1980-2000) on a seasonal scale using correlations and statistical means. In addition, we characterized the influence of the El Niño-Southern Oscillation on the transport of moisture from the selected source regions using a composite technique, from June to May, over the years 1984-1985, 1988-1989, 1995-1996, 1998-1999 and 1999-2000 for the La Niña phase and 1982-1983, 1986-1987, 1991-1992, 1994-1995 and 1997-1998 for the El Niño phase.
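
    In the Stohl and James (2004, 2005) method, each particle's contribution to evaporation minus precipitation is diagnosed from the change in its specific humidity along the trajectory; a minimal sketch of the bookkeeping follows (illustrative arrays standing in for FLEXPART output, not the actual implementation):

```python
import numpy as np

# Stohl & James moisture-budget bookkeeping: a particle of air mass m_k
# represents a net freshwater flux (e - p)_k = m_k * dq_k/dt, and E - P over
# a grid cell of area A is the sum over resident particles divided by A.

def e_minus_p(q_now, q_prev, mass, cell_index, n_cells, dt_s, cell_area_m2):
    """Return E - P (kg m^-2 s^-1) per grid cell from particle humidity changes."""
    ep_particle = mass * (q_now - q_prev) / dt_s        # kg/s per particle
    ep_cell = np.zeros(n_cells)
    np.add.at(ep_cell, cell_index, ep_particle)         # accumulate by residence cell
    return ep_cell / cell_area_m2

# Tiny example: 5 particles over 3 cells, 6-hourly specific humidity (kg/kg).
q_prev = np.array([8.0e-3, 7.5e-3, 6.0e-3, 9.0e-3, 5.5e-3])
q_now  = np.array([8.4e-3, 7.2e-3, 6.1e-3, 8.5e-3, 5.5e-3])
mass   = np.full(5, 2.0e12)                             # kg of air per particle (illustrative)
cells  = np.array([0, 0, 1, 2, 2])
print(e_minus_p(q_now, q_prev, mass, cells, n_cells=3, dt_s=6 * 3600, cell_area_m2=1.0e10))
```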

  5. Past and Future of the Kyoto Protocol. Final report

    International Nuclear Information System (INIS)

    Wijen, F.; Zoeteman, K.

    2004-01-01

    The present report reflects findings from a study on the realization of and prospects for the Kyoto Protocol. The purpose of the study was (1) to obtain insights into the factors that enabled the realization of the Kyoto Protocol, in particular the interactions among the major parties involved; and (2) to assess the future opportunities and threats of the Kyoto Protocol, in particular against the backdrop of an increasingly globalised world. The study was conducted from February to December 2003 by (a) reviewing the literature, especially publications on the negotiation history of the Kyoto process, the social interactions enabling the realization of the Protocol, analyses of strengths and weaknesses, and future climate regimes; (b) conducting a series of interviews with representatives from government, academia, non-governmental organisations, and business who have been - directly or indirectly - involved in the Kyoto process; and (c) internal discussions, brainstorming, and analysis of the Protocol's strengths and weaknesses, possible future scenarios (including policy options), and the management of a possible failure of the Kyoto Protocol. The present report reflects and integrates these different sources. The first section deals with the past and the present: it discusses how the Kyoto Protocol could be realized despite divergent interests, reflects on its architecture, and analyses major strengths and weaknesses. In the second section, we present possible future scenarios and explore how different combinations of domestic and international commitment produce possible realities that national governments may face when crafting climate policy. The third section provides an in-depth analysis of the possibility that the Kyoto Protocol fails, discussing its definition and policy implications. The final section is reserved for overall conclusions and policy recommendations.

  6. Characterization techniques for the high-brightness particle beams of the Advanced Photon Source (APS)

    International Nuclear Information System (INIS)

    Lumpkin, A.H.

    1993-01-01

    The Advanced Photon Source (APS) will be a third-generation synchrotron radiation (SR) user facility in the hard x-ray regime (10-100 keV). The design objectives for the 7-GeV storage ring include a positron beam natural emittance of 8 × 10^-9 m·rad at an average current of 100 mA. Proposed methods for measuring the transverse and longitudinal profiles will be described. Additionally, a research and development effort using an rf gun as a low-emittance source of electrons for injection into the 200- to 650-MeV linac subsystem is underway. This latter system is projected to produce electron beams with a normalized rms emittance of ~2π mm·mrad at peak currents of nearly 100 A. This interesting characterization problem will also be briefly discussed. The combination of both source types within one laboratory facility will stimulate the development of diagnostic techniques in these parameter spaces.

  7. Development of a method for the characterization and operation of UV-LED for water treatment.

    Science.gov (United States)

    Kheyrandish, Ataollah; Mohseni, Madjid; Taghipour, Fariborz

    2017-10-01

    Tremendous improvements in semiconductor technology have made ultraviolet light-emitting diodes (UV-LEDs) a viable alternative to conventional UV sources for water treatment. A robust and validated experimental protocol for studying the kinetics of microorganism inactivation is key to the further development of UV-LEDs for water treatment. This study proposes a protocol to operate UV-LEDs and control their output as a polychromatic radiation source. In order to systematically develop this protocol, the results of spectral power distribution, radiation profile, and radiant power measurements of a variety of UV-LEDs are presented. A wide range of UV-LEDs was selected for this study, covering various UVA, UVB, and UVC wavelengths, viewing angles from 3.5° to 135°, and a variety of output powers. The effects of operational conditions and measurement techniques were investigated on these UV-LEDs using a specially designed and fabricated setup. Operating conditions, such as the UV-LED electrical current and solder temperature, were found to significantly affect the power and peak wavelength output. The measurement techniques and equipment, including the detector size, detector distance from the UV-LED, and potential reflection from the environment, were shown to influence the results for many of the UV-LEDs. The results obtained from these studies were analyzed and applied to the development of a protocol for UV-LED characterization. This protocol is presented as a guideline that allows the operation and control of UV-LEDs in any structure, as well as accurately measuring the UV-LED output. Such information is essential for performing a reliable UV-LED assessment for the inactivation of microorganisms and for obtaining precise kinetic data. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Vertical Protocol Composition

    DEFF Research Database (Denmark)

    Groß, Thomas; Mödersheim, Sebastian Alexander

    2011-01-01

    The security of key exchange and secure channel protocols, such as TLS, has been studied intensively. However, only a few works have considered what happens when the established keys are actually used to run some protocol securely over the established "channel". We call this a vertical protocol composition. The natural requirement is composability, i.e., that the combination cannot introduce attacks that the individual protocols in isolation do not have. In this work, we prove a composability result in the symbolic model that allows for arbitrary vertical composition (including self-composition). It holds for protocols from any suite of channel and application protocols.

  9. Characterization of the Shielded Neutron Source at Triangle Universities Nuclear Laboratory

    Science.gov (United States)

    Hobson, Chad; Finch, Sean; Howell, Calvin; Malone, Ron; Tornow, Werner

    2016-09-01

    In 2015, Triangle Universities Nuclear Laboratory rebuilt its shielded neutron source (SNS) with the goal of improving neutron beam collimation and reducing neutron and gamma-ray backgrounds. Neutrons are produced via the 2H(d,n)3He reaction and then collimated by heavy shielding to form a beam. The SNS can produce both a rectangular and a circular neutron beam through the use of two collimators with different beam apertures. Our work characterized both the neutron beam profiles and the neutron and gamma-ray backgrounds at various locations around the SNS. This characterization was performed to provide researchers who use the SNS with the beam parameters necessary to plan and conduct an experiment. Vertical and horizontal beam profiles were measured at two different distances from the neutron production cell by scanning a small plastic scintillator across the face of the beam at various energies for each collimator. Background neutron and gamma-ray intensities were measured using time-of-flight techniques at 10 MeV and 16 MeV with the rectangular collimator. We present results on the position and size of the neutron beam as well as on the structure and magnitude of the backgrounds.

  10. RSRP: A Robust Secure Routing Protocol in MANET

    Directory of Open Access Journals (Sweden)

    Sinha Ditipriya

    2014-05-01

    Full Text Available In this paper, we propose a novel algorithm, RSRP, to build a robust secure routing protocol for mobile ad-hoc networks (MANETs). The algorithm is based on some basic schemes: RSA-CRT for encryption and decryption of messages, CRT for safe key generation, and Shamir's secret sharing principle for the generation of secure routes. Routes that are free from any malicious node and that belong to the set of disjoint routes between a source-destination pair are considered probable routes. Shamir's secret sharing principle is applied to those probable routes to obtain secure routes. Finally, the most trustworthy and stable route is selected among the secure routes. Selection of the final route depends on criteria of the nodes present in a route, e.g. battery power, mobility and trust value. In addition, the complexity of key generation is reduced to a large extent by using RSA-CRT instead of RSA. In turn, the routing becomes less expensive and more secure and robust. The performance of this routing protocol is then compared with non-secure routing protocols (AODV and DSR), a secure routing scheme using secret sharing, a security routing protocol using ZRP, and SEAD, with respect to the basic characteristics of these protocols. All such comparisons show that RSRP performs better in terms of computational cost, end-to-end delay and packet dropping in the presence of malicious nodes in the MANET, keeping the control-packet overhead the same as in other secure routing protocols.
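
    Of the building blocks named above, Shamir's (k, n) secret sharing is the easiest to illustrate; a minimal sketch over a prime field follows (toy parameters; this is not the paper's exact construction, which combines the shares with RSA-CRT and route metrics):

```python
import random

# Minimal (k, n) Shamir secret sharing over GF(p). Toy setup: a real system
# would use vetted parameters and authenticated channels.
P = 2**61 - 1  # a Mersenne prime

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the sharing polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(random.sample(shares, 3)))  # -> 123456789 from any 3 shares
```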

  11. About the Design of QUIC Firefox Transport Protocol

    Directory of Open Access Journals (Sweden)

    Vraj Pandya

    2017-07-01

    Full Text Available QUIC (Quick UDP Internet Connections) Chrome is an experimental transport-layer network protocol designed by Jim Roskind at Google, initially implemented in 2012 and announced publicly in 2013. One of QUIC's goals is to improve the performance of connection-oriented web applications that currently use the Transmission Control Protocol (TCP). To do that, QUIC achieves reduced latency and better stream-multiplexing support to avoid network congestion. In 2015, Mozilla Firefox started to work on an equivalent QUIC transport protocol for their browser. This idea was motivated by the differences between Chrome and Firefox. Although Mozilla Firefox and Google Chrome are both web browser engines, there are some significant differences between them, such as file hierarchy, open source policies (Firefox is completely open source, while Chrome is only partially so), tab design, continuous integration, and more. Like QUIC Chrome, QUIC Firefox is a new multiplexed and secure transport based on the User Datagram Protocol (UDP), designed from the ground up and optimized for Hyper-Text Transfer Protocol 2 (HTTP/2) semantics. While built with HTTP/2 as the primary application protocol, QUIC builds on decades of transport and security experience and implements mechanisms that make it attractive as a modern general-purpose transport. In addition to describing the main design of QUIC Firefox, this paper compares Firefox with QUIC Firefox. Our preliminary experimental results indicate that QUIC Firefox has faster execution times, lower latency, and better throughput than traditional Firefox.

  12. A Novel Re-keying Function Protocol (NRFP) For Wireless Sensor Network Security

    Directory of Open Access Journals (Sweden)

    Naif Alsharabi

    2008-12-01

    Full Text Available This paper describes a novel re-keying function protocol (NRFP) for wireless sensor network security. A re-keying process management system for sensor networks is designed to support in-network processing. The design of the protocol is motivated by decentralized key management for wireless sensor networks (WSNs), covering key deployment, key refreshment, and key establishment. NRFP supports the establishment of novel administrative functions for sensor nodes that derive/re-derive a session key for each communication session. The protocol proposes direct, indirect and hybrid connections. NRFP also includes an efficient protocol for local broadcast authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing. Security and performance analysis shows that NRFP is very efficient in computation, communication and storage, and that it is also effective in defending against many sophisticated attacks.
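
    The one-way key chain used for local broadcast authentication is generated backwards and disclosed forwards, so a receiver can verify any disclosed key against an earlier, already-trusted one; a minimal µTESLA-style sketch follows (illustrative, not NRFP's exact construction):

```python
import hashlib

# One-way key chain for broadcast authentication (µTESLA-style sketch).
# Keys are generated backwards from a random seed: K_i = H(K_{i+1}).
# The anchor K_0 is pre-distributed; keys are then disclosed in forward order.

H = lambda k: hashlib.sha256(k).digest()

def make_chain(seed: bytes, n: int):
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()                  # chain[0] = anchor K_0, chain[n] = seed
    return chain

def verify(disclosed: bytes, anchor: bytes, max_steps: int) -> bool:
    """Accept `disclosed` if hashing it at most max_steps times reaches the anchor."""
    k = disclosed
    for _ in range(max_steps):
        k = H(k)
        if k == anchor:
            return True
    return False

chain = make_chain(b"\x00" * 32, 100)                        # illustrative seed
print(verify(chain[3], anchor=chain[0], max_steps=10))       # True: genuine key K_3
print(verify(b"\x01" * 32, anchor=chain[0], max_steps=10))   # False: forged key
```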

  13. A Novel Re-keying Function Protocol (NRFP) For Wireless Sensor Network Security

    Science.gov (United States)

    Abdullah, Maan Younis; Hua, Gui Wei; Alsharabi, Naif

    2008-01-01

    This paper describes a novel re-keying function protocol (NRFP) for wireless sensor network security. A re-keying process management system for sensor networks is designed to support in-network processing. The design of the protocol is motivated by decentralized key management for wireless sensor networks (WSNs), covering key deployment, key refreshment, and key establishment. NRFP supports the establishment of novel administrative functions for sensor nodes that derive/re-derive a session key for each communication session. The protocol proposes direct, indirect and hybrid connections. NRFP also includes an efficient protocol for local broadcast authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing. Security and performance analysis shows that NRFP is very efficient in computation, communication and storage, and that it is also effective in defending against many sophisticated attacks. PMID:27873963

  14. A Novel Re-keying Function Protocol (NRFP) For Wireless Sensor Network Security.

    Science.gov (United States)

    Abdullah, Maan Younis; Hua, Gui Wei; Alsharabi, Naif

    2008-12-04

    This paper describes a novel re-keying function protocol (NRFP) for wireless sensor network security. A re-keying process management system for sensor networks is designed to support in-network processing. The design of the protocol is motivated by decentralized key management for wireless sensor networks (WSNs), covering key deployment, key refreshment, and key establishment. NRFP supports the establishment of novel administrative functions for sensor nodes that derive/re-derive a session key for each communication session. The protocol proposes direct, indirect and hybrid connections. NRFP also includes an efficient protocol for local broadcast authentication based on the use of one-way key chains. A salient feature of the authentication protocol is that it supports source authentication without precluding in-network processing. Security and performance analysis shows that NRFP is very efficient in computation, communication and storage, and that it is also effective in defending against many sophisticated attacks.

  15. Ocean Optics Protocols for Satellite Ocean Color Sensor Validation. Revised

    Science.gov (United States)

    Fargion, Giulietta S.; Mueller, James L.

    2000-01-01

    The document stipulates protocols for measuring bio-optical and radiometric data for the Sensor Intercomparison and Merger for Biological and Interdisciplinary Oceanic Studies (SIMBIOS) Project activities and algorithm development. This document supersedes the earlier version (Mueller and Austin 1995) published as Volume 25 in the SeaWiFS Technical Report Series, and marks a significant departure from, and improvement on, the format and content of Mueller and Austin (1995). The authorship of the protocols has been greatly broadened to include experts specializing in some key areas. New chapters have been added to provide detailed and comprehensive protocols for stability monitoring of radiometers using portable sources, above-water measurements of remote-sensing reflectance, spectral absorption measurements for discrete water samples, HPLC pigment analysis and fluorometric pigment analysis. Protocols were included in Mueller and Austin (1995) for each of these areas, but the new treatment makes significant advances in each topic area. There are also new chapters prescribing protocols for the calibration of sun photometers and sky radiance sensors, sun photometer and sky radiance measurements and analysis, and data archival. These topic areas were barely mentioned in Mueller and Austin (1995).

  16. Performances of different protocols for exocellular polysaccharides extraction from milk acid gels: Application to yogurt.

    Science.gov (United States)

    Nguyen, An Thi-Binh; Nigen, Michaël; Jimenez, Luciana; Ait-Abderrahim, Hassina; Marchesseau, Sylvie; Picart-Palmade, Laetitia

    2018-01-15

    Dextran or xanthan were used as model exocellular polysaccharides (EPS) to compare the efficiency of three different protocols for extracting EPS from skim milk acid gels. Extraction yields, residual protein concentrations and the macromolecular properties of the extracted EPS were determined. For both model EPS, the highest extraction yield (∼80%) was obtained when samples were heated under acidic conditions in the first step of extraction (Protocol 1). Protocols containing acid/ethanol precipitation steps without heating (Protocols 2 and 3) showed lower extraction yields (∼55%) but allowed better preservation of the EPS macromolecular properties. Adjusting the pH of the acid gels to 7 before extraction (Protocol 3) improved the extraction yield of anionic EPS without affecting the macromolecular properties of the EPS. Protocol 1 was therefore applied for the quantification of EPS produced during yogurt fermentation, while Protocol 3 was dedicated to their macromolecular characterization. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. PROTOCOLS FOR INCREASING THE LIFETIME OF NODES OF AD HOC WIRELESS NETWORKS

    Directory of Open Access Journals (Sweden)

    B.Malarkodi

    2010-03-01

    Full Text Available Power consumption of nodes in ad hoc networks is a critical issue, as they predominantly operate on batteries. In order to improve the lifetime of an ad hoc network, all the nodes must be utilized evenly and the power required for connections must be minimized. Energy management deals with the process of managing energy resources by controlling battery discharge, adjusting transmission power and scheduling power sources, so as to increase the lifetime of the nodes of an ad hoc wireless network. In this paper, two protocols are proposed to improve the lifetime of the nodes. The first protocol assumes smart battery packages with L cells and uses dynamic programming (DP) to optimally select the set of cells used to satisfy a request for power. The second proposes a MAC-layer protocol, denoted Power Aware Medium Access Control (PAMAC), which enables the network layer to select the route with the minimum total power requirement among the possible routes between a source and a destination, provided all nodes in the route have battery capacity above a threshold. The lifetime of the nodes using the DP-based scheduling policy is found through simulation and compared with that obtained using the techniques reported in the literature. It is found that the DP-based policy increases the lifetime of the mobile nodes by a factor of 1.15 to 1.8. The life expectancy, average power consumption and throughput of the network using the PAMAC protocol are computed through simulation and compared with those of the other MAC-layer protocols 802.11, MACA, and CSMA. Besides this, the life expectancy and average power consumption of the network for different values of the threshold are also compared. From the simulation results, it is observed that PAMAC consumes the least power and provides the longest lifetime among the various MAC-layer protocols. Moreover, using PAMAC as the MAC-layer protocol, the performance obtained using different routing layer

  18. IDENTIFICATION AND CHARACTERIZATION OF FIVE NON-TRADITIONAL SOURCE CATEGORIES: CATASTROPHIC/ACCIDENTAL RELEASES, VEHICLE REPAIR FACILITIES, RECYCLING, PESTICIDE APPLICATION, AND AGRICULTURAL OPERATIONS

    Science.gov (United States)

    The report gives results of work that is part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies and to deve...

  19. Free-Space Quantum Key Distribution with a High Generation Rate Potassium Titanyl Phosphate Waveguide Photon-Pair Source

    Science.gov (United States)

    Wilson, Jeffrey D.; Chaffee, Dalton W.; Wilson, Nathaniel C.; Lekki, John D.; Tokars, Roger P.; Pouch, John J.; Roberts, Tony D.; Battle, Philip; Floyd, Bertram M.; Lind, Alexander J.; et al.

    2016-01-01

    A high generation rate photon-pair source using a dual-element periodically-poled potassium titanyl phosphate (PPKTP) waveguide is described. The fully integrated photon-pair source consists of a 1064-nanometer pump diode laser, fiber-coupled to a dual-element waveguide within which a pair of 1064-nanometer photons are up-converted to a single 532-nanometer photon in the first stage. In the second stage, the 532-nanometer photon is down-converted to an entangled photon pair at 800 nanometers and 1600 nanometers, which is fiber-coupled at the waveguide output. The photon-pair source features a high pair generation rate, a compact power-efficient package, and continuous wave (CW) or pulsed operation. This is a significant step towards the long-term goal of developing sources for high-rate Quantum Key Distribution (QKD) to enable secure Earth-space communications. Characterization and test results are presented. Details and preliminary results of a laboratory free-space QKD experiment with the B92 protocol are also presented.
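
    The B92 protocol mentioned at the end encodes bits in two non-orthogonal states, and the receiver keeps only measurement outcomes that are incompatible with one of the two; the idealized sifting logic can be sketched numerically (a noiseless, lossless single-photon toy model, not the experiment's implementation):

```python
import random

# Idealized B92 sifting: Alice encodes bit 0 -> |0>, bit 1 -> |+>.
# Bob measures in Z or X at random; an outcome is conclusive only when it is
# orthogonal to one of the two signal states. No noise, loss, or eavesdropper.

def bob_outcome(alice_bit, basis):
    """Bob's measurement result (0 or 1) with Born-rule probabilities."""
    if basis == "Z":
        p_one = 0.0 if alice_bit == 0 else 0.5   # |0> never gives 1; |+> gives 1 half the time
    else:                                        # X basis: report 1 for the |-> outcome
        p_one = 0.0 if alice_bit == 1 else 0.5   # |+> never gives |->; |0> does half the time
    return 1 if random.random() < p_one else 0

sifted = []
for _ in range(100_000):
    bit = random.getrandbits(1)
    basis = random.choice("ZX")
    if bob_outcome(bit, basis) == 1:             # keep conclusive outcomes only
        sifted.append((bit, 1 if basis == "Z" else 0))

errors = sum(a != b for a, b in sifted)
print(f"sifted fraction: {len(sifted) / 100_000:.3f}, errors: {errors}")  # ~0.250, 0
```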

  20. Characterization of the main error sources of chromatic confocal probes for dimensional measurement

    International Nuclear Information System (INIS)

    Nouira, H; El-Hayek, N; Yuan, X; Anwer, N

    2014-01-01

    Chromatic confocal probes are increasingly used in high-precision dimensional metrology applications such as roughness, form, thickness and surface profile measurements; however, their measurement behaviour is not well understood and must be characterized at the nanometre level. This paper presents a calibration bench for the characterization of two chromatic confocal probes with travel ranges of 20 and 350 µm. The metrology loop that includes the chromatic confocal probe is stable and enables measurement repeatability at the nanometre level. With the proposed system, the major error sources are identified: the relative axial and radial motions of the probe with respect to the sample; the material, colour and roughness of the measured sample; the relative deviation/tilt of the probe; and the scanning speed. Experimental test results show that the chromatic confocal probes are sensitive to these errors and that their measurement behaviour is highly dependent on them. (paper)

  1. Open Source Vulnerability Database Project

    Directory of Open Access Journals (Sweden)

    Jake Kouns

    2008-06-01

    Full Text Available This article introduces the Open Source Vulnerability Database (OSVDB project which manages a global collection of computer security vulnerabilities, available for free use by the information security community. This collection contains information on known security weaknesses in operating systems, software products, protocols, hardware devices, and other infrastructure elements of information technology. The OSVDB project is intended to be the centralized global open source vulnerability collection on the Internet.

  2. An efficient multi-carrier position-based packet forwarding protocol for wireless sensor networks

    KAUST Repository

    Bader, Ahmed

    2012-01-01

    Beaconless position-based forwarding protocols have recently evolved as a promising solution for packet forwarding in wireless sensor networks. However, as node density grows, the overhead incurred in the process of relay selection grows significantly. As such, end-to-end performance in terms of energy and latency is adversely impacted. With the motivation of developing a packet forwarding mechanism that is tolerant to variations in node density, an alternative position-based protocol is proposed in this paper. In contrast to existing beaconless protocols, the proposed protocol is designed to eliminate the need for potential relays to undergo a relay selection process. Rather, any eligible relay may decide to forward the packet ahead, thus significantly reducing the underlying overhead. The operation of the proposed protocol is empowered by exploiting favorable features of orthogonal frequency division multiplexing (OFDM) at the physical layer. The end-to-end performance of the proposed protocol is evaluated against existing beaconless position-based protocols both analytically and by means of simulations. The proposed protocol is demonstrated to be more efficient; in particular, it is shown that for the same amount of energy the proposed protocol transports one bit from source to destination much more quickly. © 2012 IEEE.

  3. Comparison of low- and ultralow-dose computed tomography protocols for quantitative lung and airway assessment.

    Science.gov (United States)

    Hammond, Emily; Sloan, Chelsea; Newell, John D; Sieren, Jered P; Saylor, Melissa; Vidal, Craig; Hogue, Shayna; De Stefano, Frank; Sieren, Alexa; Hoffman, Eric A; Sieren, Jessica C

    2017-09-01

    Quantitative computed tomography (CT) measures are increasingly being developed and used to characterize lung disease. With recent advances in CT technologies, we sought to evaluate the quantitative accuracy of lung imaging at low and ultralow radiation doses with the use of iterative reconstruction (IR), tube current modulation (TCM), and spectral shaping. We investigated the effect of five independent CT protocols, reconstructed with IR, on quantitative airway measures and global lung measures, using an in vivo large animal model as a human subject surrogate. A control protocol (NIH-SPIROMICS + TCM) was chosen, along with five independent protocols investigating TCM, low and ultralow radiation dose, and spectral shaping. For all scans, quantitative global parenchymal measurements (mean, median and standard deviation of parenchymal HU, along with measures of emphysema) and global airway measurements (number of segmented airways and pi10) were generated. In addition, selected individual airway measurements (minor and major inner diameter, wall thickness, inner and outer area, inner and outer perimeter, wall area fraction, and inner equivalent circle diameter) were evaluated. Comparisons were made between control and target protocols using difference and repeatability measures. The estimated CT volume dose index (CTDIvol) across all protocols ranged from 7.32 mGy to 0.32 mGy. The low- and ultralow-dose protocols required more manual editing and resolved fewer airway branches; yet, comparable pi10 whole-lung measures were observed across all protocols. Similar trends in acquired parenchymal and airway measurements were observed across all protocols, with larger measurement differences for the ultralow-dose protocols. However, for small airways (1.9 ± 0.2 mm) and medium airways (5.7 ± 0.4 mm), the measurement differences across all protocols were comparable to the control protocol's repeatability across breath holds. Diameters, wall thickness, wall area fraction

  4. Comparing the use of the Childhood Autism Rating Scale and the Autism Behavior Checklist protocols to identify and characterize autistic individuals.

    Science.gov (United States)

    Santos, Thaís Helena Ferreira; Barbosa, Milene Rossi Pereira; Pimentel, Ana Gabriela Lopes; Lacerda, Camila Andrioli; Balestro, Juliana Izidro; Amato, Cibelle Albuquerque de la Higuera; Fernandes, Fernanda Dreux Miranda

    2012-01-01

    To compare the results obtained with the Autism Behavior Checklist with those obtained with the Childhood Autism Rating Scale in identifying and characterizing children with Autism Spectrum Disorders. Participants were 28 children with a psychiatric diagnosis within the autism spectrum who were enrolled in language therapy in a specialized service. The children were assessed according to the Autism Behavior Checklist and Childhood Autism Rating Scale criteria, based on information obtained from parents and therapists, respectively. Data were statistically analyzed for agreement between responses. Results indicating high or moderate probability of autism on the Autism Behavior Checklist were considered concordant with results indicating mild-to-moderate or severe autism on the Childhood Autism Rating Scale. Results indicating low probability of autism on the Autism Behavior Checklist and no autism on the Childhood Autism Rating Scale were also considered concordant. There was agreement on most of the responses. Cases of disagreement between the results of the two protocols corroborate literature data showing that the instruments may not be sufficient, if applied alone, to define the diagnosis. The Childhood Autism Rating Scale may fail to diagnose autistic children, while the Autism Behavior Checklist may lead to over-diagnosis, placing children with other disorders within the autism spectrum. Therefore, the combined use of both protocols is recommended.

  5. Characterization of Niobium Oxide Films Deposited by High Target Utilization Sputter Sources

    International Nuclear Information System (INIS)

    Chow, R; Ellis, A D; Loomis, G E; Rana, S I

    2007-01-01

    High-quality, refractory metal oxide coatings are required in a variety of applications such as laser optics, micro-electronic insulating layers, nano-device structures, electro-optic multilayers, sensors and corrosion barriers. A common oxide deposition technique is reactive sputtering, because the kinetic mechanism vaporizes almost any solid material in vacuum. Also, sputtered molecules have higher energies than those generated by thermal evaporation, so the condensates are smoother and denser than thermally evaporated films. In a typical sputtering system, target erosion is a factor that drives machine availability. In some situations, such as nano-layered capacitors where the device's performance characteristics depend on thick layers, target life becomes a limiting factor on maximizing device functionality. The keen interest in increasing target utilization in sputtering has been addressed in a variety of ways, such as target geometry, rotating magnets, and/or shaped magnet arrays. Also, a recent sputtering system has been developed that generates a high-density plasma, directs the plasma beam towards the target in a uniform fashion, and erodes the target uniformly. The purpose of this paper is to characterize and compare niobia films deposited by two types of high target utilization sputtering sources: a rotating magnetron and a high-density plasma source. The oxide of interest in this study is niobia because of its high refractive index. The quality of the niobia films was characterized spectroscopically in optical transmission, ellipsometrically, and for chemical stoichiometry by X-ray photoelectron spectroscopy. The refractive index, extinction coefficients and Cauchy constants were derived from the ellipsometric modeling. The mechanical properties of coating density and stress were also determined.

  6. Characterization and source identification of fine particulate matter in urban Beijing during the 2015 Spring Festival.

    Science.gov (United States)

    Ji, Dongsheng; Cui, Yang; Li, Liang; He, Jun; Wang, Lili; Zhang, Hongliang; Wang, Wan; Zhou, Luxi; Maenhaut, Willy; Wen, Tianxue; Wang, Yuesi

    2018-07-01

    The Spring Festival (SF) is the most important holiday in China for family reunion and tourism. During the 2015 SF, an intensive air quality observation campaign was conducted to study the impact of anthropogenic activities and the dynamic characteristics of the sources. During the study period, pollution episodes occurred frequently, with 12 days exceeding the Chinese Ambient Air Quality Standard for 24-h average PM2.5 (75 μg/m3) and even 8 days exceeding 150 μg/m3. The daily maximum PM2.5 concentration reached 350 μg/m3, while the hourly minimum visibility was <0.8 km. Three pollution episodes were selected for detailed analysis, including chemical characterization and diurnal variation of the PM2.5 and its chemical composition, and sources were identified using the Positive Matrix Factorization model. The first episode, occurring before the SF, was characterized by greater formation of SO4^2- and NO3^- and high crustal enrichment factors for Ag, As, Cd, Cu, Hg, Pb, Se and Zn; seven categories of pollution sources were identified, with vehicle emissions contributing 38% of the PM2.5. The second episode, occurring during the SF, was affected heavily by large-scale firework emissions, which led to a significant increase in SO4^2-, Cl^-, OC, K and Ba; these emissions were the largest contributor to the PM2.5, accounting for 36%. During the third episode, occurring after the SF, SO4^2-, NO3^-, NH4^+ and OC were the major constituents of the PM2.5, and the secondary source was dominant with a contribution of 46%. The results provide a detailed understanding of the variation in occurrence, chemical composition and sources of the PM2.5, as well as of the gaseous pollutants, affected by the change in anthropogenic activities in Beijing throughout the SF. They highlight the need for limiting firework emissions during China's most important traditional festival. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Identification of a research protocol to study orthodontic tooth movement

    Directory of Open Access Journals (Sweden)

    Annalisa Dichicco

    2014-06-01

    Aim: Orthodontic movement is associated with a process of tissue remodeling together with the release of several chemical mediators in periodontal tissues. Each mediator is a potential marker of tooth movement and expresses biological processes such as tissue inflammation and bone remodeling. Different amounts of each mediator are present in the various tissues and fluids of the oral cavity, so several sampling methods with differing degrees of invasiveness are available. Chemical mediators are also substances of different molecular natures, and multiple analysis methods allow their detection. The purpose of this study was to draft the best research protocol for an optimal study of orthodontic movement efficiency. Methods: An analysis of the international literature was made to identify the gold standard for each aspect of the protocol: type of mediator, source and method of sampling, and analysis method. Results: From the analysis of the international literature, an original research protocol was created for the study and assessment of orthodontic movement using biomarkers of tooth movement. Conclusions: The protocol created is based on the choice of the gold standard for every aspect already analyzed in the literature and in existing protocols for monitoring orthodontic tooth movement through its markers. Clinical trials are required for the evaluation and validation of the protocol.

  8. The Development and Application of Spatiotemporal Metrics for the Characterization of Point Source FFCO2 Emissions and Dispersion

    Science.gov (United States)

    Roten, D.; Hogue, S.; Spell, P.; Marland, E.; Marland, G.

    2017-12-01

    There is an increasing role for high-resolution CO2 emissions inventories across multiple arenas. The breadth of the applicability of high-resolution data is apparent from their use in atmospheric CO2 modeling, their potential for validation of space-based atmospheric CO2 remote sensing, and the development of climate change policy. This work focuses on increasing our understanding of the uncertainty in these inventories and the implications for their downstream use. The industrial point sources of emissions (power generating stations, cement manufacturing plants, paper mills, etc.) used in the creation of these inventories often have complex emissions characteristics beyond their geographic location. Physical parameters of the emission sources, such as the number of exhaust stacks, stack heights, stack diameters, exhaust temperatures, and exhaust velocities, as well as temporal variability and climatic influences, can be important in characterizing emissions. Emissions from large point sources can behave much differently than emissions from areal sources such as automobiles, so for many applications geographic location alone is not an adequate characterization of emissions. This work demonstrates the sensitivities of atmospheric models to the physical parameters of large point sources and provides a methodology for quantifying parameter impacts at multiple locations across the United States. The sensitivities highlight the importance of location and timing, and help identify aspects that can guide efforts to reduce uncertainty in emissions inventories and increase the utility of the models.

  9. Biomass to energy : GHG reduction quantification protocols and case study

    Energy Technology Data Exchange (ETDEWEB)

    Reusing, G.; Taylor, C. [Conestoga - Rovers and Associates, Waterloo, ON (Canada); Nolan, W. [Liberty Energy, Hamilton, ON (Canada); Kerr, G. [Index Energy, Ajax, ON (Canada)

    2009-07-01

    With the growing concerns over greenhouse gases and their contribution to climate change, it is necessary to find ways of reducing environmental impacts by diversifying energy sources to include non-fossil fuel energy sources. Among the fastest growing green energy sources is energy from waste facilities that use biomass that would otherwise be landfilled or stockpiled. Greenhouse gas reductions achieved through the use of biomass to energy systems can be calculated using various protocols and methodologies. This paper described each of these methodologies and presented a case study comparing some of these quantification methodologies. A summary and comparison of biomass to energy greenhouse gas reduction protocols in use or under development by the United Nations, the European Union, the Province of Alberta and Environment Canada was presented. It was concluded that regulatory and environmental pressures and public policy will continue to impact the practices associated with biomass processing or landfill operations, such as composting, or in the case of landfills, gas collection systems, thus reducing the amount of potential credit available for biomass to energy facility offset projects. 10 refs., 2 tabs., 6 figs.

  10. Biomass to energy : GHG reduction quantification protocols and case study

    International Nuclear Information System (INIS)

    Reusing, G.; Taylor, C.; Nolan, W.; Kerr, G.

    2009-01-01

    With the growing concerns over greenhouse gases and their contribution to climate change, it is necessary to find ways of reducing environmental impacts by diversifying energy sources to include non-fossil fuel energy sources. Among the fastest growing green energy sources is energy from waste facilities that use biomass that would otherwise be landfilled or stockpiled. Greenhouse gas reductions achieved through the use of biomass to energy systems can be calculated using various protocols and methodologies. This paper described each of these methodologies and presented a case study comparing some of these quantification methodologies. A summary and comparison of biomass to energy greenhouse gas reduction protocols in use or under development by the United Nations, the European Union, the Province of Alberta and Environment Canada was presented. It was concluded that regulatory and environmental pressures and public policy will continue to impact the practices associated with biomass processing or landfill operations, such as composting, or in the case of landfills, gas collection systems, thus reducing the amount of potential credit available for biomass to energy facility offset projects. 10 refs., 2 tabs., 6 figs

  11. Improved assessment of mediastinal and pulmonary pathologies in combined staging CT examinations using a fast-speed acquisition dual-source CT protocol

    Energy Technology Data Exchange (ETDEWEB)

    Braun, Franziska M.; Holzner, Veronica; Meinel, Felix G.; Armbruster, Marco; Brandlhuber, Martina; Ertl-Wagner, Birgit; Sommer, Wieland H. [University Hospital Munich, Institute for Clinical Radiology, Munich (Germany)

    2017-12-15

    To demonstrate the feasibility of fast dual-source CT (DSCT) and to evaluate its clinical utility in chest/abdomen/pelvis staging CT studies, 45 cancer patients with two follow-up combined chest/abdomen/pelvis staging CT examinations (at most ±10 kV difference in tube potential) were included. The first scan had to be performed with our standard protocol (fixed pitch 0.6), the second with a novel fast-speed DSCT protocol (fixed pitch 1.55). Effective doses (ED) were calculated and noise measurements performed. Scan times were compared, and motion artefacts and diagnostic confidence were rated in a consensus reading. ED for the standard and fast-speed scans was 9.1 (7.0-11.1) mSv and 9.2 (7.4-12.8) mSv, respectively (P = 0.075). Image noise was comparable (abdomen; all P > 0.05) or reduced for the fast-speed CTs (trachea, P = 0.001; ascending aorta, P < 0.001). Motion artefacts of the heart/ascending aorta (all P < 0.001) and breathing artefacts (P < 0.031) were reduced with fast DSCT. The diagnostic confidence for the evaluation of mediastinal (P < 0.001) and pulmonary (P = 0.008) pathologies was improved with fast DSCT. Fast DSCT for chest/abdomen/pelvis staging CT examinations is completed within a 2-second scan time and eliminates relevant intrathoracic motion/breathing artefacts. Mediastinal/pulmonary pathologies can thus be assessed with high diagnostic confidence. Abdominal image quality remains excellent. (orig.)

  12. Formal description of the jumpstart just-in-time signaling protocol using EFSM

    Science.gov (United States)

    Zaim, A. H.; Baldine, Ilia; Cassada, Mark; Rouskas, George N.; Perros, Harry G.; Stevenson, Daniel S.

    2002-07-01

    We present a formal protocol description for a Just-In-Time (JIT) signaling scheme running over a core dWDM network that utilizes Optical Burst Switches (OBS). We apply an eight-tuple extended finite state machine (EFSM) model to formally specify the protocol. Using the EFSM model, we define the communication between a source client node and a destination client node through an ingress and one or multiple intermediate switches. We consider single-burst connections, meaning that the connection is set up just before sending a single burst and closed as soon as the burst is sent. The communication between the EFSMs is handled through message transfer between protocol entities.
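    As an illustration of the state-machine view taken above, the sketch below implements a toy finite-state machine for a single-burst, just-in-time connection: set up immediately before the burst, released as soon as the burst is sent. The states, events, and actions are invented for illustration and are not the paper's eight-tuple EFSM specification:

```python
# Minimal sketch: a toy state machine for a just-in-time, single-burst
# connection (set up right before the burst, released right after).
# States, events, and actions are invented for illustration.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    SETUP_SENT = auto()
    CONNECTED = auto()
    CLOSED = auto()

# (state, event) -> (next state, action to emit)
TRANSITIONS = {
    (State.IDLE, "send_burst"):         (State.SETUP_SENT, "emit SETUP"),
    (State.SETUP_SENT, "connect_ok"):   (State.CONNECTED, "emit BURST"),
    (State.SETUP_SENT, "connect_fail"): (State.IDLE, "report failure"),
    (State.CONNECTED, "burst_done"):    (State.CLOSED, "emit RELEASE"),
    (State.CLOSED, "released"):         (State.IDLE, None),
}

class JITClient:
    def __init__(self):
        self.state = State.IDLE

    def handle(self, event: str) -> None:
        try:
            self.state, action = TRANSITIONS[(self.state, event)]
        except KeyError:
            raise ValueError(f"event {event!r} invalid in state {self.state}")
        if action:
            print(action)

client = JITClient()
for event in ("send_burst", "connect_ok", "burst_done", "released"):
    client.handle(event)
```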

  13. Literature overview highlights lack of paediatric donation protocols but identifies common themes that could guide their development.

    Science.gov (United States)

    Vileito, A; Siebelink, M J; Verhagen, Aae

    2018-05-01

    Paediatric donation is a unique and extremely sensitive process that requires specific knowledge and competencies. Most countries use protocols for organ and tissue donation to ensure optimal care for the donor and family, but these mainly focus on adults. However, the donation process for children differs from adults in many ways. An overview of the literature was performed to identify protocols for the paediatric population. PubMed, Web of Science, EMBASE and the Internet were searched up to March 2016 for papers or other sources in English related to specific organ and tissue donation protocols for children and neonates. This comprised title, abstract and then full-text screening of relevant data. We included 12 papers and two electronic sources that were mainly from North America and Europe. Most discussed donations after cardiac death. The recurring themes included identifying potential donors, approaching parents, palliative care and collaboration with organ procurement organisations. Most papers called for paediatric donation policies to be standardised. Scientific publications in English on paediatric donation protocols are very scarce. No comprehensive paediatric donation protocol was found. We identified several recurring themes in the literature that could be used to develop such protocols. ©2018 The Authors. Acta Paediatrica published by John Wiley & Sons Ltd on behalf of Foundation Acta Paediatrica.

  14. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    International Nuclear Information System (INIS)

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources, and to recommend, based on expert consensus, specific measurement techniques and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs.

  15. On the ARQ protocols over the Z-interference channels: Diversity-multiplexing-delay tradeoff

    KAUST Repository

    Nafea, Mohamed S.; Hamza, D.; Seddik, Karim G.; Nafie, Mohamed; Gamal, Hesham El

    2012-01-01

    We characterize the achievable three-dimensional tradeoff between diversity, multiplexing, and delay of the single-antenna Automatic Repeat reQuest (ARQ) Z-interference channel. Non-cooperative and cooperative ARQ protocols are adopted under

  16. Multi-protocol header generation system

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, David A.; Ignatowski, Michael; Jayasena, Nuwan; Loh, Gabriel

    2017-09-05

    A communication device includes a data source that generates data for transmission over a bus, and a data encoder that receives and encodes the outgoing data. An encoder system receives outgoing data from a data source and stores it in a first queue. An encoder encodes the outgoing data with a header type based upon a header type indication from a controller, and stores the encoded data, which may be a packet or a data word with at least one layered header, in a second queue for transmission. The device is configured to receive, at a payload extractor, a packet protocol change command from the controller, to remove the encoded data and re-encode it into a re-encoded data packet, and to place the re-encoded data packet in the second queue for transmission.
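    A minimal sketch of the queued encode path described in this record: payloads wait in a first queue, an encoder prepends a header chosen by a controller indication, and the framed result is staged in a second queue; a protocol change command strips and re-applies headers on already-encoded data. The header layouts and method names are invented for illustration:

```python
# Minimal sketch of the queued encode path: raw payloads enter an ingress
# queue, get framed with a controller-selected header, and wait in an
# egress queue; a protocol change re-frames already-encoded packets.
# Header byte layouts are invented for illustration.
from collections import deque

HEADERS = {
    "proto_a": b"\xaa\x01",   # hypothetical 2-byte header, protocol A
    "proto_b": b"\xbb\x02",   # hypothetical 2-byte header, protocol B
}

class HeaderEncoder:
    def __init__(self, header_type: str):
        self.header_type = header_type
        self.ingress = deque()    # first queue: raw outgoing data
        self.egress = deque()     # second queue: encoded, ready to send

    def submit(self, payload: bytes) -> None:
        self.ingress.append(payload)

    def encode_pending(self) -> None:
        while self.ingress:
            self.egress.append(HEADERS[self.header_type] + self.ingress.popleft())

    def change_protocol(self, header_type: str) -> None:
        # "Payload extractor": strip the old header, re-encode under the new one
        old_len = len(HEADERS[self.header_type])
        self.header_type = header_type
        self.egress = deque(HEADERS[header_type] + pkt[old_len:]
                            for pkt in self.egress)

enc = HeaderEncoder("proto_a")
enc.submit(b"hello")
enc.encode_pending()
enc.change_protocol("proto_b")    # packet protocol change command
print(enc.egress.popleft())       # b'\xbb\x02hello'
```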

  17. Protocol adaptations to conduct Systematic Literature Reviews in Software Engineering: a chronological study

    Directory of Open Access Journals (Sweden)

    Samuel Sepúlveda

    2015-09-01

    Systematic literature reviews (SLR) have reached a considerable level of adoption in Software Engineering (SE); however, protocol adaptations for their implementation remain only tangentially addressed. This work provides a chronological framework for the use and adaptation of the SLR protocol, including its current status. A systematic literature search was performed, with twelve articles selected in accordance with the inclusion and exclusion criteria, published between 2004 and 2013 in digital data sources recognized by the SE community. A chronological framework is provided that includes the current state of protocol adaptations for conducting SLR in SE. The results indicate areas where the quantity and quality of investigations need to be increased, and identify the main proposals providing adaptations of the protocol for conducting SLR in SE.

  18. Characterization of source rocks and groundwater radioactivity at the Chihuahua valley

    Energy Technology Data Exchange (ETDEWEB)

    Renteria V, M.; Montero C, M.E.; Reyes C, M.; Herrera P, E.F.; Valenzuela H, M. [Centro de lnvestigacion en Materiales Avanzados, Miguel de Cervantes 120, 31109 Chihuahua, (Mexico); Rodriguez P, A. [World Wildlife Fund (WWF), Chihuahuan Desert Program, Coronado 1005, 31000 Chihuahua (Mexico); Manjon C, G.; Garcia T, R. [Universidad de Sevilla, Departamento de Fisica Aplicada 11, ETS Arquitectura, Av. Reina Mercedes 2, 41012 Sevilla, (Spain); Crespo, T. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas (CIEMAT), Av. Complutense 22, 28040 Madrid, (Spain)]. e-mail: elena.montero@cimav.edu.mx

    2007-07-01

    As part of a scientific research project on alpha radioactivity in groundwater for human consumption in Chihuahua City, a characterization of the rock sources of radioactivity around the Chihuahua valley was carried out. The radioactivity of groundwater and sediments was also determined. The radioactivity of uranium- and thorium-series isotopes contained in rocks was obtained by high resolution gamma-ray spectroscopy. Some representative values are 50 Bq/kg for the mean value of Bi-214 activity, and 121.5 Bq/kg for the highest value west of the city. The activity of sediments, extracted during well drilling, was determined using a NaI(Tl) detector. A previously unreported uranium ore body was located in the San Marcos range formation. Its outcrops lie inside the Chihuahua-Sacramento valley basin, and its activity characterization was performed. Unusually high specific uranium activities, determined by alpha spectrometry, were obtained in water, plants, sediments and fish collected at locations close to outcrops of the uranium minerals. The activity of the water of the San Marcos dam reached 7.7 Bq/L. The activity of fish trapped at the San Marcos dam is 0.99 Bq/kg. Conclusions about the contamination of groundwater north of Chihuahua City were obtained. (Author)

  19. Characterization of source rocks and groundwater radioactivity at the Chihuahua valley

    International Nuclear Information System (INIS)

    Renteria V, M.; Montero C, M.E.; Reyes C, M.; Herrera P, E.F.; Valenzuela H, M.; Rodriguez P, A.; Manjon C, G.; Garcia T, R.; Crespo, T.

    2007-01-01

    As part of a scientific research project on alpha radioactivity in groundwater for human consumption in Chihuahua City, a characterization of the rock sources of radioactivity around the Chihuahua valley was carried out. The radioactivity of groundwater and sediments was also determined. The radioactivity of uranium- and thorium-series isotopes contained in rocks was obtained by high resolution gamma-ray spectroscopy. Some representative values are 50 Bq/kg for the mean value of Bi-214 activity, and 121.5 Bq/kg for the highest value west of the city. The activity of sediments, extracted during well drilling, was determined using a NaI(Tl) detector. A previously unreported uranium ore body was located in the San Marcos range formation. Its outcrops lie inside the Chihuahua-Sacramento valley basin, and its activity characterization was performed. Unusually high specific uranium activities, determined by alpha spectrometry, were obtained in water, plants, sediments and fish collected at locations close to outcrops of the uranium minerals. The activity of the water of the San Marcos dam reached 7.7 Bq/L. The activity of fish trapped at the San Marcos dam is 0.99 Bq/kg. Conclusions about the contamination of groundwater north of Chihuahua City were obtained. (Author)

  20. EPA's Radioactive Source Program

    International Nuclear Information System (INIS)

    Kopsick, D.

    2004-01-01

    The US EPA is the lead Federal agency for emergency responses to unknown radiological materials not licensed, owned or operated by a Federal agency or an Agreement State (Federal Radiological Emergency Response Plan, 1996). The purpose of EPA's clean materials programme is to keep unwanted and unregulated radioactive material out of the public domain. This is achieved by finding and securing lost sources, maintaining control of existing sources and preventing future losses, on both the domestic and international fronts. The domestic program concentrates on securing lost sources and preventing future losses through alternative technologies such as tagging of radioactive sources in commerce, pilot radioactive source roundups, training programs, outreach to scrap metal and metal processing facilities and the demolition industry, product stewardship, and alternatives to radioactive devices (fewer radioactive source devices means fewer orphan sources). The international program consists of securing lost sources, preventing future losses, radiation monitoring of scrap metal at ports and the international scrap metal monitoring protocol.

  1. Application of low-dose radiation protocols in survey CT scans

    International Nuclear Information System (INIS)

    Fu Qiang; Liu Ting; Lu Tao; Xu Ke; Zhang Lin

    2009-01-01

    Objective: To characterize low-dose radiation protocols for survey CT scans used for localization. Methods: Eighty standard adult patients were recruited, and head and body phantoms were used. In the default protocols provided by the operator's manual, the tube voltage for head, chest, abdomen and lumbar scans was 120 kV, and the tube currents were 20, 10, 20 and 40 mA, respectively. Values of kV and mA in the low-dose experiments were optimized according to the device options. For chest and abdomen, tube positions of default (0 degrees) and 180 degrees were compared. Phantoms were scanned with the above protocols, and the radiation doses were measured. Paired t-tests were used for comparisons of the standard deviation in CT value, noise and exposure surface dose (ESD) between the default-protocol and optimized-protocol groups. Results: The optimized protocols in low-dose CT survey scans were 80 kV, 10 mA for head; 80 kV, 10 mA for chest; 80 kV, 10 mA for abdomen; and 100 kV, 10 mA for lumbar. The values of ESD for the phantom scans in the default and optimized protocols were 0.38 mGy/0.16 mGy for head, 0.30 mGy/0.20 mGy for chest, 0.74 mGy/0.30 mGy for abdomen and 0.81 mGy/0.44 mGy for lumbar, respectively. Compared with the default protocols, the optimized protocols reduced the radiation doses by 59%, 33%, 59% and 46% for head, chest, abdomen and lumbar. When the tube position changed from 0 degrees to 180 degrees, the ESD were 0.24 mGy/0.20 mGy for chest and 0.37 mGy/0.30 mGy for abdomen, and the radiation doses were reduced by 20% and 17%. Conclusion: Image noise increases somewhat with the low-dose protocols, but image quality remains acceptable for CT localization. The reduction in radiation dose, and thus in radiation harm to patients, is the main advantage. (authors)
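    As a quick arithmetic check, the percent reductions quoted above follow directly from the paired ESD values; a minimal sketch reproducing them (small rounding differences against the reported figures are expected):

```python
# Percent decrease in surface dose (ESD) from the default to the optimized
# survey protocol, using the phantom values quoted in the abstract.
pairs = {             # region: (default ESD, optimized ESD), in mGy
    "head":    (0.38, 0.16),
    "chest":   (0.30, 0.20),
    "abdomen": (0.74, 0.30),
    "lumbar":  (0.81, 0.44),
}
for region, (default, optimized) in pairs.items():
    reduction = 100.0 * (default - optimized) / default
    print(f"{region:8s}: {reduction:.0f}% dose reduction")
```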

  2. Fugitive emission source characterization using a gradient-based optimization scheme and scalar transport adjoint

    Science.gov (United States)

    Brereton, Carol A.; Joynes, Ian M.; Campbell, Lucy J.; Johnson, Matthew R.

    2018-05-01

    Fugitive emissions are important sources of greenhouse gases and lost product in the energy sector; they can be difficult to detect but are often easily mitigated once they are known, located, and quantified. In this paper, a scalar transport adjoint-based optimization method is presented to locate and quantify unknown emission sources from downstream measurements. This emission characterization approach correctly predicted locations to within 5 m and magnitudes to within 13% of experimental release data from Project Prairie Grass. The method was further demonstrated on simulated simultaneous releases in a complex 3-D geometry based on an Alberta gas plant. Reconstructions were performed using both the complex 3-D transient wind field used to generate the simulated release data and a sequential series of steady-state RANS wind simulations (SSWS) representing 30 s intervals of physical time. Both the detailed transient wind field and the simplified wind field series could be used to correctly locate major sources and predict their emission rates within 10%, while predicting total emission rates from all sources within 24%. The SSWS case would be much easier to implement in a real-world application, and raises the possibility of developing pre-computed databases of both wind and scalar transport adjoints to reduce computational time.
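    Stripped of the adjoint machinery, the underlying idea is to fit a forward dispersion model to downstream concentration measurements. A minimal sketch on synthetic data, substituting a crude steady-state Gaussian plume and a generic derivative-free optimizer for the paper's gradient-based adjoint scheme; the wind speed, plume-growth rate, and receptor geometry are all illustrative:

```python
# Minimal sketch: recover an unknown point source (location x0, y0 and
# strength q) by least-squares fitting a crude steady-state Gaussian plume
# to noisy downwind concentration measurements. All parameters illustrative.
import numpy as np
from scipy.optimize import minimize

U = 3.0                                   # wind speed along +x, m/s

def plume(x, y, x0, y0, q):
    """Very crude ground-level plume from a point source at (x0, y0)."""
    dx = np.maximum(x - x0, 1e-3)         # only downwind receptors see the plume
    sigma = 0.2 * dx                      # linear plume growth with distance
    return q / (2 * np.pi * U * sigma**2) * np.exp(-(y - y0) ** 2 / (2 * sigma**2))

rng = np.random.default_rng(1)
rx = rng.uniform(50, 200, 30)             # receptor positions downwind
ry = rng.uniform(-40, 40, 30)
true = (10.0, 5.0, 2.0)                   # true (x0, y0, q)
obs = plume(rx, ry, *true) * (1 + 0.05 * rng.normal(size=rx.size))

def misfit(p):
    # Relative residuals keep the objective well scaled for the optimizer
    return np.sum(((plume(rx, ry, *p) - obs) / obs) ** 2)

fit = minimize(misfit, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
print("recovered (x0, y0, q):", np.round(fit.x, 2))
```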

  3. Multi-year microbial source tracking study characterizing fecal contamination in an urban watershed

    Science.gov (United States)

    Bushon, Rebecca N.; Brady, Amie M. G.; Christensen, Eric D.; Stelzer, Erin A.

    2017-01-01

    Microbiological and hydrological data were used to rank tributary stream contributions of bacteria to the Little Blue River in Independence, Missouri. Concentrations, loadings and yields of E. coli and microbial source tracking (MST) markers were characterized during base flow and storm events in five subbasins within Independence, as well as for sources entering and leaving the city through the river. The E. coli water quality threshold was exceeded in 29% of base-flow and 89% of storm-event samples. The total contribution of E. coli and MST markers from tributaries within Independence to the Little Blue River, regardless of streamflow, did not significantly increase the median concentrations leaving the city. Daily loads and yields of E. coli and MST markers were used to rank the subbasins according to their contribution of each constituent to the river. The ranking methodology used in this study may prove useful in prioritizing remediation in the different subbasins.

  4. A measurement-based X-ray source model characterization for CT dosimetry computations.

    Science.gov (United States)

    Sommerville, Mitchell; Poirier, Yannick; Tambasco, Mauro

    2015-11-08

    The purpose of this study was to show that the nominal peak tube potential (kVp) and measured half-value layer (HVL) can be used to generate energy spectra and fluence profiles for characterizing a computed tomography (CT) X-ray source, and to validate the source model and an in-house kV X-ray dose computation algorithm (kVDoseCalc) for computing machine- and patient-specific CT dose. Spatial variation of the X-ray source spectra of a Philips Brilliance and a GE Optima Big Bore CT scanner was found by measuring the HVL along the direction of the internal bow-tie filter axes. Third-party software, Spektr, and the nominal kVp settings were used to generate the energy spectra. Beam fluence was calculated from the in-air dose measurements along the filter axis and the integral product of the spectra and the in-air NIST mass-energy attenuation coefficients. The authors found the optimal number of photons to seed in kVDoseCalc to achieve dose convergence. The Philips Brilliance beams were modeled for 90, 120, and 140 kVp tube settings; the GE Optima beams for 80, 100, 120, and 140 kVp tube settings. Relative doses measured using a Capintec Farmer-type ionization chamber (0.65 cc), placed in a cylindrical polymethyl methacrylate (PMMA) phantom and irradiated by the Philips Brilliance, were compared to those computed with kVDoseCalc. Relative doses in an anthropomorphic thorax phantom (E2E SBRT Phantom) irradiated by the GE Optima were measured using a (0.015 cc) PTW Freiburg ionization chamber and compared to computations from kVDoseCalc. Averaged over all 12 PMMA phantom positions, the statistical uncertainty in dose relative to measurement was 1.44%, 1.47%, and 1.41% for 90, 120, and 140 kVp, respectively. The maximum percent difference between calculation and measurement for all energies, measurement positions, and phantoms was less than 3.50%. Thirty-five out of a total of 36 simulation conditions were
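    A minimal sketch of the fluence normalization step described above, assuming the standard formulation in which air kerma per unit fluence at energy E is E·(μen/ρ)air(E), so a relative spectrum is scaled to match a measured in-air dose. The spectrum and coefficient values are illustrative placeholders, not Spektr output or NIST data:

```python
# Minimal sketch: scale a relative X-ray spectrum to a measured in-air dose.
# Air kerma per unit fluence at energy E is E * (mu_en/rho)_air(E), so the
# fluence scale factor is D_meas / sum(s(E) * E * mu_en_rho(E)).
# Spectrum and coefficients below are illustrative, not Spektr/NIST values.
import numpy as np

E_keV = np.array([30.0, 50.0, 70.0, 90.0])          # energy bins
s_rel = np.array([0.2, 0.4, 0.3, 0.1])              # relative spectrum
mu_en_rho = np.array([0.15, 0.041, 0.027, 0.024])   # cm^2/g, air (illustrative)

keV_to_J = 1.602e-16
# Gy*cm^2 per photon: J * cm^2/g * 1e3 g/kg = J*cm^2/kg = Gy*cm^2
kerma_per_fluence = E_keV * keV_to_J * mu_en_rho * 1e3

D_meas = 2.0e-3                                     # measured in-air dose, Gy
scale = D_meas / np.sum(s_rel * kerma_per_fluence)  # photons per cm^2
fluence = scale * s_rel
print("total fluence: %.3e photons/cm^2" % fluence.sum())
```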

  5. The self-describing data sets file protocol and Toolkit

    International Nuclear Information System (INIS)

    Borland, M.; Emery, L.

    1995-01-01

    The Self-Describing Data Sets (SDDS) file protocol continues to be used extensively in commissioning the Advanced Photon Source (APS) accelerator complex. The SDDS protocol has proved useful primarily due to the existence of the SDDS Toolkit, a growing set of about 60 generic command-line programs that read and/or write SDDS files. The SDDS Toolkit is also used extensively for simulation postprocessing, giving physicists a single environment for experiment and simulation. With the Toolkit, new SDDS data are displayed and subjected to complex processing without developing new programs. Data from EPICS, lab instruments, simulation, and other sources are easily integrated. Because the SDDS tools are command-line based, data processing scripts are readily written in the user's preferred shell language. Since users work within a UNIX shell rather than an application-specific shell or GUI, they may add SDDS-compliant programs and scripts to their personal toolkits without restriction or complication. The SDDS Toolkit has been run under UNIX on SunOS 4, HP-UX, and Linux. Application of SDDS to accelerator operation is being pursued using Tcl/Tk to provide a GUI.

  6. Direct data access protocols benchmarking on DPM

    Science.gov (United States)

    Furano, Fabrizio; Devresse, Adrien; Keeble, Oliver; Mancinelli, Valentina

    2015-12-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. One source of information is the set of continuous tests that are run against the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, recent releases of DPM can report monitoring information about any data access protocol to the same monitoring infrastructure that is used to monitor the Xrootd deployments. Our goal is to evaluate under which circumstances the HTTP-based protocols can be good enough for batch or interactive data access. In this contribution we show and discuss the results that our test systems have collected under circumstances that include ROOT analyses using TTreeCache and stress tests on the metadata performance.

  7. Biplane interventional pediatric system with cone-beam CT: dose and image quality characterization for the default protocols.

    Science.gov (United States)

    Corredoira, Eva; Vañó, Eliseo; Alejo, Luis; Ubeda, Carlos; Gutiérrez-Larraya, Federico; Garayoa, Julia

    2016-07-08

    The aim of this study was to assess image quality and radiation dose of a biplane angiographic system with cone-beam CT (CBCT) capability tuned for pediatric cardiac procedures. The results of this study can be used to explore dose reduction techniques. For pulsed fluoroscopy and cine modes, polymethyl methacrylate phantoms of various thicknesses and a Leeds TOR 18-FG test object were employed. Various fields of view (FOV) were selected. For CBCT, the study employed head and body dose phantoms, Catphan 504, and an anthropomorphic cardiology phantom. The study also compared two 3D rotational angiography protocols. The entrance surface air kerma per frame increases by a factor of 3-12 when comparing cine and fluoroscopy frames. The biggest difference in the signal-to- noise ratio between fluoroscopy and cine modes occurs at FOV 32 cm because fluoroscopy is acquired at a 1440 × 1440 pixel matrix size and in unbinned mode, whereas cine is acquired at 720 × 720 pixels and in binned mode. The high-contrast spatial resolution of cine is better than that of fluoroscopy, except for FOV 32 cm, because fluoroscopy mode with 32 cm FOV is unbinned. Acquiring CBCT series with a 16 cm head phantom using the standard dose protocol results in a threefold dose increase compared with the low-dose protocol. Although the amount of noise present in the images acquired with the low-dose protocol is much higher than that obtained with the standard mode, the images present better spatial resolution. A 1 mm diameter rod with 250 Hounsfield units can be distinguished in reconstructed images with an 8 mm slice width. Pediatric-specific protocols provide lower doses while maintaining sufficient image quality. The system offers a novel 3D imaging mode. The acquisition of CBCT images results in increased doses administered to the patients, but also provides further diagnostic information contained in the volumetric images. The assessed CBCT protocols provide images that are noisy, but with

  8. Waste Characterization Methods

    Energy Technology Data Exchange (ETDEWEB)

    Vigil-Holterman, Luciana R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Naranjo, Felicia Danielle [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-02

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream’s generation, characterization, and management; and not merely a list of information sources.

  9. Waste Characterization Methods

    International Nuclear Information System (INIS)

    Vigil-Holterman, Luciana R.; Naranjo, Felicia Danielle

    2016-01-01

    This report discusses ways to classify waste as outlined by LANL. Waste Generators must make a waste determination and characterize regulated waste by appropriate analytical testing or use of acceptable knowledge (AK). Use of AK for characterization requires several source documents. Waste characterization documentation must be accurate, sufficient, and current (i.e., updated); relevant and traceable to the waste stream's generation, characterization, and management; and not merely a list of information sources.

  10. Characterization of polar organic compounds and source analysis of fine organic aerosols in Hong Kong

    Science.gov (United States)

    Li, Yunchun

    Organic aerosols, as an important fraction of airborne particulate mass, significantly affect the environment, climate, and human health. Compared with inorganic species, the characterization of individual organic compounds is much less complete and comprehensive, because they number in the thousands or more and are diverse in chemical structure. The source contributions of organic aerosols are far from well understood because they can be emitted from a variety of sources as well as formed from photochemical reactions of numerous precursors. This thesis work aims to improve the characterization of polar organic compounds and the source apportionment analysis of fine organic carbon (OC) in Hong Kong, and consists of two parts: (1) An improved analytical method to determine monocarboxylic acids, dicarboxylic acids, ketocarboxylic acids, and dicarbonyls collected on filter substrates has been established. These oxygenated compounds were determined as their butyl ester or butyl acetal derivatives using gas chromatography-mass spectrometry. The new method improves on the original Kawamura method by eliminating the water extraction and evaporation steps. Aerosol materials were directly mixed with the BF₃/BuOH derivatization agent and the extracting solvent hexane. This modification improves recoveries for both the more volatile and the less water-soluble compounds. The improved method was applied to study the abundances and sources of these oxygenated compounds in PM2.5 aerosol samples collected in Hong Kong under different synoptic conditions during 2003-2005. These compounds account on average for 5.2% of OC (range: 1.4%-13.6%) on a carbon basis. Oxalic acid was the most abundant species. Six C2 and C3 oxygenated compounds, namely oxalic, malonic, glyoxylic and pyruvic acids, glyoxal, and methylglyoxal, dominated this suite of oxygenated compounds. More efforts are therefore suggested to focus on these small compounds in understanding the role of oxygenated

  11. Characterization and application of a laser-driven intense pulsed neutron source using Trident

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Sven C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    A team of Los Alamos researchers supported a final campaign using the Trident laser to produce neutrons, contributing their multidisciplinary expertise to experimentally assess whether laser-driven neutron sources can be useful for MaRIE, the Laboratory's proposed experimental facility for the study of matter-radiation interactions in extremes. Neutrons provide a radiographic probe that is complementary to x-rays and protons, and can address imaging challenges not amenable to those beams. The team's efforts demonstrate the Laboratory's responsiveness, flexibility, and ability to apply diverse expertise where needed to perform successful complex experiments.

  12. The Classroom Observation Protocol for Undergraduate STEM (COPUS): A New Instrument to Characterize University STEM Classroom Practices

    OpenAIRE

    Smith, Michelle K.; Jones, Francis H. M.; Gilbert, Sarah L.; Wieman, Carl E.

    2013-01-01

    Instructors and the teaching practices they employ play a critical role in improving student learning in college science, technology, engineering, and mathematics (STEM) courses. Consequently, there is increasing interest in collecting information on the range and frequency of teaching practices at department-wide and institution-wide scales. To help facilitate this process, we present a new classroom observation protocol known as the Classroom Observation Protocol for Undergraduate STEM or C...

  13. Prevalence and methodologies for detection, characterization and subtyping of Listeria monocytogenes and L. ivanovii in foods and environmental sources

    Directory of Open Access Journals (Sweden)

    Jin-Qiang Chen

    2017-09-01

    Listeria monocytogenes, one of the most important foodborne pathogens, can cause listeriosis, a lethal disease for humans. L. ivanovii, which is closely related to L. monocytogenes, is also widely distributed in nature and infects mainly warm-blooded ruminants, causing economic loss. Thus, there is a high-priority need for methodologies for rapid, specific, cost-effective and accurate detection, characterization and subtyping of L. monocytogenes and L. ivanovii in foods and environmental sources. In this review, we (A) describe L. monocytogenes and L. ivanovii, the worldwide incidence of listeriosis, and the prevalence of various L. monocytogenes strains in food and environmental sources; (B) comprehensively review different types of traditional and newly developed methodologies, including culture-based, antigen/antibody-based, loop-mediated isothermal amplification, matrix-assisted laser desorption ionization-time of flight mass spectrometry, DNA microarray, and genomic sequencing, for detection and characterization of L. monocytogenes in foods and environmental sources; (C) comprehensively summarize different subtyping methodologies, including pulsed-field gel electrophoresis, multi-locus sequence typing, ribotyping, phage-typing, and whole-genome sequencing, for subtyping of L. monocytogenes strains from food and environmental sources; and (D) describe the applications of these methodologies in detection and subtyping of L. monocytogenes in foods and food processing facilities.

  14. Characterization of noise sources in nuclear power reactors

    International Nuclear Information System (INIS)

    Andhill, Gustav

    2004-03-01

    Algorithms for unfolding noise sources in nuclear power reactors are investigated. No preliminary knowledge of the functional form of the space dependence is assumed, in contrast to the usual methods. The advantage of this is that the algorithms can be applied to various noise sources and the results can be interpreted without expert knowledge; the results can therefore be displayed directly to the plant operators. The precision will however be lower than that of the traditional methods because of the arbitrariness in the type of the noise source. Two different reactor models are studied. First a simple one-dimensional and homogeneous core is considered. Three methods for finding the noise source from the measured flux noise are investigated here. The first one is based on the inversion of an appropriate pre-calculated noise source-to-measured induced neutron noise transfer function. The second one relies on the use of the measured neutron noise as the solution of the equations giving the neutron noise induced by a given noise source. The advantage of this second method is that the noise source can be determined directly, i.e., without any inversion of a transfer function; it is thus called the direct method. The last method is based on a reconstruction of the noise source by spatial Fourier expansion. The two latter techniques are found usable for different locations of the actual noise source in the 1D core. They are therefore tried on more sophisticated two-dimensional core models. The direct method is able both to determine the nature of the noise source and its location in 2D.
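    A minimal sketch of the first unfolding method on synthetic 1D data: if the induced neutron noise is δφ = G·S for a pre-calculated source-to-noise transfer matrix G, the source is recovered by solving the linear system. The exponential kernel and relaxation length below are illustrative stand-ins for a real homogeneous-core transfer function:

```python
# Minimal sketch: unfold a 1D noise source from the induced neutron noise
# by inverting a pre-calculated transfer matrix, delta_phi = G @ S.
# The exponential kernel and relaxation length L are illustrative.
import numpy as np

n = 50
x = np.linspace(0.0, 1.0, n)                        # normalized core positions
L = 0.15                                            # illustrative relaxation length
G = np.exp(-np.abs(x[:, None] - x[None, :]) / L)    # transfer matrix G(x, x')

S_true = np.exp(-((x - 0.3) ** 2) / 0.002)          # localized noise source
delta_phi = G @ S_true                              # "measured" induced noise

S_rec = np.linalg.solve(G, delta_phi)               # unfold by direct inversion
print("max reconstruction error:", np.max(np.abs(S_rec - S_true)))
```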

  15. Characterization of noise sources in nuclear power reactors

    Energy Technology Data Exchange (ETDEWEB)

    Andhill, Gustav

    2004-03-01

    Algorithms for unfolding noise sources in nuclear power reactors are investigated. No preliminary knowledge of the functional form of the space dependence is assumed, in contrast to the usual methods. The advantage of this is that the algorithms can be applied to various noise sources and the results can be interpreted without expert knowledge; the results can therefore be displayed directly to the plant operators. The precision will however be lower than that of the traditional methods because of the arbitrariness in the type of the noise source. Two different reactor models are studied. First a simple one-dimensional and homogeneous core is considered. Three methods for finding the noise source from the measured flux noise are investigated here. The first one is based on the inversion of an appropriate pre-calculated noise source-to-measured induced neutron noise transfer function. The second one relies on the use of the measured neutron noise as the solution of the equations giving the neutron noise induced by a given noise source. The advantage of this second method is that the noise source can be determined directly, i.e., without any inversion of a transfer function; it is thus called the direct method. The last method is based on a reconstruction of the noise source by spatial Fourier expansion. The two latter techniques are found usable for different locations of the actual noise source in the 1D core. They are therefore tried on more sophisticated two-dimensional core models. The direct method is able both to determine the nature of the noise source and its location in 2D.

  16. Stochastic Characterization of Communication Network Latency for Wide Area Grid Control Applications.

    Energy Technology Data Exchange (ETDEWEB)

    Ameme, Dan Selorm Kwami [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guttromson, Ross [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-10-01

    This report characterizes communications network latency under various network topologies and qualities of service (QoS). The characterizations are probabilistic in nature, allowing deeper analysis of stability for Internet Protocol (IP) based feedback control systems used in grid applications. The work involves the use of Raspberry Pi computers as proxies for controlled resources, and an ns-3 network simulator on a Linux server, to create an experimental platform (testbed) that can be used to model wide-area grid control network communications in the smart grid. The Modbus protocol is used for information transport, and the Routing Information Protocol is used for dynamic route selection within the simulated network.
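    A probabilistic latency characterization of this kind typically reduces to an empirical distribution summarized by tail percentiles, since tail latency rather than the mean drives the stability margin of a feedback control loop. A minimal sketch on synthetic lognormal samples standing in for testbed measurements:

```python
# Minimal sketch: summarize round-trip latency samples by their empirical
# tail percentiles. Samples are synthetic (lognormal), not testbed data.
import numpy as np

rng = np.random.default_rng(42)
latency_ms = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=10_000)

for q in (50, 90, 99, 99.9):
    print(f"p{q:<5}: {np.percentile(latency_ms, q):7.2f} ms")
print(f"mean  : {latency_ms.mean():7.2f} ms   std: {latency_ms.std():.2f} ms")
```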

  17. Assessment of Laser-Driven Pulsed Neutron Sources for Poolside Neutron-based Advanced NDE – A Pathway to LANSCE-like Characterization at INL

    Energy Technology Data Exchange (ETDEWEB)

    Roth, Markus [Technische Univ. Darmstadt (Germany); Vogel, Sven C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bourke, Mark Andrew M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fernandez, Juan Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mocko, Michael Jeffrey [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Glenzer, Siegfried [Stanford Univ., CA (United States); Leemans, Wim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Siders, Craig [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Haefner, Constantin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-04-19

    A variety of opportunities for characterization of fresh nuclear fuels using thermal (~25 meV) and epithermal (~10 eV) neutrons have been documented at Los Alamos National Laboratory. They include spatially resolved non-destructive characterization of features, isotopic enrichment, chemical heterogeneity and stoichiometry. The LANSCE spallation neutron source is well suited in neutron fluence and temporal characteristics for studies of fuels. However, recent advances in high power short pulse lasers suggest that compact neutron sources might, over the next decade, become viable at a price point that would permit their consideration for poolside characterization on site at irradiation facilities. In a laser-driven neutron source, the laser is used to accelerate deuterium ions into a beryllium target where neutrons are produced. At this time the technology is new, and its total neutron production is approximately four orders of magnitude less than a facility like LANSCE. However, recent measurements on a sub-optimized system demonstrated >10¹⁰ neutrons in sub-nanosecond pulses in a predominantly forward direction. The compactness of the target system compared to a spallation target may allow exchanging the target during a measurement to, e.g., characterize a highly radioactive sample with thermal, epithermal, and fast neutrons as well as hard X-rays, thus avoiding sample handling. At this time several groups are working on laser-driven neutron production and are advancing concepts for lasers, laser targets, and optimized neutron target/moderator systems. Advances in performance sufficient to enable poolside fuels characterization with LANSCE-like fluence on sample within a decade may be possible. This report describes the underlying physics and state-of-the-art of the laser-driven neutron production process from the perspective of the DOE/NE mission. It also discusses the development and understanding that will be necessary to provide customized capability for

  18. Free-space measurement-device-independent quantum-key-distribution protocol using decoy states with orbital angular momentum

    International Nuclear Information System (INIS)

    Wang Le; Zhao Sheng-Mei; Cheng Wei-Wen; Gong Long-Yan

    2015-01-01

    In this paper, we propose a measurement-device-independent quantum-key-distribution (MDI-QKD) protocol using orbital angular momentum (OAM) in free-space links, named the OAM-MDI-QKD protocol. In the proposed protocol, the OAM states of photons, instead of polarization states, are used as the information carriers to avoid reference-frame alignment; the decoy-state method is adopted to overcome the security loophole caused by the weak coherent pulse source; and a highly efficient OAM sorter is adopted as the measurement tool for Charlie to obtain the output OAM state. Here, Charlie may be an untrusted third party. The results show that the authorized users, Alice and Bob, can distill a secret key from Charlie's successful measurements, and the key generation performance is slightly better than that of the polarization-based MDI-QKD protocol in the two-dimensional OAM case. At the same time, Alice and Bob can reduce the number of bit flips in the secure key distillation. This indicates that a higher key generation rate could be obtained with a high-dimensional OAM-MDI-QKD protocol because of the unbounded degrees of freedom of OAM states. Moreover, the results show that the key generation rate and the transmission distance decrease as the strength of atmospheric turbulence (AT) and the link attenuation grow. In addition, the decoy states used in the proposed protocol achieve considerably good performance without the need for an ideal source. (paper)

  19. Ocean fertilization, carbon credits and the Kyoto Protocol

    Science.gov (United States)

    Westley, M. B.; Gnanadesikan, A.

    2008-12-01

    Commercial interest in ocean fertilization as a carbon sequestration tool was excited by the December 1997 agreement of the Kyoto Protocol to the United Nations Convention on Climate Change. The Protocol commits industrialized countries to caps on net greenhouse gas emissions and allows for various flexible mechanisms to achieve these caps in the most economically efficient manner possible, including trade in carbon credits from projects that reduce emissions or enhance sinks. The carbon market was valued at $64 billion in 2007, with the bulk of the trading ($50 billion) taking place in the highly regulated European Union Emission Trading Scheme, which deals primarily in emission allowances in the energy sector. A much smaller amount, worth $265 million, was traded in the largely unregulated "voluntary" market (Capoor and Ambrosi 2008). As the voluntary market grows, so do calls for its regulation, with several efforts underway to set rules and standards for the sale of voluntary carbon credits using the Kyoto Protocol as a starting point. Four US-based companies and an Australian company currently seek to develop ocean fertilization technologies for the generation of carbon credits. We review these plans through the lens of the Kyoto Protocol and its flexible mechanisms, and examine whether and how ocean fertilization could generate tradable carbon credits. We note that at present, ocean sinks are not included in the Kyoto Protocol, and that furthermore, the Kyoto Protocol only addresses sources and sinks of greenhouse gases within national boundaries, making open-ocean fertilization projects a jurisdictional challenge. We discuss the negotiating history behind the limited inclusion of land use, land use change and forestry in the Kyoto Protocol and the controversy and eventual compromise concerning methodologies for terrestrial carbon accounting. We conclude that current technologies for measuring and monitoring carbon sequestration following ocean fertilization

  20. Human endothelial colony-forming cells expanded with an improved protocol are a useful endothelial cell source for scaffold-based tissue engineering.

    Science.gov (United States)

    Denecke, Bernd; Horsch, Liska D; Radtke, Stefan; Fischer, Johannes C; Horn, Peter A; Giebel, Bernd

    2015-11-01

    One of the major challenges in tissue engineering is to supply larger three-dimensional (3D) bioengineered tissue transplants with sufficient amounts of nutrients and oxygen and to allow metabolite removal. Consequently, artificial vascularization strategies of such transplants are desired. One strategy focuses on endothelial cells capable of initiating new vessel formation, which are settled on scaffolds commonly used in tissue engineering. A bottleneck in this strategy is to obtain sufficient amounts of endothelial cells, as they can be harvested only in small quantities directly from human tissues. Thus, protocols are required to expand appropriate cells in sufficient amounts without interfering with their capability to settle on scaffold materials and to initiate vessel formation. Here, we analysed whether umbilical cord blood (CB)-derived endothelial colony-forming cells (ECFCs) fulfil these requirements. In a first set of experiments, we showed that marginally expanded ECFCs settle and survive on different scaffold biomaterials. Next, we improved ECFC culture conditions and developed a protocol for ECFC expansion compatible with 'Good Manufacturing Practice' (GMP) standards. We replaced animal sera with human platelet lysates and used a novel type of tissue-culture ware. ECFCs cultured under the new conditions revealed significantly lower apoptosis and increased proliferation rates. Simultaneously, their viability was increased. Since extensively expanded ECFCs could still settle on scaffold biomaterials and were able to form tubular structures in Matrigel assays, we conclude that these ex vivo-expanded ECFCs are a novel, very potent cell source for scaffold-based tissue engineering. Copyright © 2013 John Wiley & Sons, Ltd.

  1. Pulmonary CT angiography protocol adapted to the hemodynamic effects of pregnancy.

    LENUS (Irish Health Repository)

    Ridge, Carole A

    2012-02-01

    OBJECTIVE: The purpose of this study was to compare the image quality of a standard pulmonary CT angiography (CTA) protocol with a pulmonary CTA protocol optimized for use in pregnant patients with suspected pulmonary embolism (PE). MATERIALS AND METHODS: Forty-five consecutive pregnant patients with suspected PE were retrospectively included in the study: 25 patients (group A) underwent standard-protocol pulmonary CTA and 20 patients (group B) were imaged using a protocol modified for pregnancy. The modified protocol used a shallow inspiration breath-hold and a high concentration, high rate of injection, and high volume of contrast material. Objective image quality and subjective image quality were evaluated by measuring pulmonary arterial enhancement, determining whether there was transient interruption of the contrast bolus by unopacified blood from the inferior vena cava (IVC), and assessing diagnostic adequacy. RESULTS: Objective and subjective image quality were significantly better for group B, the group that underwent the CTA protocol optimized for pregnancy. Mean pulmonary arterial enhancement and the percentage of studies characterized as adequate for diagnosis were higher in group B than in group A: 321 ± 148 HU (SD) versus 178 ± 67 HU (p = 0.0001) and 90% versus 64% (p = 0.05), respectively. Transient interruption of contrast material by unopacified blood from the IVC was observed more frequently in group A (39%) than in group B (10%) (p = 0.05). CONCLUSION: A pulmonary CTA protocol optimized for pregnancy significantly improved image quality by increasing pulmonary arterial opacification, improving diagnostic adequacy, and decreasing transient interruption of the contrast bolus by unopacified blood from the IVC.

  2. Implications for global energy markets: implications for non-fossil energy sources

    International Nuclear Information System (INIS)

    Grubb, Michael

    1998-01-01

    This paper highlights recent developments concerning non-fossil energy and examines the impact of the Kyoto Protocol on non-fossil energy sources, and the implications for non-fossil sources in the implementation of the Kyoto Protocol. The current contributions of fossil and non-fossil fuels to electricity production, prospects for expansion of the established non-fossil sources, new renewables in Europe to date, renewables in Europe to 2010, and policy integration in the EU are discussed. Charts illustrating the generating capacity of renewable energy plant in Britain (1992-1996), wind energy capacity in Europe (1990-2000), and projected renewable energy contributions in the EU (wind, small hydro, photovoltaic, biomass and geothermal) are provided. (UK)

  3. Increased Arctic Deposition of Persistent Compounds as a Result of the Montreal Protocol

    Science.gov (United States)

    Young, C.; Pickard, H. M.; De Silva, A. O.; Spencer, C.; Criscitiello, A. S.; Muir, D.; Sharp, M. J.

    2017-12-01

    Perfluorocarboxylic acids (PFCAs) are among the diverse groups of compounds characterized as persistent organic pollutants. They are toxic, resistant to environmental degradation, and adversely impact human and environmental health. PFCAs with four or fewer carbons, short-chain PFCAs (scPFCAs), are of particular interest because of their increasing levels in the environment, toxicity to plants, and potential for accumulation in some aquatic ecosystems, making them an emerging environmental concern. A minor source of scPFCAs to the Arctic has been shown to be atmospheric transformation of fluoropolymer precursors, followed by deposition. Additional potential sources of scPFCAs to the Arctic are chlorofluorocarbon (CFC)-replacement compounds. Through analysis of an ice core from the Canadian High Arctic, we show that Montreal Protocol-mandated introduction of CFC-replacement compounds for the heat-transfer industry has led to increasing inputs of these scPFCAs to the remote environment. Flux measurements for scPFCAs as a class of contaminants have only been reported in a couple studies to date. Here, we provide the first multi-decadal temporal record of scPFCA deposition, demonstrating a dramatic increase in deposition resulting from emission of CFC-replacements. These results bring to the forefront a need for a holistic approach to environmental risk assessment that considers impacts of replacement substances and degradation products.

  4. Biplane interventional pediatric system with cone‐beam CT: dose and image quality characterization for the default protocols

    Science.gov (United States)

    Vañó, Eliseo; Alejo, Luis; Ubeda, Carlos; Gutiérrez‐Larraya, Federico; Garayoa, Julia

    2016-01-01

    The aim of this study was to assess image quality and radiation dose of a biplane angiographic system with cone‐beam CT (CBCT) capability tuned for pediatric cardiac procedures. The results of this study can be used to explore dose reduction techniques. For pulsed fluoroscopy and cine modes, polymethyl methacrylate phantoms of various thicknesses and a Leeds TOR 18‐FG test object were employed. Various fields of view (FOV) were selected. For CBCT, the study employed head and body dose phantoms, Catphan 504, and an anthropomorphic cardiology phantom. The study also compared two 3D rotational angiography protocols. The entrance surface air kerma per frame increases by a factor of 3–12 when comparing cine and fluoroscopy frames. The biggest difference in the signal‐to‐noise ratio between fluoroscopy and cine modes occurs at FOV 32 cm because fluoroscopy is acquired at a 1440×1440 pixel matrix size and in unbinned mode, whereas cine is acquired at 720×720 pixels and in binned mode. The high‐contrast spatial resolution of cine is better than that of fluoroscopy, except for FOV 32 cm, because fluoroscopy mode with 32 cm FOV is unbinned. Acquiring CBCT series with a 16 cm head phantom using the standard dose protocol results in a threefold dose increase compared with the low‐dose protocol. Although the amount of noise present in the images acquired with the low‐dose protocol is much higher than that obtained with the standard mode, the images present better spatial resolution. A 1 mm diameter rod with 250 Hounsfield units can be distinguished in reconstructed images with an 8 mm slice width. Pediatric‐specific protocols provide lower doses while maintaining sufficient image quality. The system offers a novel 3D imaging mode. The acquisition of CBCT images results in increased doses administered to the patients, but also provides further diagnostic information contained in the volumetric images. The assessed CBCT protocols provide images that are noisy

  5. Dosimetric characterization of two radium sources for retrospective dosimetry studies

    Energy Technology Data Exchange (ETDEWEB)

    Candela-Juan, C., E-mail: ccanjuan@gmail.com [Radiation Oncology Department, La Fe University and Polytechnic Hospital, Valencia 46026, Spain and Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Karlsson, M. [Division of Radiological Sciences, Department of Medical and Health Sciences, Linköping University, Linköping SE 581 85 (Sweden); Lundell, M. [Department of Medical Physics and Oncology, Karolinska University Hospital and Karolinska Institute, Stockholm SE 171 76 (Sweden); Ballester, F. [Department of Atomic, Molecular and Nuclear Physics, University of Valencia, Burjassot 46100 (Spain); Tedgren, Å. Carlsson [Division of Radiological Sciences, Department of Medical and Health Sciences, Linköping University, Linköping SE 581 85, Sweden and Swedish Radiation Safety Authority, Stockholm SE 171 16 (Sweden)

    2015-05-15

    Purpose: During the first part of the 20th century, 226Ra was the most used radionuclide for brachytherapy. Retrospective accurate dosimetry, coupled with patient follow-up, is important for advancing knowledge on long-term radiation effects. The purpose of this work was to dosimetrically characterize two 226Ra sources, commonly used in Sweden during the first half of the 20th century, for retrospective dose–effect studies. Methods: An 8 mg 226Ra tube and a 10 mg 226Ra needle, used at Radiumhemmet (Karolinska University Hospital, Stockholm, Sweden) from 1925 to the 1960s, were modeled in two independent Monte Carlo (MC) radiation transport codes: GEANT4 and MCNP5. Absorbed dose and collision kerma around the two sources were obtained, from which the TG-43 parameters were derived for the secular equilibrium state. Furthermore, results from this dosimetric formalism were compared with results from a MC simulation with a superficial mould constituted by five needles inside a glass casing, placed over a water phantom, trying to mimic a typical clinical setup. Calculated absorbed doses using the TG-43 formalism were also compared with previously reported measurements and calculations based on the Sievert integral. Finally, the dose rate at large distances from a 226Ra point-like source placed in the center of a 1 m radius water sphere was calculated with GEANT4. Results: TG-43 parameters [including g_L(r), F(r,θ), Λ, and s_K] have been uploaded in spreadsheets as additional material, and the fitting parameters of a mathematical curve that provides the dose rate between 10 and 60 cm from the source have been provided. Results from the TG-43 formalism are consistent within the treatment volume with those of a MC simulation of a typical clinical scenario. Comparisons with reported measurements made with thermoluminescent dosimeters show differences up to 13% along the transverse axis of the radium needle. It has been estimated that
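
    For reference, the parameters tabulated by the authors are the ingredients of the standard AAPM TG-43 dose-rate equation, quoted here in its general line-source form (the consensus formalism, not a result specific to these 226Ra sources):

    ```latex
    \dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\, F(r,\theta)
    ```

    where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, F(r,θ) the 2D anisotropy function, and (r_0 = 1 cm, θ_0 = π/2) the reference point.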

  6. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCES: Protocols and journal publications of published randomised parallel group trials initially approved...... in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between...... of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials...

  7. Characterization of a pulsed x-ray source for fluorescent lifetime measurements

    International Nuclear Information System (INIS)

    Blankespoor, S.C.; Derenzo, S.E.; Moses, W.W.; Rossington, C.S.; Ito, M.; Oba, K.

    1994-01-01

    To search for new, fast, inorganic scintillators, the authors have developed a bench-top pulsed x-ray source for determining fluorescent lifetimes and wavelengths of compounds in crystal or powdered form. This source uses a light-excited x-ray tube which produces x-rays when light from a laser diode strikes its photocathode. The x-ray tube has a tungsten anode, a beryllium exit window, a 30 kV maximum tube bias, and a 50 μA maximum average cathode current. The laser produces 3 × 10^7 photons at 650 nm per ∼100 ps pulse, with up to 10^7 pulses/sec. The time spread for the laser diode, x-ray tube, and a microchannel plate photomultiplier tube is less than 120 ps FWHM. The mean x-ray energy at tube biases of 20, 25, and 30 kV is 9.4, 10.3, and 11.1 keV, respectively. The authors measured 140, 230, and 330 x-ray photons per laser diode pulse per steradian, at tube biases of 20, 25, and 30 kV, respectively. Background x-rays due to dark current occur at a rate of 1 × 10^6 and 3 × 10^6 photons/sec/steradian at biases of 25 and 30 kV, respectively. Data characterizing the x-ray output with an aluminum filter in the x-ray beam are also presented

  8. Multi-slice and dual-source CT in cardiac imaging. Principles - protocols - indications - outlook. 2. ed.

    International Nuclear Information System (INIS)

    Ohnesorge, B.M.; Flohr, T.G.; Becker, C.R.; Reiser, M.F.; Knez, A.

    2007-01-01

    Cardiac diseases, and in particular coronary artery disease, are the leading cause of death and morbidity in industrialized countries. The development of non-invasive imaging techniques for the heart and the coronary arteries has been considered a key element in improving patient care. A breakthrough in cardiac imaging using CT occurred in 1998, with the introduction of multi-slice computed tomography (CT). Since then, amazing advances in performance have taken place with scanners that acquire up to 64 slices per rotation. This book discusses the state-of-the-art developments in multi-slice CT for cardiac imaging as well as those that can be anticipated in the future. It serves as a comprehensive work that covers all aspects of this technology, from the technical fundamentals and image evaluation all the way to clinical indications and protocol recommendations. This fully reworked second edition draws on the most recent clinical experience obtained with 16- and 64-slice CT scanners by world-leading experts from Europe and the United States. It also includes "hands-on" experience in the form of 10 representative clinical case studies, which are included on the accompanying CD. As a further highlight, the latest results of the very recently introduced dual-source CT, which may soon represent the CT technology of choice for cardiac applications, are presented. This book will not only convince the reader that multi-slice cardiac CT has arrived in clinical practice, it will also make a significant contribution to the education of radiologists, cardiologists, technologists, and physicists, whether newcomers, experienced users, or researchers. (orig.)

  9. Experimental determination of dosimetric characterization of a newly designed encapsulated interstitial brachytherapy source of 103Pd-model Pd-1

    International Nuclear Information System (INIS)

    Nath, Ravinder; Yue Ning; Roa, Eduardo

    2002-01-01

    A newly designed encapsulated 103Pd source has been introduced (BrachySeed™ Pd-103, also named Model Pd-1, manufactured by DRAXIMAGE Inc. and distributed by Cytogen Corp.) for interstitial brachytherapy to provide more isotropic dose distributions. In this work, the dosimetric characteristics of the 103Pd source were measured with micro LiF TLD chips and dosimetry parameters were characterized based upon the American Association of Physicists in Medicine (AAPM) Task Group No. 43 formalism. The dose rate constant of the sources was determined to be 0.66±0.05 cGy h^-1 U^-1. The radial dose function was measured and was found to be similar to that of the Theragenics Model 200 103Pd source. The anisotropy constant for the Model Pd-1 source was determined to be 1.03.

  10. Active control on high-order coherence and statistic characterization on random phase fluctuation of two classical point sources.

    Science.gov (United States)

    Hong, Peilong; Li, Liming; Liu, Jianji; Zhang, Guoquan

    2016-03-29

    Young's double-slit or two-beam interference is of fundamental importance to understand various interference effects, in which the stationary phase difference between two beams plays the key role in the first-order coherence. Different from the case of first-order coherence, in high-order optical coherence the statistical behavior of the optical phase plays the key role. In this article, by employing a fundamental interfering configuration with two classical point sources, we showed that the high-order optical coherence between two classical point sources can be actively designed by controlling the statistical behavior of the relative phase difference between the two point sources. Synchronous position Nth-order subwavelength interference with an effective wavelength of λ/M was demonstrated, in which λ is the wavelength of the point sources and M is an integer not larger than N. Interestingly, we found that the synchronous position Nth-order interference fringe fingerprints the statistical trace of the random phase fluctuation of the two classical point sources; therefore, it provides an effective way to characterize the statistical properties of phase fluctuation for incoherent light sources.
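
    A schematic way to see where the λ/M effective wavelength comes from (a textbook-style sketch under simplifying assumptions, not the authors' derivation): for two point sources separated by d with instantaneous relative phase φ(t), the first-order pattern and the harmonics surviving in the Nth-order correlation go as

    ```latex
    I(\theta) \propto 1 + \cos\!\big(kd\sin\theta + \varphi(t)\big), \quad k = \tfrac{2\pi}{\lambda};
    \qquad
    \big\langle \cos\!\big[M\big(kd\sin\theta + \varphi(t)\big)\big] \big\rangle, \quad M \le N .
    ```

    When φ(t) is random, the first-order fringes average to zero; but if the phase statistics are engineered so that an Mth harmonic survives the average in the Nth-order correlation, the remaining fringe oscillates M times faster in kd sinθ, i.e. with an effective wavelength λ/M.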

  11. Psychometric Properties of a Standardized Observation Protocol to Quantify Pediatric Physical Therapy Actions

    NARCIS (Netherlands)

    Sonderer, Patrizia; Ziegler, Schirin Akhbari; Oertle, Barbara Gressbach; Meichtry, Andre; Hadders-Algra, Mijna

    Purpose: Pediatric physical therapy (PPT) is characterized by heterogeneity. This blurs the evaluation of effective components of PPT. The Groningen Observation Protocol (GOP) was developed to quantify contents of PPT. This study assesses the reliability and completeness of the GOP. Methods: Sixty

  12. Excimer laser: a module of the alopecia areata common protocol.

    Science.gov (United States)

    McMichael, Amy J

    2013-12-01

    Alopecia areata (AA) is an autoimmune condition characterized by T cell-mediated attack of the hair follicle. The inciting antigenic stimulus is unknown. A dense peribulbar lymphocytic infiltrate and reproducible immunologic abnormalities are hallmark features of the condition. The cellular infiltrate primarily consists of activated T lymphocytes and antigen-presenting Langerhans cells. The xenon chloride excimer laser emits its total energy at the wavelength of 308 nm and therefore is regarded as a "super-narrowband" UVB light source. Excimer laser treatment is highly effective in psoriasis, another T cell-mediated disorder that shares many immunologic features with AA. The excimer laser is superior in inducing T cell apoptosis in vitro compared with narrowband UVB, with paralleled improved clinical efficacy. The excimer laser has been used successfully in patients with AA. In this context, evaluation of the potential benefit of 308-nm excimer laser therapy in the treatment of AA is clinically warranted. Herein, the use of a common treatment protocol with a specifically designed module to study the outcome of excimer laser treatment on moderate-to-severe scalp AA in adults is described.

  13. Software Modules for the Proximity-1 Space Link Interleaved Time Synchronization (PITS) Protocol

    Science.gov (United States)

    Woo, Simon S.; Veregge, John R.; Gao, Jay L.; Clare, Loren P.; Mills, David

    2012-01-01

    The Proximity-1 Space Link Interleaved Time Synchronization (PITS) protocol provides time distribution and synchronization services for space systems. A software prototype implementation of the PITS algorithm has been developed that also provides the test harness to evaluate the key functionalities of PITS with a simulated data source and sink. PITS integrates time synchronization functionality into the link layer of the CCSDS Proximity-1 Space Link Protocol. The software prototype implements the network packet format, data structures, and transmit- and receive-timestamp functions for a time server and a client. The software also simulates the transmit- and receive-timestamp exchanges via a UDP (User Datagram Protocol) socket between a time server and a time client, and produces relative time offsets and delay estimates.
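
    The offset/delay estimation the prototype produces can be illustrated with an NTP-style four-timestamp exchange over a UDP socket. The sketch below is minimal and uses a made-up packet layout and port; it does not reproduce the actual PITS frame format or CCSDS specifics:

    ```python
    import socket
    import struct
    import time

    PORT = 5005  # hypothetical port for this demo

    def run_server():
        """Time server: reply with receive and transmit timestamps (blocks forever)."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", PORT))
        while True:
            data, addr = sock.recvfrom(64)
            t2 = time.time()                    # server receive timestamp
            (t1,) = struct.unpack("!d", data)   # client transmit timestamp
            t3 = time.time()                    # server transmit timestamp
            sock.sendto(struct.pack("!ddd", t1, t2, t3), addr)

    def client_exchange():
        """Time client: one exchange yields offset and round-trip delay estimates."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        t1 = time.time()                        # client transmit timestamp
        sock.sendto(struct.pack("!d", t1), ("127.0.0.1", PORT))
        data, _ = sock.recvfrom(64)
        t4 = time.time()                        # client receive timestamp
        t1, t2, t3 = struct.unpack("!ddd", data)
        offset = ((t2 - t1) + (t3 - t4)) / 2    # relative clock offset estimate
        delay = (t4 - t1) - (t3 - t2)           # round-trip delay estimate
        return offset, delay
    ```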

  14. A Comparison Between Inter-Asterisk eXchange Protocol and Jingle Protocol: Session Time

    Directory of Open Access Journals (Sweden)

    H. S. Haj Aliwi

    2016-08-01

    Over the last few years, many multimedia conferencing and Voice over Internet Protocol (VoIP) applications have been developed due to the use of signaling protocols in providing video, audio and text chatting services between at least two participants. This paper compares two widely used signaling protocols: the Inter-Asterisk eXchange Protocol (IAX) and the extension of the eXtensible Messaging and Presence Protocol (Jingle) in terms of delay time during call setup, call teardown, and media sessions.

  15. Direct data access protocols benchmarking on DPM

    CERN Document Server

    Furano, Fabrizio; Keeble, Oliver; Mancinelli, Valentina

    2015-01-01

    The Disk Pool Manager is an example of a multi-protocol, multi-VO system for data access on the Grid that went through a considerable technical evolution in recent years. Among other features, its architecture offers the opportunity of testing its different data access frontends under exactly the same conditions, including hardware and backend software. This characteristic inspired the idea of collecting monitoring information from various testbeds in order to benchmark the behaviour of the HTTP and Xrootd protocols for the use case of data analysis, batch or interactive. A source of information is the set of continuous tests that are run towards the worldwide endpoints belonging to the DPM Collaboration, which accumulated relevant statistics in its first year of activity. On top of that, the DPM releases are based on multiple levels of automated testing that include performance benchmarks of various kinds, executed regularly every day. At the same time, the recent releases of DPM can report monitoring infor...

  16. Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool

    Science.gov (United States)

    Marco Figuera, R.; Pham Huu, B.; Rossi, A. P.; Minin, M.; Flahaut, J.; Halder, A.

    2018-01-01

    The lack of open-source tools for hyperspectral data visualization and analysis creates a demand for new tools. In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python Application Programming Interface (API) capable of visualizing and analyzing a wide variety of hyperspectral data from different planetary bodies. Current open-source WebGIS tools are evaluated in order to give an overview and to contextualize how PlanetServer can help in these matters. The web client is thoroughly described, as are the datasets available in PlanetServer. The Python API is also described, along with the reasons for its development. Two examples of mineral characterization of different hydrosilicates such as chlorites, prehnites and kaolinites in the Nili Fossae area on Mars are presented. As the results show positive outcomes for hyperspectral analysis and visualization compared to previous literature, we suggest using the PlanetServer approach for such investigations.

  17. A Novel Inspection Protocol to Detect Volatile Compounds in Breast Surgery Electrocautery Smoke

    Directory of Open Access Journals (Sweden)

    Yu-Wen Lin

    2010-07-01

    Conclusion: The sampling protocol enabled acquisition of smoke samples near the source without interrupting surgery. The findings suggest that type of surgery, patient body mass index and duration of electrocautery are factors that can alter production of chemicals.

  18. Antioxidants: Characterization, natural sources, extraction and analysis

    OpenAIRE

    OROIAN, MIRCEA; Escriche Roberto, Mª Isabel

    2015-01-01

    [EN] Recently, many review papers regarding antioxidants from different sources and different extraction and quantification procedures have been published. However, none of them has all the information regarding antioxidants (chemistry, sources, extraction and quantification). This article tries to take a different perspective on antioxidants for the new researcher involved in this field. Antioxidants from fruit, vegetables and beverages play an important role in human health, fo...

  19. The Virtual Insect Brain protocol: creating and comparing standardized neuroanatomy

    Science.gov (United States)

    Jenett, Arnim; Schindelin, Johannes E; Heisenberg, Martin

    2006-01-01

    Background In the fly Drosophila melanogaster, new genetic, physiological, molecular and behavioral techniques for the functional analysis of the brain are rapidly accumulating. These diverse investigations on the function of the insect brain use gene expression patterns that can be visualized and provide the means for manipulating groups of neurons as a common ground. To take advantage of these patterns one needs to know their typical anatomy. Results This paper describes the Virtual Insect Brain (VIB) protocol, a script suite for the quantitative assessment, comparison, and presentation of neuroanatomical data. It is based on the 3D-reconstruction and visualization software Amira, version 3.x (Mercury Inc.) [1]. Besides its backbone, a standardization procedure that aligns individual 3D images (series of virtual sections obtained by confocal microscopy) to a common coordinate system and computes average intensities for each voxel (volume pixel), the VIB protocol provides an elaborate data management system for data administration. The VIB protocol facilitates direct comparison of gene expression patterns and describes their interindividual variability. It provides volumetry of brain regions and helps to characterize the phenotypes of brain structure mutants. Using the VIB protocol does not require any programming skills since all operations are carried out at an intuitively usable graphical user interface. Although the VIB protocol has been developed for the standardization of Drosophila neuroanatomy, the program structure can be used for the standardization of other 3D structures as well. Conclusion Standardizing brains and gene expression patterns is a new approach to biological shape and its variability. The VIB protocol provides a first set of tools supporting this endeavor in Drosophila. The script suite is freely available at [2] PMID:17196102
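
    The voxel-averaging step at the heart of the standardization can be sketched in a few lines. This is a minimal NumPy illustration assuming the stacks have already been registered to the common coordinate system, which is the hard part the Amira-based VIB scripts actually handle:

    ```python
    import numpy as np

    def average_brain(aligned_stacks):
        """Average intensities per voxel across registered 3D image stacks.

        aligned_stacks: list of 3D numpy arrays of identical shape (z, y, x),
        already transformed into the common coordinate system.
        """
        volume = np.stack(aligned_stacks, axis=0).astype(np.float64)
        mean = volume.mean(axis=0)   # the "standard" (average) brain
        std = volume.std(axis=0)     # interindividual variability per voxel
        return mean, std

    # Usage with synthetic data standing in for confocal stacks:
    stacks = [np.random.rand(50, 256, 256) for _ in range(5)]
    avg, variability = average_brain(stacks)
    ```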

  20. Community Assessment of Natural Food Sources of Vitamin A ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Community Assessment of Natural Food Sources of Vitamin A: Guidelines for an Ethnographic Protocol. Author(s): L. Blum, P.J. Pelto, G.H. Pelto, and H.V. Kuhnlein. Publisher(s): INFDC, IDRC. 1 January 1997.

  1. Community Assessment of Natural Food Sources of Vitamin A ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    1 Jan. 1997 ... Community Assessment of Natural Food Sources of Vitamin A: Guidelines for an Ethnographic Protocol. Author(s): L. Blum, P.J. Pelto, G.H. Pelto and H.V. Kuhnlein. Publisher(s): INFDC, CRDI.

  2. Generating regionalized neuronal cells from pluripotency, a step-by-step protocol

    Directory of Open Access Journals (Sweden)

    Agnete Kirkeby

    2013-01-01

    Human pluripotent stem cells possess the potential to generate cells for regenerative therapies in patients with neurodegenerative diseases, and constitute an excellent cell source for studying human neural development and disease modeling. Protocols for neural differentiation of human pluripotent stem cells have undergone significant progress during recent years, allowing for rapid and synchronized neural conversion. Differentiation procedures can further be combined with accurate and efficient positional patterning to yield regionalized neural progenitors and subtype-specific neurons corresponding to different parts of the developing human brain. Here, we present a step-by-step protocol for neuralization and regionalization of human pluripotent cells for transplantation studies or in vitro analysis.

  3. Privacy-Preserving Data Aggregation Protocols for Wireless Sensor Networks: A Survey

    Directory of Open Access Journals (Sweden)

    Rabindra Bista

    2010-05-01

    Many wireless sensor network (WSN) applications require privacy-preserving aggregation of sensor data during transmission from the source nodes to the sink node. In this paper, we explore several existing privacy-preserving data aggregation (PPDA) protocols for WSNs in order to provide some insights on their current status. For this, we evaluate the PPDA protocols on the basis of such metrics as communication and computation costs in order to demonstrate their potential for supporting privacy-preserving data aggregation in WSNs. In addition, based on the existing research, we enumerate some important future research directions in the field of privacy-preserving data aggregation for WSNs.

  4. Privacy-preserving data aggregation protocols for wireless sensor networks: a survey.

    Science.gov (United States)

    Bista, Rabindra; Chang, Jae-Woo

    2010-01-01

    Many wireless sensor network (WSN) applications require privacy-preserving aggregation of sensor data during transmission from the source nodes to the sink node. In this paper, we explore several existing privacy-preserving data aggregation (PPDA) protocols for WSNs in order to provide some insights on their current status. For this, we evaluate the PPDA protocols on the basis of such metrics as communication and computation costs in order to demonstrate their potential for supporting privacy-preserving data aggregation in WSNs. In addition, based on the existing research, we enumerate some important future research directions in the field of privacy-preserving data aggregation for WSNs.

  5. Hydrofluorocarbon (HFC) Scenarios, Climate Effects and the Montreal Protocol

    Science.gov (United States)

    Velders, G. J. M.; Fahey, D. W.; Daniel, J. S.

    2016-12-01

    The Montreal Protocol has reduced the use of ozone-depleting substances by more than 95% from its peak levels in the 1980s. As a direct result, the use of hydrofluorocarbons (HFCs) as substitute compounds has increased significantly. National regulations to limit HFC use have been adopted recently in the European Union, Japan and USA, and four proposals have been submitted to amend the Montreal Protocol to substantially reduce growth in HFC use. The Parties of the Montreal Protocol have discussed these proposals during their meetings in 2016. The effects of the national regulations and Montreal Protocol amendment proposals on climate forcings and surface temperatures will be presented. Global scenarios of HFC emissions reach 4.0-5.3 GtCO2-eq yr^-1 in 2050, which corresponds to a projected growth from 2015 to 2050 that is 9% to 29% of that for CO2 over the same time period. In 2050, in percent of global HFC emissions, China (~30%), India and the rest of Asia (~25%), Middle East and northern Africa (~10%), and USA (~10%) are the principal source regions; and refrigeration and stationary air conditioning are the major use sectors. Calculated baseline emissions are reduced by 90% in 2050 by implementing the North America Montreal Protocol amendment proposal. This corresponds to a reduction in surface temperature attributed to HFCs from 0.1 °C to 0.04 °C in 2050 and from 0.3-0.4 °C to 0.02 °C in 2100.

  6. Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Conover, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Crawford, Alasdair J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fuller, Jason [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gourisetti, Sri Nikhil [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Viswanathan, Vilayanur [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ferreira, Summer Rhodes [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schoenwald, David A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-04-01

    This Protocol provides a set of “best practices” for characterizing energy storage systems (ESSs) and measuring and reporting their performance. It serves as a basis for assessing how an ESS will perform with respect to key performance attributes relevant to different applications. It is intended to provide a valid and accurate basis for the comparison of different ESSs. By achieving the stated purpose, the Protocol will enable more informed decision-making in the selection of ESSs for various stationary applications. The Protocol identifies general information and technical specifications relevant in describing an ESS and also defines a set of test, measurement, and evaluation criteria with which to express the performance of ESSs that are intended for energy-intensive and/or power-intensive stationary applications. An ESS includes a storage device, battery management system, and any power conversion systems installed with the storage device. The Protocol is agnostic with respect to the storage technology and the size and rating of the ESS. The Protocol does not apply to single-use storage devices and storage devices that are not coupled with power conversion systems, nor does it address safety, security, or operations and maintenance of ESSs, or provide any pass/fail criteria.

  7. Characterization of sildenafil citrate tablets of different sources by near infrared chemical imaging and chemometric tools.

    Science.gov (United States)

    Sabin, Guilherme P; Lozano, Valeria A; Rocha, Werickson F C; Romão, Wanderson; Ortiz, Rafael S; Poppi, Ronei J

    2013-11-01

    The chemical imaging technique by near infrared spectroscopy was applied for the characterization of formulations in tablets of sildenafil citrate from six different sources. Five formulations were provided by the Brazilian Federal Police and correspond to several trademarks of prohibited marketing, and one was an authentic sample of Viagra. In the first step of the study, multivariate curve resolution was chosen for the estimation of the distribution map of concentration of the active ingredient in tablets of different sources, where the chemical composition of all excipient constituents was not truly known. In such cases, it is very difficult to establish an appropriate calibration technique so that only the information on sildenafil is considered independently of the excipients. This determination was possible only by exploiting the second-order advantage, whereby the analyte quantification can be performed in the presence of unknown interferences. In the second step, the normalized histograms of the images from the active ingredient were grouped according to their similarities by hierarchical cluster analysis. Finally, it was possible to recognize the patterns of the distribution maps of concentration of sildenafil citrate, distinguishing the true formulation of Viagra. This concept can be used to improve the knowledge of industrial products and processes, as well as for the characterization of counterfeit drugs. Copyright © 2013. Published by Elsevier B.V.
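
    The final grouping step, hierarchical cluster analysis of the normalized histograms, can be sketched as follows. This is illustrative only: random arrays stand in for the per-tablet histograms of the MCR-resolved sildenafil distribution maps:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical normalized histograms (rows: tablets from six sources,
    # columns: concentration bins of the sildenafil distribution map)
    rng = np.random.default_rng(0)
    histograms = rng.random((6, 32))
    histograms /= histograms.sum(axis=1, keepdims=True)  # normalize each histogram

    # Agglomerative clustering (Ward linkage on Euclidean distances)
    Z = linkage(histograms, method="ward")
    labels = fcluster(Z, t=2, criterion="maxclust")  # e.g., authentic vs counterfeit
    print(labels)
    ```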

  8. Rapid Source Characterization of the 2011 Mw 9.0 off the Pacific coast of Tohoku Earthquake

    Science.gov (United States)

    Hayes, Gavin P.

    2011-01-01

    On March 11th, 2011, a moment magnitude 9.0 earthquake struck off the coast of northeast Honshu, Japan, generating what may well turn out to be the most costly natural disaster ever. In the hours following the event, the U.S. Geological Survey National Earthquake Information Center led a rapid response to characterize the earthquake in terms of its location, size, faulting source, shaking and slip distributions, and population exposure, in order to place the disaster in a framework necessary for timely humanitarian response. As part of this effort, fast finite-fault inversions using globally distributed body- and surface-wave data were used to estimate the slip distribution of the earthquake rupture. Models generated within 7 hours of the earthquake origin time indicated that the event ruptured a fault up to 300 km long, roughly centered on the earthquake hypocenter, and involved peak slips of 20 m or more. Updates since this preliminary solution improve the details of this inversion solution and thus our understanding of the rupture process. However, significant observations such as the up-dip nature of rupture propagation and the along-strike length of faulting did not significantly change, demonstrating the usefulness of rapid source characterization for understanding the first order characteristics of major earthquakes.

  9. Characterizing Contamination and Assessing Exposure, Risk and Resilience

    Science.gov (United States)

    EPA supports its responders' ability to characterize site contamination by developing sampling protocols, sample preparation methods, and analytical methods for chemicals, biotoxins, microbial pathogens, and radiological agents.

  10. Techniques for physicochemical characterization of nanomaterials

    Science.gov (United States)

    Lin, Ping-Chang; Lin, Stephen; Wang, Paul C.; Sridhar, Rajagopalan

    2014-01-01

    Advances in nanotechnology have opened up a new era of diagnosis, prevention and treatment of diseases and traumatic injuries. Nanomaterials, including those with potential for clinical applications, possess novel physicochemical properties that have an impact on their physiological interactions, from the molecular level to the systemic level. There is a lack of standardized methodologies or regulatory protocols for detection or characterization of nanomaterials. This review summarizes the techniques that are commonly used to study the size, shape, surface properties, composition, purity and stability of nanomaterials, along with their advantages and disadvantages. At present there are no FDA guidelines that have been developed specifically for nanomaterial based formulations for diagnostic or therapeutic use. There is an urgent need for standardized protocols and procedures for the characterization of nanoparticles, especially those that are intended for use as theranostics. PMID:24252561

  11. Protocol for characterization of clay as a backfill and coverage layers for surface repository

    International Nuclear Information System (INIS)

    Santos, Daisy M.M.; Tello, Clédola C.O.

    2017-01-01

    The Radioactive Waste Management includes the operations from the generation of the waste until its storage in a repository, ensuring the protection of human beings and the environment from possible negative impacts. The radioactive waste is segregated, treated, and conditioned in suitable packages for later storage or disposal in a repository. The 'RBMN Project' objective is to implement the repository for the disposal of low- and intermediate-level radioactive wastes generated by nuclear activities in Brazil, proposing a definitive solution for their storage. Engineered and natural barriers, such as the backfill and coverage layers, will compose the disposal system of a near surface repository, a concept proposed by the 'RBMN Project'. The use of these barriers aims to avoid or restrict the release of radionuclides from the waste to human beings and the environment. The waterproofing barriers are composed of clays. For the national repository, the clays existing at the site where it will be constructed will certainly be used. Then some basic tests will have to be carried out to verify the suitability of these clays as barriers. These tests were determined and performed with a reference clay, a Brazilian bentonite constituted of 67.2% montmorillonite. The results were compared with national and international literature on materials with similar mineralogical features. The values found, with a 95% confidence interval, were 9.73±0.35 μm for the granulometric size, 13.3±0.6% for the moisture content and 816±9 mmol·kg^-1 for the cation exchange capacity. A protocol for the characterization of clay was elaborated presenting these tests for its future use. (author)

  12. Protocol for characterization of clay as a backfill and coverage layers for surface repository

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Daisy M.M.; Tello, Clédola C.O., E-mail: marymarchezini@gmail.com, E-mail: tellocc@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2017-07-01

    The Radioactive Waste Management includes the operations from the generation of the waste until its storage in a repository, ensuring the protection of human beings and the environment from possible negative impacts. The radioactive waste is segregated, treated, and conditioned in suitable packages for later storage or disposal in a repository. The 'RBMN Project' objective is to implement the repository for the disposal of low- and intermediate-level radioactive wastes generated by nuclear activities in Brazil, proposing a definitive solution for their storage. Engineered and natural barriers, such as the backfill and coverage layers, will compose the disposal system of a near surface repository, a concept proposed by the 'RBMN Project'. The use of these barriers aims to avoid or restrict the release of radionuclides from the waste to human beings and the environment. The waterproofing barriers are composed of clays. For the national repository, the clays existing at the site where it will be constructed will certainly be used. Then some basic tests will have to be carried out to verify the suitability of these clays as barriers. These tests were determined and performed with a reference clay, a Brazilian bentonite constituted of 67.2% montmorillonite. The results were compared with national and international literature on materials with similar mineralogical features. The values found, with a 95% confidence interval, were 9.73±0.35 μm for the granulometric size, 13.3±0.6% for the moisture content and 816±9 mmol·kg^-1 for the cation exchange capacity. A protocol for the characterization of clay was elaborated presenting these tests for its future use. (author)
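
    The 95% confidence intervals quoted above follow the usual t-based construction, mean ± t(0.975, n-1)·s/√n. A small Python sketch with hypothetical replicate measurements (not the CDTN data):

    ```python
    import numpy as np
    from scipy import stats

    def ci95(samples):
        """Mean and half-width of a 95% t-based confidence interval."""
        x = np.asarray(samples, dtype=float)
        n = x.size
        half = stats.t.ppf(0.975, df=n - 1) * x.std(ddof=1) / np.sqrt(n)
        return x.mean(), half

    # Hypothetical granulometric-size replicates (micrometres)
    mean, half = ci95([9.5, 9.9, 10.1, 9.4, 9.8, 9.7])
    print(f"{mean:.2f} ± {half:.2f} µm (95% CI)")
    ```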

  13. Optimal beam sources for Stark decelerators in collision experiments: a tutorial review

    International Nuclear Information System (INIS)

    Vogels, Sjoerd N.; Gao, Zhi; Meerakker, Sebastiaan Y.T. van de

    2015-01-01

    With the Stark deceleration technique, packets of molecules with a tunable velocity, a narrow velocity spread, and a high state purity can be produced. These tamed molecular beams find applications in high resolution spectroscopy, cold molecule trapping, and controlled scattering experiments. The quality and purity of the packets of molecules emerging from the decelerator critically depend on the specifications of the decelerator, but also on the characteristics of the molecular beam pulse with which the decelerator is loaded. We consider three frequently used molecular beam sources, and discuss their suitability for molecular beam deceleration experiments, in particular with the application in crossed beam scattering in mind. The performance of two valves in particular, the Nijmegen Pulsed Valve and the Jordan Valve, is illustrated by decelerating ND3 molecules in a 2.6-meter-long Stark decelerator. We describe a protocol to characterize the valve, and to optimally load the pulse of molecules into the decelerator. We characterize the valves regarding opening time duration, optimal valve-to-skimmer distance, mean velocity, velocity spread, state purity, and relative intensity. (orig.)

  14. The French dosimetry protocol

    International Nuclear Information System (INIS)

    Dutreix, A.

    1985-01-01

    After a general introduction the protocol is divided into five sections dealing with: determination of the quality of X-ray, γ-ray and electron beams; the measuring instrument; calibration of the reference instrument; determination of the reference absorbed dose in the user's beams; determination of the absorbed dose in water at other points, in other conditions. The French protocol is not essentially different from the Nordic protocol and it is based on the experience gained in using both the American and the Nordic protocols. Therefore, only the main differences from the published protocols are discussed. (Auth.)

  15. Single particle characterization, source apportionment, and aging effects of ambient aerosols in Southern California

    Science.gov (United States)

    Shields, Laura Grace

    Composed of a mixture of chemical species and phases and existing in a variety of shapes and sizes, atmospheric aerosols are complex and can have serious influence on human health, the environment, and climate. In order to better understand the impact of aerosols on local to global scales, detailed measurements of the physical and chemical properties of ambient particles are essential. In addition, knowing the origin or the source of the aerosols is important for policymakers to implement targeted regulations and effective control strategies to reduce air pollution in their region. One of the most groundbreaking techniques in aerosol instrumentation is single particle mass spectrometry (SPMS), which can provide online chemical composition and size information on the individual particle level. The primary focus of this work is to further improve the ability of one specific SPMS technique, aerosol time-of-flight mass spectrometry (ATOFMS), to identify the specific origin of ambient aerosols, which is known as source apportionment. The ATOFMS source apportionment method utilizes a library of distinct source mass spectral signatures to match the chemical information of the single ambient particles. The unique signatures are obtained in controlled source characterization studies, such as with the exhaust emissions of heavy duty diesel vehicles (HDDV) operating on a dynamometer. The apportionment of ambient aerosols is complicated by the chemical and physical processes an individual particle can undergo as it spends time in the atmosphere, which is referred to as "aging" of the aerosol. Therefore, the performance of the source signature library technique was investigated on the ambient dataset of the highly aged environment of Riverside, California. Additionally, two specific subsets of the Riverside dataset (ultrafine particles and particles containing trace metals), which are known to cause adverse health effects, were probed in greater detail. Finally
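
    The library matching described above is commonly implemented as a normalized dot product (cosine similarity) between a particle's spectrum and each source signature. A simplified sketch with synthetic spectra (the source names and arrays are hypothetical):

    ```python
    import numpy as np

    def best_match(spectrum, library):
        """Return the library source whose signature has the highest
        cosine similarity (normalized dot product) with the spectrum."""
        s = spectrum / np.linalg.norm(spectrum)
        scores = {name: float(s @ (sig / np.linalg.norm(sig)))
                  for name, sig in library.items()}
        return max(scores, key=scores.get), scores

    # Hypothetical 300-bin spectra standing in for m/z intensity vectors
    rng = np.random.default_rng(42)
    library = {"HDDV_exhaust": rng.random(300), "biomass_burning": rng.random(300)}
    particle = library["HDDV_exhaust"] + 0.1 * rng.random(300)  # noisy HDDV-like particle
    name, scores = best_match(particle, library)
    print(name, scores)
    ```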

  16. Characterization and packaging of disused sealed radioactive sources; Caracterizacion y acondicionamiento de fuentes radiactivas selladas en desuso

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, S.L. [Instituto Boliviano de Ciencia y Tecnologia Nuclear (IBTEN), La Paz (Bolivia, Plurinational State of)

    2013-07-01

    In Bolivia, disused sealed sources and radioactive waste are generated by the use of radioactive materials in industry, research and medicine, the last of which includes diagnosis and treatment. Since exposure to ionizing radiation is a potential hazard to the personnel who apply it, to those who benefit from its use and to the community at large, it is necessary to control the activities in this field. The Instituto Boliviano de Ciencia y Tecnologia Nuclear - IBTEN is working on a regional project of the International Atomic Energy Agency - IAEA, Project RLA/09/062 - TSA 4, Strengthening the National Infrastructure and Regulatory Framework for the Safe Management of Radioactive Waste in Latin America. This project has strengthened the regulatory framework regarding the safe management of radioactive waste. The aim of this work was focused primarily on the security aspects of the safe management of disused sealed sources. The tasks are listed below: 1. Characterization of disused sealed sources 2. Preparation for transport to temporary storage 3. Control of all disused radioactive sources. (author)

  17. Construction and validation of the Scale Sources of Information about AIDS (SSIA).

    Science.gov (United States)

    Chaves, Claudia; Pereira, Anabela; Duarte, João; Martins, Rosa; Nelas, Paula; Ferreira, Manuela

    2014-11-01

    To characterize the sources of information that students in higher education turn to for clarification about AIDS. Cross-sectional, non-experimental research, with the features of descriptive, correlational and explanatory studies. The data collection protocol includes personal and academic data and the Sources of Information about AIDS scale. 2002 students participated, 60.7% girls (mean age 21.76 ± 4.43 years), from the first and last years of higher education in the North and Centre of Portugal. Students rely mainly on reading informational materials for information about AIDS. Approximately 37% have good information on AIDS, with young people up to the age of 25 and those attending courses in the field of health having higher scores. Changes are needed in health education models in the area of HIV/AIDS, since these are not showing a satisfactory level of efficiency. On the other hand, it is important to motivate young people to change their behaviours. Although many young people have knowledge, they do not change their risk behaviours. Copyright © 2014 Elsevier España, S.L.U. All rights reserved.

  18. Combining land use information and small stream sampling with PCR-based methods for better characterization of diffuse sources of human fecal pollution.

    Science.gov (United States)

    Peed, Lindsay A; Nietch, Christopher T; Kelty, Catherine A; Meckes, Mark; Mooney, Thomas; Sivaganesan, Mano; Shanks, Orin C

    2011-07-01

    Diffuse sources of human fecal pollution allow for the direct discharge of waste into receiving waters with minimal or no treatment. Traditional culture-based methods are commonly used to characterize fecal pollution in ambient waters, however these methods do not discern between human and other animal sources of fecal pollution making it difficult to identify diffuse pollution sources. Human-associated quantitative real-time PCR (qPCR) methods in combination with low-order headwatershed sampling, precipitation information, and high-resolution geographic information system land use data can be useful for identifying diffuse source of human fecal pollution in receiving waters. To test this assertion, this study monitored nine headwatersheds over a two-year period potentially impacted by faulty septic systems and leaky sanitary sewer lines. Human fecal pollution was measured using three different human-associated qPCR methods and a positive significant correlation was seen between abundance of human-associated genetic markers and septic systems following wet weather events. In contrast, a negative correlation was observed with sanitary sewer line densities suggesting septic systems are the predominant diffuse source of human fecal pollution in the study area. These results demonstrate the advantages of combining water sampling, climate information, land-use computer-based modeling, and molecular biology disciplines to better characterize diffuse sources of human fecal pollution in environmental waters.

  19. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    Science.gov (United States)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while keeping most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem. The recovery and separation of the different sources that generate the observed ground deformation is a fundamental task in order to give a physical meaning to the different possible sources. PCA fails in the BSS problem since it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelatedness condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mixture of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources
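
    As a concrete, if simplified, illustration of the BSS step, the sketch below uses FastICA from scikit-learn as a stand-in (the vbICA method described above is not part of standard libraries) to separate synthetic "displacement" sources mixed across a few stations, with PCA shown for comparison:

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA, PCA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 2000)

    # Two synthetic independent sources: a periodic (seasonal-like) signal
    # and a transient, step-like signal
    s1 = np.sin(2 * np.pi * t)
    s2 = np.tanh(5 * (t - 5))
    S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

    # Mix the sources onto four hypothetical "stations"
    A = rng.random((2, 4))
    X = S @ A                                    # observed time series (2000 x 4)

    # PCA only decorrelates; ICA additionally imposes statistical independence
    ica = FastICA(n_components=2, random_state=0)
    S_est = ica.fit_transform(X)                 # recovered sources (up to scale/order)
    pca = PCA(n_components=2)
    P_est = pca.fit_transform(X)                 # PCA components, for comparison
    ```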

  20. Fast-grown CdS quantum dots: Single-source precursor approach vs microwave route

    Energy Technology Data Exchange (ETDEWEB)

    Fregnaux, Mathieu [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France); Dalmasso, Stéphane, E-mail: stephane.dalmasso@univ-lorraine.fr [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France); Durand, Pierrick [Laboratoire de Cristallographie, Résonance Magnétique et Modélisations, Institut Jean Barriol, Université de Lorraine, UMR CNRS 7036, Faculté des Sciences, BP 70239, 54506 Vandoeuvre lès Nancy (France); Zhang, Yudong [Laboratoire d' Etude des Microstructures et de Mécanique des Matériaux, Université de Lorraine, UMR CNRS 7239, Ile du Saulcy, 57045 Metz cedex 01 (France); Gaumet, Jean-Jacques; Laurenti, Jean-Pierre [Laboratoire de Chimie et Physique: Approche Multi-échelles des Milieux Complexes, Institut Jean Barriol, Université de Lorraine, 1 Boulevard Arago, 57070 Metz (France)

    2013-10-01

    A cross-disciplinary protocol of characterization by joint techniques enables one to closely compare the chemical and physical properties of CdS quantum dots (QDs) grown by the single-source precursor methodology (SSPM) or by the microwave synthetic route (MWSR). The results are discussed in relation to the synthesis protocols. The QD average sizes, reproducible as a function of the temperatures involved in the growth processes, range complementarily over 2.8–4.5 nm and 4.5–5.2 nm for SSPM and MWSR, respectively. Hexagonal and cubic structures determined by X-ray diffraction for the SSPM and MWSR grown CdS QDs, respectively, are tentatively correlated with a better crystalline quality of the latter with respect to the former, suggested by (i) a remarkable stability of the MWSR grown QDs after exposure to air for several days and (ii) no evidence of their fragmentation during mass spectrometry (MS) analyses, given the fair agreement between size dispersities obtained by transmission electron microscopy (TEM) and MS, in contrast with the discrepancy found for the SSPM grown QDs. Correlatively, a better optical quality is suggested for the MWSR grown QDs by the resolution of n > 1 excitonic transitions in their absorption spectra. The QD average sizes obtained by TEM and deduced from MS are in overall agreement. This agreement is improved for the MWSR grown QDs, taking into account the prolate shape of the QDs also observed in the TEM images. For both series of samples, the excitonic responses vs the average sizes are consistent with the commonly admitted empirical energy-size correspondence. A low-energy PL band is observed in the case of the SSPM grown QDs. Its decrease in intensity with QD size increase suggests a surface origin tentatively attributed to S vacancies. In the case of the MWSR grown QDs, the absence of this PL is tentatively correlated with an absence of S vacancies and therefore with the stable behavior observed when the QDs are exposed to air. - Highlights: • Single

  1. The HPA photon protocol and proposed electron protocol

    International Nuclear Information System (INIS)

    Pitchford, W.G.

    1985-01-01

    The Hospital Physicists' Association (HPA) photon dosimetry protocol has been produced and was published in 1983. Revised values of some components of C_λ and refinements introduced into the theory in the last few years have enabled new C_λ values to be produced. The proposed HPA electron protocol is at present in draft form and will be published shortly. Both protocols are discussed. (Auth.)

  2. Routing protocol for wireless quantum multi-hop mesh backbone network based on partially entangled GHZ state

    Science.gov (United States)

    Xiong, Pei-Ying; Yu, Xu-Tao; Zhang, Zai-Chen; Zhan, Hai-Tao; Hua, Jing-Yu

    2017-08-01

    Quantum multi-hop teleportation is important in the field of quantum communication. In this study, we propose a quantum multi-hop communication model and a quantum routing protocol with multi-hop teleportation for wireless mesh backbone networks. Based on an analysis of quantum multi-hop protocols, a partially entangled Greenberger-Horne-Zeilinger (GHZ) state is selected as the quantum channel for the proposed protocol. Both quantum and classical wireless channels exist between two neighboring nodes along the route. With the proposed routing protocol, quantum information can be transmitted hop by hop from the source node to the destination node. Through multi-hop teleportation based on the partially entangled GHZ state, a quantum route is established with the minimum number of hops. The difference between our routing protocol and the classical one is that in the former, the processes used to find a quantum route and establish quantum channel entanglement occur simultaneously. The Bell state measurement results of each hop are piggybacked onto the quantum route-finding information. This method reduces the total number of packets and the magnitude of air interface delay. A derivation of the establishment of a quantum channel between source and destination is also presented. The final success probability of quantum multi-hop teleportation in wireless mesh backbone networks was simulated and analyzed. Our research shows that quantum multi-hop teleportation in wireless mesh backbone networks through a partially entangled GHZ state is feasible.

  3. 252Cf-source-driven noise analysis measurements for characterization of concrete highly enriched uranium (HEU) storage vaults

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1993-01-01

    The 252Cf-source-driven noise analysis method has been used in measurements for subcritical configurations of fissile systems for a variety of applications. Measurements of 25 fissile systems have been performed with a wide variety of materials and configurations. This method has been applied to measurements for (1) initial fuel loading of reactors, (2) quality assurance of reactor fuel elements, (3) fuel preparation facilities, (4) fuel processing facilities, (5) fuel storage facilities, (6) zero-power testing of reactors, and (7) verification of calculational methods for assemblies characterized by the neutron multiplication factor k. The purpose of this work was to determine whether a measurement with a 252Cf source and commercially available detectors was feasible and whether the measurement could characterize the ability of the concrete to isolate the fissile material

  4. Low-level rf control of Spallation Neutron Source: System and characterization

    Directory of Open Access Journals (Sweden)

    Hengjie Ma

    2006-03-01

    The low-level rf control system currently commissioned throughout the Spallation Neutron Source (SNS) LINAC evolved from three design iterations over 1 yr of intensive research and development. Its digital hardware implementation is efficient, and has succeeded in achieving a minimum latency of less than 150 ns, which is the key to accomplishing all-digital feedback control for the full bandwidth. The control bandwidth is analyzed in the frequency domain and characterized by testing its transient response. The hardware implementation also includes the provision of a time-shared input channel for a superior phase differential measurement between the cavity field and the reference. A companion cosimulation system for the digital hardware was developed to ensure reliable long-term supportability. A large effort has also been made in the operation software development for practical issues such as process automation, cavity filling, beam loading compensation, and cavity mechanical resonance suppression.

  5. The panel of egg allergens, Gal d 1-Gal d 5: Their improved purification and characterization

    DEFF Research Database (Denmark)

    Jacobsen, B.; Hoffmann-Sommergruber, K.; Have, T. T.

    2008-01-01

    Egg proteins represent one of the most important sources evoking food allergic reactions. In order to improve allergy diagnosis, purified and well-characterized proteins are needed. Although the egg white allergens Gal d 1, 2, 3 and 4 (ovomucoid, ovalbumin, ovotransferrin, and lysozyme) are commercially available, these preparations contain impurities, which affect exact in vitro diagnosis. The aim of the present study was to set up further purification protocols and to extend the characterization of the physicochemical and immunological properties of the final batches. The egg white allergens Gal d 1-4 were purified from commercial preparations, whereas Gal d 5 (α-livetin) was purified from egg yolk. The final batches of Gal d 1-5 consisted of a range of isoforms with defined tertiary structure. In addition, the IgE binding capacity of the purified egg allergens was tested using allergic...

  6. Validation of the direct analysis in real time source for use in forensic drug screening.

    Science.gov (United States)

    Steiner, Robert R; Larson, Robyn L

    2009-05-01

    The Direct Analysis in Real Time (DART) ion source is a relatively new mass spectrometry technique that is seeing widespread use in chemical analyses worldwide. DART studies include such diverse topics as analysis of flavors and fragrances, melamine in contaminated dog food, differentiation of writing inks, characterization of solid counterfeit drugs, and use as a detector for planar chromatography. Validation of this new technique for the rapid screening of forensic evidence for drugs of abuse, utilizing the DART source coupled to an accurate-mass time-of-flight mass spectrometer, was conducted. The study consisted of the determination of the lower limit of detection for the method, determination of selectivity, and a comparison of this technique to established analytical protocols. Examples of DART spectra are included. The results of this study have allowed the Virginia Department of Forensic Science to incorporate this new technique into their analysis scheme for the screening of solid dosage forms of drugs of abuse.
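
    As a companion to the limit-of-detection step described above, here is a generic sketch of one common way to estimate an LOD from a linear calibration (the 3.3·σ/slope convention); the standards and responses are hypothetical, and this is not the laboratory's validated procedure.

        # Sketch: LOD estimate from a linear calibration using the common
        # 3.3*sigma/slope convention (data are hypothetical).
        import numpy as np

        conc = np.array([0.0, 0.5, 1.0, 2.0, 4.0])        # standards, ug/mL
        signal = np.array([1.2, 10.8, 21.5, 40.9, 82.0])  # instrument response

        slope, intercept = np.polyfit(conc, signal, 1)
        residual_sd = np.std(signal - (slope * conc + intercept), ddof=2)
        lod = 3.3 * residual_sd / slope
        print(f"slope = {slope:.2f}, LOD ~ {lod:.3f} ug/mL")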

  7. Satellite Communications Using Commercial Protocols

    Science.gov (United States)

    Ivancic, William D.; Griner, James H.; Dimond, Robert; Frantz, Brian D.; Kachmar, Brian; Shell, Dan

    2000-01-01

    NASA Glenn Research Center has been working with industry, academia, and other government agencies in assessing commercial communications protocols for satellite and space-based applications. In addition, NASA Glenn has been developing and advocating new satellite-friendly modifications to existing communications protocol standards. This paper summarizes recent research into the applicability of various commercial standard protocols for use over satellite and space-based communications networks as well as expectations for future protocol development. It serves as a reference point from which the detailed work can be readily accessed. Areas that will be addressed include asynchronous-transfer-mode quality of service; completed and ongoing work of the Internet Engineering Task Force; data-link-layer protocol development for unidirectional link routing; and protocols for aeronautical applications, including mobile Internet protocol routing for wireless/mobile hosts and the aeronautical telecommunications network protocol.

  8. Source characterization using compound composition and stable carbon isotope ratio of PAHs in sediments from lakes, harbor, and shipping waterway

    International Nuclear Information System (INIS)

    Kim, Moonkoo; Kennicutt, Mahlon C.; Qian, Yaorong

    2008-01-01

    Molecular compositions and compound-specific stable carbon isotope ratios of polycyclic aromatic hydrocarbons (PAH) isolated from sediments were used to characterize possible sources of contamination at an urban lake, a harbor, a shipping waterway, and a relatively undisturbed remote lake in the northwest United States. Total PAH concentrations in urban lake sediments ranged from 66.0 to 16,500 μg g⁻¹ dry wt. with an average of 2600 μg g⁻¹, which is ∼50, 100, and 400 times higher on average than PAH in harbor (48 μg g⁻¹ on average), shipping waterway (26 μg g⁻¹), and remote lake (7 μg g⁻¹) sediments, respectively. The PAH distribution patterns, methyl phenanthrene/phenanthrene ratios, and a pyrogenic index at the sites suggest a pyrogenic origin for PAHs. Source characterization using principal component analysis and various molecular indices, including C2-dibenzothiophenes/C2-phenanthrenes, C3-dibenzothiophenes/C3-phenanthrenes, and C2-chrysenes/C2-phenanthrenes ratios, was able to differentiate PAH deposited in sediments from the four sites. The uniqueness of the source of the sediment PAHs from the urban lake was also illustrated by compound-specific stable carbon isotope analysis. It was concluded that urban lake sediments are accumulating PAH from sources that are distinct from contamination detected at nearby sites in the same watershed.
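
    The diagnostic ratios named above are simple concentration quotients; the sketch below computes them for two hypothetical sediment samples (all names and values are made up, not the study's data).

        # Sketch: PAH diagnostic source ratios of the kind used in the
        # study; concentrations (ng/g) are hypothetical.
        samples = {
            "urban_lake": {"C2_DBT": 120.0, "C2_PHE": 800.0, "C3_DBT": 90.0,
                           "C3_PHE": 600.0, "C2_CHR": 150.0},
            "harbor":     {"C2_DBT": 30.0, "C2_PHE": 110.0, "C3_DBT": 25.0,
                           "C3_PHE": 95.0, "C2_CHR": 20.0},
        }

        for name, c in samples.items():
            ratios = {
                "C2-DBT/C2-PHE": c["C2_DBT"] / c["C2_PHE"],
                "C3-DBT/C3-PHE": c["C3_DBT"] / c["C3_PHE"],
                "C2-CHR/C2-PHE": c["C2_CHR"] / c["C2_PHE"],
            }
            print(name, {k: round(v, 3) for k, v in ratios.items()})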

  9. An FPGA-Based System for Tracking Digital Information Transmitted Via Peer-to-Peer Protocols

    Science.gov (United States)

    2009-03-01

    … a classification framework that they term BLINd Classification (BLINC) [KPF05], which attempts to characterize network flows on three levels: the social, the functional, and the application level.

  10. Security Protocols in a Nutshell

    OpenAIRE

    Toorani, Mohsen

    2016-01-01

    Security protocols are building blocks in secure communications. They deploy security mechanisms to provide certain security services. Security protocols are considered abstract when analyzed, but they can have extra vulnerabilities when implemented. This manuscript provides a holistic study of security protocols. It reviews the foundations of security protocols, the taxonomy of attacks on security protocols and their implementations, and different methods and models for the security analysis of protocols.

  11. Strategy for Developing Expert-System-Based Internet Protocols (TCP/IP)

    Science.gov (United States)

    Ivancic, William D.

    1997-01-01

    The Satellite Networks and Architectures Branch of NASA's Lewis Research Center is addressing the issue of seamless interoperability of satellite networks with terrestrial networks. One of the major issues is improving reliable transmission protocols such as TCP over long-latency and error-prone links. Many tuning parameters are available to enhance the performance of TCP, including segment size, timers, and window sizes. There are also numerous congestion avoidance algorithms, such as slow start, selective retransmission, and selective acknowledgment, that are utilized to improve performance. This paper provides a strategy to characterize the performance of TCP relative to various parameter settings in a variety of network environments (i.e., LAN, WAN, wireless, satellite, and IP over ATM). This information can then be utilized to develop expert-system-based Internet protocols.
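
    A few of the tuning parameters mentioned (window/buffer sizes, segment size) can be exercised through standard socket options, as in the sketch below; option availability (notably TCP_MAXSEG) is platform-dependent and the values are illustrative.

        # Sketch: adjusting TCP buffer sizes and, where available, the
        # maximum segment size via standard socket options.
        import socket

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 1 << 20)  # 1 MiB send buffer
        s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1 << 20)  # 1 MiB receive buffer
        if hasattr(socket, "TCP_MAXSEG"):  # not exposed on every platform
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG, 1460)
        print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
        s.close()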

  12. Identification and characterization of five non-traditional-source categories: Catastrophic/accidental releases, vehicle repair facilities, recycling, pesticide application, and agricultural operations. Final report, September 1991-September 1992

    International Nuclear Information System (INIS)

    Sleva, S.; Pendola, J.A.; McCutcheon, J.; Jones, K.; Kersteter, S.L.

    1993-03-01

    The work is part of EPA's program to identify and characterize emissions sources not currently accounted for by either the existing Aerometric Information Retrieval System (AIRS) or State Implementation Plan (SIP) area source methodologies, and to develop appropriate emissions estimation methodologies and emission factors for a group of these source categories. Based on the results of the identification and characterization portions of the research, five source categories were selected for methodology and emission factor development: catastrophic/accidental releases, vehicle repair facilities, recycling, pesticide application and agricultural operations. The report presents emissions estimation methodologies and emission factor data for the selected source categories. The discussion of each selected category includes general background information, emissions-generating activities, pollutants emitted, sources of activity and pollutant data, emissions estimation methodologies, issues to be considered, and recommendations. The information used in these discussions was derived from various sources including the available literature, industrial and trade association publications and contacts, experts on the category and activity, and knowledgeable federal and state personnel.

  13. The Virtual Insect Brain protocol: creating and comparing standardized neuroanatomy

    Directory of Open Access Journals (Sweden)

    Schindelin Johannes E

    2006-12-01

    Background: In the fly Drosophila melanogaster, new genetic, physiological, molecular and behavioral techniques for the functional analysis of the brain are rapidly accumulating. These diverse investigations on the function of the insect brain use gene expression patterns that can be visualized and provide the means for manipulating groups of neurons as a common ground. To take advantage of these patterns one needs to know their typical anatomy. Results: This paper describes the Virtual Insect Brain (VIB) protocol, a script suite for the quantitative assessment, comparison, and presentation of neuroanatomical data. It is based on the 3D-reconstruction and visualization software Amira, version 3.x (Mercury Inc.). Besides its backbone, a standardization procedure which aligns individual 3D images (series of virtual sections obtained by confocal microscopy) to a common coordinate system and computes average intensities for each voxel (volume pixel), the VIB protocol provides an elaborate data management system for data administration. The VIB protocol facilitates direct comparison of gene expression patterns and describes their interindividual variability. It provides volumetry of brain regions and helps to characterize the phenotypes of brain structure mutants. Using the VIB protocol does not require any programming skills since all operations are carried out at an intuitively usable graphical user interface. Although the VIB protocol has been developed for the standardization of Drosophila neuroanatomy, the program structure can be used for the standardization of other 3D structures as well. Conclusion: Standardizing brains and gene expression patterns is a new approach to biological shape and its variability. The VIB protocol provides a first set of tools supporting this endeavor in Drosophila. The script suite is freely available at http://www.neurofly.de
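
    The core numerical step of the standardization procedure, voxel-wise averaging of registered 3D stacks, can be sketched in a few lines; the arrays below are random stand-ins for confocal stacks already aligned to a common coordinate system (the VIB protocol itself is an Amira script suite, not Python).

        # Sketch: per-voxel mean and inter-individual variability across
        # registered 3D image stacks (synthetic data).
        import numpy as np

        rng = np.random.default_rng(0)
        stacks = np.stack([rng.random((64, 64, 32)) for _ in range(5)])  # 5 brains

        average = stacks.mean(axis=0)      # average intensity per voxel
        variability = stacks.std(axis=0)   # interindividual standard deviation
        print(average.shape, float(variability.mean()))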

  14. Seismic source zone characterization for the seismic hazard assessment project PEGASOS by the Expert Group 2 (EG1b)

    International Nuclear Information System (INIS)

    Burkhard, M.; Gruenthal, G.

    2009-01-01

    A comprehensive study of the seismic hazard related to the four NPP sites in NW Switzerland was performed within the project PEGASOS. To account for the epistemic uncertainties involved in the process of characterizing seismic source zones in the frame of probabilistic seismic hazard assessments, four different expert teams developed and defended their models in the frame of an intensive elicitation process. Here, the results of one of the four expert groups are presented. The model of this team is based first of all on considerations regarding the large-scale tectonics in the context of the Alpine collision, and on neotectonic constraints for defining seismic source zones. This leads to a large-scale subdivision based on structural 'architectural' considerations with little input from the present seismicity. Each of the eight large zones was characterized by the style of present-day faulting, fault orientation, and hypocentral depth distribution. A further subdivision of the larger zones was performed based on information provided by the seismicity patterns. 58 small source zones have been defined in this way, each of them characterized by the available tectonic constraints, as well as the pros and cons of the different existing geologic views connected to them. Of special concern in this respect were the discussions regarding thin-skinned vs. thick-skinned tectonics, the tectonic origin of the 1356 Basel earthquake, the role of the Permo-Carboniferous graben structures, and finally the seismogenic orientation of faults with respect to the recent crustal stress field. The uncertainties connected to the delimitations of the small source zones have been handled in the form of their regrouping, formalized by the logic tree technique. The maximum magnitudes were estimated as discretized probability distribution functions. After de-clustering the ECOS earthquake catalogue used and analyzing data completeness as a function of time, the parameters of the …

  15. Keyboard with Universal Communication Protocol Applied to CNC Machine

    Directory of Open Access Journals (Sweden)

    Mejía-Ugalde Mario

    2014-04-01

    This article describes the use of a universal communication protocol for a microcontroller-based industrial keyboard applied to a computer numerically controlled (CNC) machine. The main difference among keyboard manufacturers is that each manufacturer has its own source code, producing a different communication protocol and generating improper interpretations of the established functions. This results in commercial industrial keyboards which are expensive and incompatible in their connection with different machines. In the present work the protocol allows the designed universal keyboard and the standard PC keyboard to be connected at the same time; it is compatible with all computers through USB, AT or PS/2 communications, for use in CNC machines, with extension to other machines such as robots, blowing machines, injection molding machines and others. The advantages of this design include its easy reprogramming, decreased cost, manipulation of various machine functions and easy expansion of input and output signals. The results obtained from performance tests were satisfactory, because each key can be programmed and reprogrammed in different ways, generating codes for different functions depending on the application where it is required to be used.

  16. Photon statistics characterization of a single-photon source

    International Nuclear Information System (INIS)

    Alleaume, R; Treussart, F; Courty, J-M; Roch, J-F

    2004-01-01

    In a recent experiment, we reported the time-domain intensity noise measurement of a single-photon source relying on single-molecule fluorescence control. In this paper, we present data processing starting from photocount timestamps. The theoretical analytical expression of the time-dependent Mandel parameter Q(T) of an intermittent single-photon source is derived from ON↔OFF dynamics. Finally, source intensity noise analysis, using the Mandel parameter, is quantitatively compared with the usual approach relying on the time autocorrelation function, both methods yielding the same molecular dynamical parameters.
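
    For a concrete reading of the Mandel parameter, recall Q(T) = Var[N(T)]/⟨N(T)⟩ − 1, where N(T) is the photocount in a window of duration T; the sketch below estimates Q(T) from timestamps by binning (the synthetic Poissonian timestamps should give Q ≈ 0, unlike a real intermittent single-photon source).

        # Sketch: estimating Q(T) from photocount timestamps by binning
        # them into windows of duration T (timestamps are synthetic).
        import numpy as np

        rng = np.random.default_rng(1)
        timestamps = np.sort(rng.uniform(0.0, 1.0, size=50_000))  # seconds

        def mandel_q(ts: np.ndarray, window: float) -> float:
            edges = np.arange(ts[0], ts[-1], window)
            counts, _ = np.histogram(ts, bins=edges)
            return counts.var() / counts.mean() - 1.0

        for T in (1e-4, 1e-3, 1e-2):
            print(f"Q({T:g} s) = {mandel_q(timestamps, T):+.4f}")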

  17. Distributed Role Selection With ANC and TDBC Protocols in Two-Way Relaying Systems

    KAUST Repository

    Ding, Haiyang; da Costa, Daniel Benevides; Alouini, Mohamed-Slim; Ge, Jianhua; Gong, Feng-Kui

    2015-01-01

    … and the scaling law of the system outage behavior at high signal-to-noise ratio (SNR) is characterized, which shows that d-ROSE can enhance the system diversity gain by one order relative to the ANC and TDBC protocols. It is also shown that d-ROSE can …

  18. A Multipath Routing Protocol Based on Bloom Filter for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Junwei Jin

    2016-01-01

    On-demand multipath routing in a wireless ad hoc network is effective in achieving load balancing over the network and in improving the degree of resilience to mobility. In this paper, the salvage-capable opportunistic node-disjoint multipath routing (SNMR) protocol is proposed, which forms multiple routes for data transmission and supports packet salvaging with minimum overhead. The proposed mechanism constructs a primary path and a node-disjoint backup path together with alternative paths for the intermediate nodes in the primary path. This is achieved by considering the reverse route back to the source stored in the route cache and the primary path information compressed by a Bloom filter. Our protocol presents higher capability in packet salvaging and lower overhead in forming multiple routes. Simulation results show that SNMR outperforms the compared protocols in terms of packet delivery ratio, normalized routing load, and throughput.
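
    To make the compression idea concrete, here is a toy Bloom filter that encodes a node path for constant-size membership tests; the hash construction and sizes are illustrative choices, not those of SNMR.

        # Sketch: a minimal Bloom filter used to encode path membership.
        import hashlib

        class BloomFilter:
            def __init__(self, size_bits: int = 256, num_hashes: int = 3):
                self.size, self.k, self.bits = size_bits, num_hashes, 0

            def _positions(self, item: str):
                for i in range(self.k):
                    d = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(d[:8], "big") % self.size

            def add(self, item: str) -> None:
                for pos in self._positions(item):
                    self.bits |= 1 << pos

            def __contains__(self, item: str) -> bool:
                return all(self.bits >> pos & 1 for pos in self._positions(item))

        path = BloomFilter()
        for node in ("n1", "n4", "n7", "sink"):  # hypothetical primary path
            path.add(node)
        print("n4" in path, "n9" in path)  # True, (almost certainly) False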

  19. The Groningen Protocol - the Jewish perspective.

    Science.gov (United States)

    Gesundheit, Benjamin; Steinberg, Avraham; Blazer, Shraga; Jotkowitz, Alan

    2009-01-01

    Despite significant advances in neonatology, there will always be newborns with serious life-threatening conditions creating most difficult bioethical dilemmas. Active euthanasia for adult patients is one of the most controversial bioethical questions; for severely ill neonates, the issue is even more complex, due to their inability to take part in any decision concerning their future. The Groningen Protocol, introduced in 2005 by P.J. Sauer, proposes criteria allowing active euthanasia for severely ill, not necessarily terminal, newborns with incurable conditions and poor quality of life in order to spare them unbearable suffering. We discuss the ethical dilemma and ideological foundations of the protocol, the opinions of its defenders and critics, and the dangers involved. The Jewish perspective on the subject is presented based on classical Jewish sources, which we trust may enrich modern bioethical debates. In Jewish law, the fetus acquires full legal status only after birth. However, while the lives of terminally ill neonates must in no way be actively destroyed or shortened, there is no obligation to make extraordinary efforts to prolong their lives. Accurate preimplantation or prenatal diagnosis might significantly reduce the incidence of nonviable births, but active killing of infants violates the basic foundations of Jewish law and opens the 'slippery slope' for uncontrolled abuse. Therefore, we call upon the international medical and bioethical community to reject the Groningen Protocol, which permits euthanasia, and to develop ethical guidelines for the optimal care of severely compromised neonates. Copyright 2009 S. Karger AG, Basel.

  20. Business protocol in integrated Europe

    OpenAIRE

    Pavelová, Nina

    2009-01-01

    The first chapter is devoted to definitions of basic terms such as protocol or business protocol, to the differences between protocol and etiquette, and between social etiquette and business etiquette. The second chapter focuses on the factors influencing European business protocol. The third chapter is devoted to the etiquette of business protocol in the European countries. It touches on topics such as punctuality and the planning of business appointments, greetings, business cards, dress and appear...

  1. Reliability of MEG source imaging of anterior temporal spikes: analysis of an intracranially characterized spike focus.

    Science.gov (United States)

    Wennberg, Richard; Cheyne, Douglas

    2014-05-01

    To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  2. Characterization of the polarized electron beam from a helium post-discharge source

    Energy Technology Data Exchange (ETDEWEB)

    Zerhouni, R.O.

    1996-02-01

    The objective of this thesis is the characterization of the polarized electron source developed at Orsay, foreseen to be coupled to a cw accelerator for nuclear physics experiments. The principle of operation of this source relies on the chemo-ionization reaction between optically aligned helium triplet metastable atoms and CO2 molecules. The helium metastable atoms are generated by injection of purified helium into a 2.45 GHz microwave discharge. They are optically pumped using two beams of 1.083 μm resonant radiation, one circularly and the other linearly polarized. Both beams are delivered by a high-power LNA laser. The metastable atomic beam interacts with a dense (10¹³ cm⁻³) spin-singlet CO2 target. A fraction of the produced polarized electrons is extracted and collimated by electrostatic optics, and directed either to a Mott polarimeter or to a Faraday cup in order to measure the electron polarization and the extracted current. For current intensities of 100 μA, the electron polarization reaches 62%, showing that this type of source has reached the same highly competitive level as the best-performing GaAs sources. Additionally, the optical properties of the extracted beam are found to be excellent. These properties (energy spread and emittance) reflect the electron energy distribution in the chemo-ionization region. The upper limit on the beam's energy spread is 0.24 eV, since this value characterizes our instrumental resolution. The average normalized emittance is found to be 0.6π mm·mrad. These values satisfy the requirements of most cw accelerators. All the measurements were performed at low electron beam transport energies (1 to 2 keV). (author). 105 refs., 54 figs., 4 tabs.

  3. Molecular Characterization of Yeast Strains Isolated from Different Sources by Restriction Fragment Length Polymorphism

    International Nuclear Information System (INIS)

    Ali, M. S.; Latif, Z.

    2016-01-01

    Various molecular techniques, such as analysis of the amplified rDNA internal transcribed spacers (ITS), intragenic spacers, and total ITS region analysis by restriction fragment length polymorphism (RFLP), have been introduced for yeast identification, but there are limited databases to identify yeast species on the basis of 5.8S rDNA. In this study, twenty-nine yeast strains from various sources, including spoiled fruits, vegetables, foodstuffs, and concentrated juices, were characterized by PCR-RFLP. PCR-RFLP has been used to characterize the yeasts present in different spoiled food samples after isolation. Using this technique, the isolated yeast strains were characterized by direct amplification of the 5.8S-ITS rDNA region. RFLP analysis was applied to each of the amplification products (varying from 400 bp to 800 bp), and the corresponding yeast identifications were made according to the specific restriction patterns obtained after treatment with two endonucleases, TaqI and HaeIII, which yielded a specific banding pattern for each species. For further confirmation, the amplified products of eleven selected isolates were sequenced and compared by BLAST at NCBI. Both the RFLP and sequence analyses of the strains with accession nos. KF472163, KF472164, KF472165, KF472166, KF472167, KF472168, KF472169, KF472170, KF472171, KF472172, KF472173 gave significantly similar results. The isolates were found to belong to five different yeast species: Candida spp., Pichia spp., Kluyveromyces spp., Clavispora spp. and Hanseniaspora spp. This method provides a fast, easy, reliable and authentic way of determining the yeast populations present in different types of samples, compared to traditional characterization techniques. (author)

  4. Multipath Activity Based Routing Protocol for Mobile ‎Cognitive Radio Ad Hoc Networks

    Directory of Open Access Journals (Sweden)

    Shereen Omar

    2017-01-01

    Cognitive radio networks improve spectrum utilization by sharing licensed spectrum with cognitive radio devices. In cognitive radio ad hoc networks, the routing protocol is one of the most challenging tasks due to the changes in frequency spectrum and the interrupted connectivity caused by primary user activity. In this paper, a multipath activity based routing protocol for cognitive radio networks (MACNRP) is proposed. The protocol utilizes channel availability and creates multiple node-disjoint routes between the source and destination nodes. The proposed protocol is compared with the D2CARP and FTCRP protocols. The performance evaluation is conducted through mathematical analysis and OPNET simulation. The proposed protocol achieves an increase in network throughput; besides, it decreases the probability of route failure due to node mobility and primary user activity. We have found that the MACNRP scheme results in a 50% to 75% reduction in blocking probability and a 33% to 78% improvement in network throughput, with reasonable additional routing overhead and average packet delay. Due to the successful reduction of collisions between primary users and cognitive users, the MACNRP scheme decreases the path failure rate by 50% to 87%.

  5. The Montreal Protocol treaty and its illuminating history of science-policy decision-making

    Science.gov (United States)

    Grady, C.

    2017-12-01

    The Montreal Protocol on Substances that Deplete the Ozone Layer, hailed as one of the most effective environmental treaties of all time, has a thirty-year history of science-policy decision-making. The partnership between the Parties to the Montreal Protocol and its technical assessment panels serves as a basis for understanding the successes and evaluating the stumbles of global environmental decision-making. Real-world environmental treaty negotiations can be highly time-sensitive, politically motivated, and resource-constrained; thus scientists and policymakers alike are often unable to confront the uncertainties associated with the multitude of choices. The science-policy relationship built within the framework of the Montreal Protocol has helped constrain uncertainty and inform policy decisions, but has also highlighted the limitations of the use of scientific understanding in political decision-making. This talk will describe the evolution of the scientist-policymaker relationship over the history of the Montreal Protocol. Examples will illustrate how the Montreal Protocol's technical panels inform decisions of the country governments and will characterize different approaches pursued by different countries, with a particular focus on the recently adopted Kigali Amendment. In addition, this talk will take a deeper dive with an analysis of the historic technical panel assessments on estimating the financial resources necessary to enable compliance with the Montreal Protocol, compared to the political financial decisions made through the Protocol's Multilateral Fund replenishment negotiation process. Finally, this talk will describe the useful lessons and challenges from these interactions and how they may be applicable in other environmental management frameworks across multiple scales under changing climatic conditions.

  6. A robust ECC based mutual authentication protocol with anonymity for session initiation protocol.

    Science.gov (United States)

    Mehmood, Zahid; Chen, Gongliang; Li, Jianhua; Li, Linsen; Alzahrani, Bander

    2017-01-01

    Over the past few years, the Session Initiation Protocol (SIP) has emerged as a substantial application-layer protocol for multimedia services. It is extensively used for managing, altering, terminating and distributing multimedia sessions. Authentication plays a pivotal role in the SIP environment. Recently, Lu et al. presented an authentication protocol for SIP and professed that the newly proposed protocol is protected against all the familiar attacks. However, a detailed analysis shows that Lu et al.'s protocol is exposed to server masquerading and user masquerading attacks. Moreover, it also fails to protect the user's identity, and it possesses an incorrect login and authentication phase. In order to establish a suitable and efficient protocol able to overcome all these discrepancies, a robust ECC-based novel mutual authentication mechanism with anonymity for SIP is presented in this manuscript. The improved protocol contains an explicit parameter for the user to cope with the issues of security and correctness, and it is found to be more secure and relatively effective at protecting the user's privacy and preventing user and server masquerading, as verified through comprehensive formal and informal security analysis.
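
    The ECC primitive such schemes build on is an elliptic-curve key agreement followed by a keyed challenge/response; the toy sketch below shows that building block only (no identity protection, no SIP messaging) and is not Lu et al.'s protocol or the improved one. It assumes the third-party 'cryptography' package.

        # Sketch: ECDH on P-256 plus an HMAC challenge/response, the kind
        # of building block used by ECC-based SIP authentication schemes.
        import os
        from cryptography.hazmat.primitives import hashes, hmac
        from cryptography.hazmat.primitives.asymmetric import ec

        user_priv = ec.generate_private_key(ec.SECP256R1())
        server_priv = ec.generate_private_key(ec.SECP256R1())

        # both sides derive the same shared secret from exchanged public keys
        k_user = user_priv.exchange(ec.ECDH(), server_priv.public_key())
        k_server = server_priv.exchange(ec.ECDH(), user_priv.public_key())

        def tag(key: bytes, nonce: bytes) -> bytes:
            """Prove knowledge of the shared key for a given nonce."""
            h = hmac.HMAC(key, hashes.SHA256())
            h.update(nonce)
            return h.finalize()

        server_nonce, user_nonce = os.urandom(16), os.urandom(16)
        print(tag(k_user, server_nonce) == tag(k_server, server_nonce))  # user proof
        print(tag(k_server, user_nonce) == tag(k_user, user_nonce))      # server proof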

  8. The precision and robustness of published protocols for disc diffusion assays of antimicrobial agent susceptibility: an inter-laboratory study

    DEFF Research Database (Denmark)

    Gabhainn, S.N.; Bergh, Ø.; Dixon, B.

    2004-01-01

    … for each agent being 11.1%. Significant influences on zone size were detected for all three parameters of the protocol. Media source effects were particularly notable with respect to oxytetracycline and oxolinic acid discs, and disc source effects with respect to ampicillin and sulphamethoxazole …

  9. CHARACTERIZATION OF NITROUS OXIDE EMISSION SOURCES

    Science.gov (United States)

    The report presents a global inventory of nitrous oxide (N2O) based on a reevaluation of previous estimates and the addition of previously uninventoried source categories. (NOTE: N2O is both a greenhouse gas and a precursor of nitric oxide (NO), which destroys stratospheric ozone.) …

  10. Power Saving MAC Protocols for WSNs and Optimization of S-MAC Protocol

    Directory of Open Access Journals (Sweden)

    Simarpreet Kaur

    2012-11-01

    Low-power MAC protocols have received a lot of consideration in the last few years because of their influence on the lifetime of wireless sensor networks, since sensors typically operate on batteries whose replacement is often difficult. A lot of work has been done to minimize energy expenditure and prolong sensor lifetime through energy-efficient designs across layers. Meanwhile, the sensor network should be able to maintain a certain throughput in order to fulfill the QoS requirements of the end user and to ensure the constancy of the network. This paper introduces different types of MAC protocols used for WSNs and discusses S-MAC, a medium-access control protocol designed for wireless sensor networks. S-MAC uses a few innovative techniques to reduce energy consumption and support self-configuration. A new protocol is suggested to improve the energy efficiency, latency and throughput of the existing MAC protocol for WSNs. A modification of the protocol is then proposed to eliminate the need for some nodes to stay awake longer than the other nodes, which improves the energy efficiency, latency and throughput and hence increases the life span of a wireless sensor network.

  11. Volumetric plasma source development and characterization

    International Nuclear Information System (INIS)

    Crain, Marlon D.; Maron, Yitzhak; Oliver, Bryan Velten; Starbird, Robert L.; Johnston, Mark D.; Hahn, Kelly Denise; Mehlhorn, Thomas Alan; Droemer, Darryl W.

    2008-01-01

    The development of plasma sources with densities and temperatures in the 10¹⁵-10¹⁷ cm⁻³ and 1-10 eV ranges, which are slowly varying over several hundreds of nanoseconds within several cubic centimeter volumes, is of interest for applications such as intense electron beam focusing as part of the x-ray radiography program. In particular, theoretical work (1,2) suggests that replacing neutral gas in electron beam focusing cells with highly conductive, pre-ionized plasma increases the time-averaged e-beam intensity on target, resulting in brighter x-ray sources. This LDRD project was an attempt to generate such a plasma source from fine metal wires. A high-voltage (20-60 kV), high-current (12-45 kA) capacitive discharge was sent through a 100 μm diameter aluminum wire, forming a plasma. The plasma's expansion was measured in time and space using spectroscopic techniques. Lineshapes and intensities from various plasma species were used to determine electron and ion densities and temperatures. Electron densities from the mid-10¹⁵ to mid-10¹⁶ cm⁻³ were generated, with corresponding electron temperatures of between 1 and 10 eV. These parameters were measured at distances of up to 1.85 cm from the wire surface at times in excess of 1 μs from the initial wire breakdown event. In addition, a hydrocarbon plasma from surface contaminants on the wire was also measured. Control of these contaminants by judicious choice of wire material, size, and/or surface coating allows for the ability to generate plasmas with similar density and temperature to those given above, but with lower atomic masses.

  12. Energy Reduction Multipath Routing Protocol for MANET Using Recoil Technique

    Directory of Open Access Journals (Sweden)

    Rakesh Kumar Sahu

    2018-04-01

    … maximization (AOMR-LM) and source-routing-based multicast protocol (SRMP) algorithms. Hence, the AOMDV-ER algorithm performs better than these recently developed algorithms.

  13. On-farm comparisons of different cleaning protocols in broiler houses.

    Science.gov (United States)

    Luyckx, K Y; Van Weyenberg, S; Dewulf, J; Herman, L; Zoons, J; Vervaet, E; Heyndrickx, M; De Reu, K

    2015-08-01

    The present study evaluated the effectiveness of 4 cleaning protocols designed to reduce the bacteriological infection pressure on broiler farms and prevent food-borne zoonoses. Additionally, difficult-to-clean locations and possible sources of infection were identified. Cleaning and disinfection rounds were evaluated in 12 broiler houses on 5 farms through microbiological analyses and adenosine triphosphate hygiene monitoring. Samples were taken at 3 different times: before cleaning, after cleaning, and after disinfection. At each sampling time, swabs were taken from various locations for enumeration of the total aerobic flora and Enterococcus spp. In addition, before cleaning and after disinfection, testing for Escherichia coli and Salmonella was carried out. Finally, adenosine triphosphate swabs and agar contact plates for total aerobic flora counts were taken after cleaning and disinfection, respectively. Total aerobic flora and Enterococcus spp. counts on the swab samples showed that cleaning protocols preceded by an overnight soaking with water caused a higher bacterial reduction compared to protocols without a preceding soaking step. Moreover, soaking of broiler houses leads to less water consumption and reduced working time during high-pressure cleaning. No differences were found between protocols using cold or warm water during cleaning. Drinking cups, drain holes, and floor cracks were identified as critical locations for cleaning and disinfection in broiler houses. © 2015 Poultry Science Association Inc.

  14. Cross-Layer Protocol as a Better Option in Wireless Mesh Network with Respect to Layered-Protocol

    OpenAIRE

    Ahmed Abdulwahab Al-Ahdal; Dr. V. P. Pawar; G. N. Shinde

    2014-01-01

    The optimal way to improve Wireless Mesh Network (WMN) performance is to use a better network protocol, but whether layered-protocol design or cross-layer design is the better option to optimize protocol performance in WMNs is still an ongoing research topic. In this paper, we focus on the cross-layer protocol as a better option with respect to the layered protocol. The layered protocol architecture (OSI model) divides networking tasks into layers and defines a set of services for each layer to b...

  15. Production and characterization of polyhydroxybutyrate from Vibrio harveyi MCCB 284 utilizing glycerol as carbon source.

    Science.gov (United States)

    Mohandas, S P; Balan, L; Lekshmi, N; Cubelio, S S; Philip, R; Bright Singh, I S

    2017-03-01

    The aim of this study was the production and characterization of polyhydroxybutyrate (PHB) from the moderately halophilic bacterium Vibrio harveyi MCCB 284 isolated from the tunicate Phallusia nigra. Twenty-five bacterial isolates were obtained from tunicate samples and three among them exhibited an orange fluorescence in Nile red staining, indicating the presence of PHB. One of the isolates, MCCB 284, which showed rapid growth and good polymer yield, was identified as V. harveyi. The optimum conditions for PHB production were pH 8.0, sodium chloride concentration 20 g l⁻¹, inoculum size 0.5% (v/v), glycerol 20 g l⁻¹ and 72 h of incubation at 30°C. A cell dry weight (CDW) of 3.2 g l⁻¹, a PHB content of 2.3 g l⁻¹ and a final PHB yield of 1.2 g l⁻¹ were achieved. The extracted PHB was characterized by FTIR, NMR and DSC-TGA techniques. An isolate of V. harveyi that could effectively utilize glycerol for growth and PHB accumulation was obtained from the tunicate P. nigra. The PHB produced was up to 72% of CDW. This is the first report of an isolate of V. harveyi which utilizes glycerol as the sole carbon source for PHB production with high biomass yield. This isolate could be of use as a candidate species for commercial PHB production using glycerol as the feedstock, as a source of genes for recombinant PHB production, or for synthetic biology. © 2016 The Society for Applied Microbiology.

  16. Seasonal and spatial variation of trace elements and metals in quasi-ultrafine (PM0.25) particles in the Los Angeles metropolitan area and characterization of their sources

    International Nuclear Information System (INIS)

    Saffari, Arian; Daher, Nancy; Shafer, Martin M.; Schauer, James J.; Sioutas, Constantinos

    2013-01-01

    A year-long sampling campaign of quasi-ultrafine particles (PM0.25) was conducted at 10 distinct locations across the Los Angeles south coast air basin, and concentrations of trace elements and metals were quantified at each site using high-resolution inductively coupled plasma sector field mass spectrometry. In order to characterize the sources of trace elements and metals, principal component analysis (PCA) was applied to the dataset. The major sources were identified as road dust (influenced by vehicular emissions as well as re-suspended soil), vehicular abrasion, residual oil combustion, cadmium sources and metal plating. These sources altogether accounted for approximately 85% of the total variance of the quasi-ultrafine elemental content. The concentrations of elements originating from source and urban locations generally declined moving from the coast to the inland. Occasional concentration peaks at the rural receptor sites were also observed, driven by the dominant westerly/southwesterly wind transporting the particles to the receptor areas.

  17. [Professor Xu Fu-song's traditional Chinese medicine protocols for male diseases: A descriptive analysis].

    Science.gov (United States)

    Liu, Cheng-yong; Xu, Fu-song

    2015-04-01

    To analyze the efficacy and medication principles of Professor Xu Fu-song's traditional Chinese medicine (TCM) protocols for male diseases. We reviewed and descriptively analyzed the unpublished complete medical records of 100 male cases treated by Professor Xu Fu-song with his TCM protocols from 1978 to 1992. The 100 cases involved 32 male diseases, most of which were difficult and complicated cases. Drug compliance was 95%. Each prescription was made up of 14 traditional Chinese drugs on average. The cure rate was 32%, and the effective rate was 85%. Professor Xu Fu-song advanced and proved some new theories and therapeutic methods. Professor Xu Fu-song's TCM protocols can be applied to a wide range of male diseases, mostly complicated ones, and are characterized by accurate differentiation of symptoms and signs, high drug compliance, and excellent therapeutic efficacy.

  18. Mechanical characterization and modeling of SiCf/SiC composite tubes

    International Nuclear Information System (INIS)

    Rohmer, E.

    2013-01-01

    This work is part of the development of the 4th generation of nuclear reactors. More precisely, it relates to the composite portion of the sandwich-type tubular cladding considered by the CEA for RNR-NA/Gaz type reactors. The texture is formed by a braiding technique, and the study focuses on interlock braided composites. These relatively new structures require extensive mechanical characterization. Two experimental protocols were developed to conduct tensile and internal pressure tests on tubes. Three different textures have been characterized. In addition, a multi-scale model was developed to connect the microstructure of the tube to its mechanical properties. This model is validated for the elastic behavior of one characterized texture. A first approach to damage in the structure is proposed and a possible improved protocol is discussed. (author) [fr]

  19. A Selective-Awakening MAC Protocol for Energy-Efficient Data Forwarding in Linear Sensor Networks

    Directory of Open Access Journals (Sweden)

    Iclia Villordo-Jimenez

    2018-01-01

    We introduce the Selective-Awakening MAC (SA-MAC) protocol, a synchronized duty-cycled protocol with pipelined scheduling for Linear Sensor Networks (LSNs). In the proposed protocol, nodes selectively awake depending on node density and traffic load conditions and on the state of the buffers of the receiving nodes. In order to characterize the performance of the proposed protocol, we present a Discrete-Time Markov Chain-based analysis that is validated through extensive discrete-event simulations. Our results show that SA-MAC significantly outperforms previous proposals in terms of energy consumption, throughput, and packet loss probability. This is particularly true under high node density and high traffic load conditions, which are expected to be common scenarios in the context of IoT applications. We also present an analysis by grade (i.e., the number of hops to the sink, which is located at one end of the LSN) that reveals that LSNs exhibit heterogeneous performance depending on the nodes' grade. Such results can be used as a design guideline for future LSN implementations.
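
    The basic operation behind a DTMC-based analysis is solving for the chain's stationary distribution; the sketch below does this for a made-up three-state node model (sleep/listen/transmit), which only illustrates the method, not SA-MAC's actual chain.

        # Sketch: stationary distribution of a small Discrete-Time Markov
        # Chain (transition matrix is illustrative).
        import numpy as np

        # states: 0 = sleep, 1 = listen, 2 = transmit
        P = np.array([[0.80, 0.15, 0.05],
                      [0.30, 0.50, 0.20],
                      [0.60, 0.30, 0.10]])

        # solve pi = pi.P together with sum(pi) = 1 (least squares)
        A = np.vstack([P.T - np.eye(3), np.ones(3)])
        b = np.array([0.0, 0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        print("stationary distribution:", np.round(pi, 4))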

  20. Current characterization methods for cellulose nanomaterials.

    Science.gov (United States)

    Foster, E Johan; Moon, Robert J; Agarwal, Umesh P; Bortner, Michael J; Bras, Julien; Camarero-Espinosa, Sandra; Chan, Kathleen J; Clift, Martin J D; Cranston, Emily D; Eichhorn, Stephen J; Fox, Douglas M; Hamad, Wadood Y; Heux, Laurent; Jean, Bruno; Korey, Matthew; Nieh, World; Ong, Kimberly J; Reid, Michael S; Renneckar, Scott; Roberts, Rose; Shatkin, Jo Anne; Simonsen, John; Stinson-Bagby, Kelly; Wanasekara, Nandula; Youngblood, Jeff

    2018-04-23

    A new family of materials comprised of cellulose, cellulose nanomaterials (CNMs), having properties and functionalities distinct from molecular cellulose and wood pulp, is being developed for applications that were once thought impossible for cellulosic materials. Commercialization, paralleled by research in this field, is fueled by the unique combination of characteristics, such as high on-axis stiffness, sustainability, scalability, and mechanical reinforcement of a wide variety of materials, leading to their utility across a broad spectrum of high-performance material applications. However, with this exponential growth in interest/activity, the development of measurement protocols necessary for consistent, reliable and accurate materials characterization has been outpaced. These protocols, developed in the broader research community, are critical for the advancement in understanding, process optimization, and utilization of CNMs in materials development. This review establishes detailed best practices, methods and techniques for characterizing CNM particle morphology, surface chemistry, surface charge, purity, crystallinity, rheological properties, mechanical properties, and toxicity for two distinct forms of CNMs: cellulose nanocrystals and cellulose nanofibrils.

  1. [Multidisciplinary protocol for computed tomography imaging and angiographic embolization of splenic injury due to trauma: assessment of pre-protocol and post-protocol outcomes].

    Science.gov (United States)

    Koo, M; Sabaté, A; Magalló, P; García, M A; Domínguez, J; de Lama, M E; López, S

    2011-11-01

    To assess conservative treatment of splenic injury due to trauma, following a protocol for computed tomography (CT) and angiographic embolization. To quantify the predictive value of CT for detecting bleeding and need for embolization. The care protocol developed by the multidisciplinary team consisted of angiography with embolization of lesions revealed by contrast extravasation under CT as well as embolization of grade III-V injuries observed, or grade I-II injuries causing hemodynamic instability and/or need for blood transfusion. We collected data on demographic variables, injury severity score (ISS), angiographic findings, and injuries revealed by CT. Pre-protocol and post-protocol outcomes were compared. The sensitivity and specificity of CT findings were calculated for all patients who required angiographic embolization. Forty-four and 30 angiographies were performed in the pre- and post-protocol periods, respectively. The mean (SD) ISSs in the two periods were 25 (11) and 26 (12), respectively. A total of 24 (54%) embolizations were performed in the pre-protocol period and 28 (98%) after implementation of the protocol. Two and 7 embolizations involved the spleen in the 2 periods, respectively; abdominal laparotomies numbered 32 and 25, respectively, and 10 (31%) vs 4 (16%) splenectomies were performed. The specificity and sensitivity values for contrast extravasation found on CT and followed by embolization were 77.7% and 79.5%. The implementation of this multidisciplinary protocol using CT imaging and angiographic embolization led to a decrease in the number of splenectomies. The protocol allows us to take a more conservative treatment approach.

  2. Miniaturized pulsed laser source for time-domain diffuse optics routes to wearable devices.

    Science.gov (United States)

    Di Sieno, Laura; Nissinen, Jan; Hallman, Lauri; Martinenghi, Edoardo; Contini, Davide; Pifferi, Antonio; Kostamovaara, Juha; Mora, Alberto Dalla

    2017-08-01

    We validate a miniaturized pulsed laser source for use in time-domain (TD) diffuse optics, following rigorous and shared protocols for performance assessment of this class of devices. This compact source (12 × 6 mm²) has been previously developed for range-finding applications and is able to provide short, high-energy (∼100 ps, ∼0.5 nJ) optical pulses at up to 1 MHz repetition rate. Here, we start with a basic-level laser characterization with an analysis of the suitability of this laser for the diffuse optics application. Then, we present a TD optical system using this source and its performance both in recovering the optical properties of tissue-mimicking homogeneous phantoms and in detecting localized absorption perturbations. Finally, as a proof of concept of in vivo application, we demonstrate that the system is able to detect hemodynamic changes occurring in the arm of healthy volunteers during a venous occlusion. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).

  3. Standardization of Nanoparticle Characterization: Methods for Testing Properties, Stability, and Functionality of Edible Nanoparticles.

    Science.gov (United States)

    McClements, Jake; McClements, David Julian

    2016-06-10

    There has been a rapid increase in the fabrication of various kinds of edible nanoparticles for oral delivery of bioactive agents, such as those constructed from proteins, carbohydrates, lipids, and/or minerals. It is currently difficult to compare the relative advantages and disadvantages of different kinds of nanoparticle-based delivery systems because researchers use different analytical instruments and protocols to characterize them. In this paper, we briefly review the various analytical methods available for characterizing the properties of edible nanoparticles, such as composition, morphology, size, charge, physical state, and stability. This information is then used to propose a number of standardized protocols for characterizing nanoparticle properties, for evaluating their stability to environmental stresses, and for predicting their biological fate. Implementation of these protocols would facilitate comparison of the performance of nanoparticles under standardized conditions, which would facilitate the rational selection of nanoparticle-based delivery systems for different applications in the food, health care, and pharmaceutical industries.

  4. Cryptographic Protocols:

    DEFF Research Database (Denmark)

    Geisler, Martin Joakim Bittel

    … cryptography was thus concerned with message confidentiality and integrity. Modern cryptography covers a much wider range of subjects, including the area of secure multiparty computation, which will be the main topic of this dissertation. Our first contribution is a new protocol for secure comparison, presented in Chapter 2. Comparisons play a key role in many systems such as online auctions and benchmarks — it is not unreasonable to say that when parties come together for a multiparty computation, it is because they want to make decisions that depend on private information. Decisions depend on comparisons. We have implemented the comparison protocol in Java and benchmarks show that it is highly competitive and practical. The biggest contribution of this dissertation is a general framework for secure multiparty computation. Instead of making new ad hoc implementations for each protocol, we want a single and extensible …

  5. Energy Efficient Medium Access Control Protocol for Clustered Wireless Sensor Networks with Adaptive Cross-Layer Scheduling.

    Science.gov (United States)

    Sefuba, Maria; Walingo, Tom; Takawira, Fambirai

    2015-09-18

    This paper presents an energy-efficient medium access control (MAC) protocol for clustered wireless sensor networks that aims to improve energy efficiency and delay performance. The proposed protocol employs adaptive cross-layer intra-cluster scheduling and inter-cluster relay selection diversity. The scheduling is based on the available data packets and the remaining energy level of the source node (SN). This helps to minimize idle listening on nodes without data to transmit as well as reducing control packet overhead. The relay selection diversity is carried out between clusters by the cluster head (CH) and the base station (BS). The diversity helps to improve network reliability and prolong the network lifetime. Relay selection is determined based on the communication distance, the remaining energy and the channel quality indicator (CQI) of the relay cluster head (RCH). An analytical framework for energy consumption and transmission delay of the proposed MAC protocol is presented in this work. The performance of the proposed MAC protocol is evaluated in terms of transmission delay, energy consumption, and network lifetime. The results obtained indicate that the proposed MAC protocol provides better performance than traditional cluster-based MAC protocols.
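
    Since relay selection is described as a function of distance, remaining energy, and CQI, a simple weighted-score rule can illustrate the idea; the weights, scaling, and candidate data below are hypothetical, not the protocol's actual criterion.

        # Sketch: picking a relay cluster head by a weighted score over
        # distance, remaining energy, and CQI (all values hypothetical).
        candidates = [
            # (relay id, distance m, remaining energy J, CQI 0..15)
            ("RCH-1", 40.0, 1.8, 11),
            ("RCH-2", 25.0, 0.9, 13),
            ("RCH-3", 60.0, 2.5, 7),
        ]

        def score(distance: float, energy: float, cqi: int,
                  w_d: float = 0.4, w_e: float = 0.4, w_q: float = 0.2) -> float:
            # shorter distance is better, so distance enters inversely
            return w_d * (100.0 / distance) + w_e * energy + w_q * cqi

        best = max(candidates, key=lambda c: score(*c[1:]))
        print("selected relay:", best[0])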

  6. Remote control of the industry processes. POWERLINK protocol application

    Science.gov (United States)

    Wóbel, A.; Paruzel, D.; Paszkiewicz, B.

    2017-08-01

    Current technological development enables the use of solutions characterized by lower failure rates and greater precision, allowing efficient, high-speed production and reliable individual components. The main scope of this article is the application of the POWERLINK protocol for communication with a B&R controller over Ethernet for recording process parameters. This enables control of the production cycle over an internal network connected to an industrial PC. Knowledge of the most important production parameters in real time allows a failure to be detected immediately after it occurs. For this purpose, a diagnostic station based on the B&R X20CP1301 controller was built to record measurement data such as pressure, temperature on both sides of the valve, and the torque required to change the valve setting. The use of the POWERLINK protocol allows status information to be transmitted every 200 μs.

  7. Identification and characterization of fine and coarse particulate matter sources in a middle-European urban environment

    International Nuclear Information System (INIS)

    Kertesz, Zs.; Szoboszlai, Z.; Angyal, A.; Dobos, E.; Borbely-Kiss, I.

    2010-01-01

    In this work a source apportionment study is presented which aimed to characterize the PM 2.5 and PM 2.5-10 sources in the urban area of Debrecen, East Hungary, by using streaker samples, IBA methods and positive matrix factorization (PMF) analysis. Samples of fine (PM 2.5 ) and coarse (PM 2.5-10 ) urban particulate matter were collected with 2 h time resolution during five sampling campaigns in 2007-2009, in different seasons, in the downtown of Debrecen. Elemental concentrations from Al to Pb of over 1000 samples were obtained by particle induced X-ray emission (PIXE); concentrations of black carbon (BC) were determined with a smoke stain reflectometer. On this database, source apportionment was carried out using the PMF method. Seven factors were identified for both size fractions, including soil dust, traffic, secondary aerosol - sulphates, domestic heating, oil combustion, agriculture and an unknown factor enriched with chlorine. The seasonal and daily variation of the different factors was studied, as well as their dependence on meteorological parameters. Besides determining the time patterns characteristic of the city, several emission episodes were identified, including a Saharan dust intrusion on 21st-24th May, 2008.
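
    The factor-analytic step can be sketched with an ordinary non-negative matrix factorization, which decomposes a samples-by-species concentration matrix into non-negative contributions and profiles. This is only an analogue of PMF under assumed toy data: real PMF additionally weights each data point by its measurement uncertainty, which plain NMF does not.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy concentration matrix: rows = 2-hour samples, columns = measured
# species (e.g. Al ... Pb, BC). Values are synthetic, for illustration only.
rng = np.random.default_rng(0)
X = rng.random((100, 12))

# Decompose X ~ G @ F with 7 factors, mirroring the seven sources found.
model = NMF(n_components=7, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)   # factor contributions per sample (time series)
F = model.components_        # factor profiles (species signatures)

print(G.shape, F.shape)      # (100, 7) (7, 12)
```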

  8. Chapter 23: Combined Heat and Power Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    Energy Technology Data Exchange (ETDEWEB)

    Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simons, George [Itron, Davis, CA (United States); Barsun, Stephan [Itron, Davis, CA (United States)

    2017-11-06

    The main focus of most evaluations is to determine the energy-savings impacts of the installed measure. This protocol defines a combined heat and power (CHP) measure as a system that sequentially generates both electrical energy and useful thermal energy from one fuel source at a host customer's facility or residence. This protocol is aimed primarily at regulators and administrators of ratepayer-funded CHP programs; however, project developers may find the protocol useful to understand how CHP projects are evaluated.

  9. Lead isotopic fingerprinting of aerosols to characterize the sources of atmospheric lead in an industrial city of India

    Science.gov (United States)

    Sen, Indra S.; Bizimis, Michael; Tripathi, Sachchida Nand; Paul, Debajyoti

    2016-03-01

    Anthropogenic Pb in the environment is primarily sourced from the combustion of fossil fuel and from high-temperature industries such as smelters. Identifying the sources and pathways of anthropogenic Pb in the environment is important because Pb toxicity is known to have adverse effects on human health. Pb pollution sources for America, Europe, and China are well documented. However, sources of atmospheric Pb are unknown in India, particularly after leaded gasoline was phased out in 2000. India has a developing economy with a rapidly emerging automobile and high-temperature industry, and anthropogenic Pb emission is expected to rise in the next decade. In this study, we report on the Pb-isotope compositions and trace metal ratios of airborne particulates collected in Kanpur, a large city in the northern part of India. The study shows that PM10 aerosols in the Kanpur area had elevated concentrations of Cd, Pb, Zn, As, and Cu; however, their concentrations are well below the United States Environmental Protection Agency chronic exposure limit. Lead isotopic and trace metal data reveal industrial emission as the plausible source of anthropogenic Pb in the atmosphere in Kanpur. However, Pb isotopic compositions of potential source end-members are required to fully evaluate Pb contamination in India over time. This is the first study that characterizes the isotopic composition of atmospheric Pb in an Indian city after leaded gasoline was phased out in 2000.
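
    A common calculation with such data, though not necessarily the authors' exact one, is a two-end-member mixing model: given isotope ratios for a natural and an industrial end-member, the anthropogenic fraction of Pb in a sample follows from linear mixing. The end-member ratios below are hypothetical placeholders.

```python
# Two-end-member mixing using 206Pb/207Pb ratios:
#   f_industrial = (R_sample - R_natural) / (R_industrial - R_natural)
# The end-member ratios below are made-up placeholders.
R_natural = 1.20      # hypothetical background (crustal) end-member
R_industrial = 1.12   # hypothetical industrial-emission end-member

def industrial_fraction(r_sample):
    return (r_sample - R_natural) / (R_industrial - R_natural)

for r in (1.13, 1.15, 1.18):
    print(f"206Pb/207Pb = {r:.2f} -> industrial fraction = "
          f"{industrial_fraction(r):.2f}")
```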

  10. Design of a protocol for the use of radiochromic films in IMRT plans control

    International Nuclear Information System (INIS)

    Aberbuj, P.D.

    2011-01-01

    The purpose of this work is to design a protocol for the use of Gafchromic EBT2 radiochromic films with the Epson CX5600 scanner as a dosimetric system for IMRT patient-specific quality assurance, with an emphasis on keeping the uncertainty below 3%. The uncertainty sources studied are related to the scanner reproducibility, the film and scanner homogeneity, and the dose estimation. By introducing a series of modifications to the initial protocol, the total uncertainty was kept below 3% in the range 30-500 cGy, being less than 1% between 150 and 500 cGy. (author)
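
    The way independent uncertainty sources such as those listed (scanner reproducibility, film and scanner homogeneity, dose estimation) combine into a total budget can be sketched as a sum in quadrature. The component values below are illustrative assumptions, not the author's measurements.

```python
import math

# Combine independent relative uncertainties (in %) in quadrature.
# The individual values below are placeholders, not the paper's data.
components = {"scanner reproducibility": 1.2,
              "film homogeneity": 1.5,
              "scanner homogeneity": 1.0,
              "dose estimation": 1.4}

total = math.sqrt(sum(u**2 for u in components.values()))
print(f"total uncertainty = {total:.2f}%")   # ~2.6%, below the 3% target
```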

  11. Multimode Communication Protocols Enabling Reconfigurable Radios

    Directory of Open Access Journals (Sweden)

    Berlemann Lars

    2005-01-01

    Full Text Available This paper focuses on the realization and application of a generic protocol stack for reconfigurable wireless communication systems. This focus extends the field of software-defined radios, which usually concentrates on the physical layer. The generic protocol stack comprises common protocol functionality and behavior which are extended through specific parts of the targeted radio access technology. This paper considers parameterizable modules of basic protocol functions residing in the data link layer of the ISO/OSI model. System-specific functionality of the protocol software is realized through adequate parameterization and composition of the generic modules. The generic protocol stack allows an efficient realization of reconfigurable protocol software and enables a completely reconfigurable wireless communication system. It is a first step from side-by-side realized, preinstalled modes in a terminal towards a dynamically reconfigurable anymode terminal. The presented modules of the generic protocol stack can also be regarded as a toolbox for the accelerated and cost-efficient development of future communication protocols.

  12. Neutron source characterization for materials experiments

    International Nuclear Information System (INIS)

    Greenwood, L.R.

    1982-01-01

    Data are presented from HFIR-CTR32, EBRII-X287, and the Omega West Reactor. An important new source of damage in nickel arises from the 340 keV 56Fe recoil from the 59Ni(n,α) reaction used to produce high helium levels in materials irradiations in a thermal spectrum. The status of all other experiments is summarized

  13. Ancestors protocol for scalable key management

    Directory of Open Access Journals (Sweden)

    Dieter Gollmann

    2010-06-01

    Full Text Available Group key management is an important functional building block for any secure multicast architecture and has therefore been extensively studied in the literature. The main proposed protocol is Adaptive Clustering for Scalable Group Key Management (ASGK). According to the ASGK protocol, the multicast group is divided into clusters, where each cluster consists of areas of members. Each cluster uses its own Traffic Encryption Key (TEK). These clusters are updated periodically depending on the dynamism of the members during the secure session. A modified protocol, called the Ancestors protocol, has been proposed based on ASGK, with modifications that balance the number of affected members and the encryption/decryption overhead for any number of areas when a member joins or leaves the group. According to the Ancestors protocol, every area receives the dynamism of the members from its parents. The main objective of the modified protocol is to reduce the number of members affected by joins and leaves, so that the 1-affects-n overhead is reduced. A comparative study between the ASGK protocol and the modified protocol shows that the modified protocol always outperforms ASGK.
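
    The 1-affects-n idea can be sketched with a small tree of areas: a membership change in one area triggers rekeying only along the path from that area to the root, not across the whole group. The tree below is hypothetical and the sketch omits all actual key material.

```python
# Hypothetical sketch of the "ancestors" idea: areas form a tree, and a
# membership change in one area triggers rekeying only along the path
# from that area up to the root, rather than across the whole group.
PARENT = {"area1": "cluster1", "area2": "cluster1",
          "area3": "cluster2", "area4": "cluster2",
          "cluster1": "root", "cluster2": "root", "root": None}

def rekey_path(area):
    """Return the chain of areas whose keys must be refreshed."""
    path = []
    while area is not None:
        path.append(area)
        area = PARENT[area]
    return path

print(rekey_path("area3"))   # ['area3', 'cluster2', 'root']
```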

  14. Experimental characterization and Monte Carlo simulation of Si(Li) detector efficiency by radioactive sources and PIXE

    Energy Technology Data Exchange (ETDEWEB)

    Mesradi, M. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France); Elanique, A. [Departement de Physique, FS/BP 8106, Universite Ibn Zohr, Agadir, Maroc (Morocco); Nourreddine, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)], E-mail: abdelmjid.nourreddine@ires.in2p3.fr; Pape, A.; Raiser, D.; Sellam, A. [Institut Pluridisciplinaire Hubert-Curien, UMR 7178 CNRS/IN2P3 et Universite Louis Pasteur, 23 rue du Loess, BP 28, F-67037 Strasbourg Cedex 2 (France)

    2008-06-15

    This work relates to the study and characterization of the response function of an X-ray spectrometry system. The intrinsic efficiency of a Si(Li) detector has been simulated with the Monte Carlo codes MCNP and GEANT4 in the photon energy range of 2.6-59.5 keV. After finding it necessary to take a radiograph of the detector inside its cryostat to learn the correct dimensions, agreement within 10% between the simulations and experimental measurements with several point-like sources and PIXE results was obtained.

  15. Noble gas signatures in the Island of Maui, Hawaii: Characterizing groundwater sources in fractured systems

    Science.gov (United States)

    Niu, Yi; Castro, M. Clara; Hall, Chris M.; Gingerich, Stephen B.; Scholl, Martha A.; Warrier, Rohit B.

    2017-01-01

    Uneven distribution of rainfall and freshwater scarcity in populated areas in the Island of Maui, Hawaii, renders water resources management a challenge in this complex and ill-defined hydrological system. A previous study in the Galapagos Islands suggests that noble gas temperatures (NGTs) record seasonality in that fractured, rapid-infiltration groundwater system rather than the mean annual air temperature (MAAT) commonly observed in sedimentary systems where infiltration is slower, thus providing information on recharge sources and potential flow paths. Here we report noble gas results from the basal aquifer, springs, and rainwater in Maui to explore the potential for noble gases in characterizing this type of complex fractured hydrologic system. Most samples display a mass-dependent depletion pattern with respect to surface conditions consistent with previous observations both in the Galapagos Islands and Michigan rainwater. Basal aquifer and rainwater noble gas patterns are similar and suggest direct, fast recharge from precipitation to the basal aquifer. In contrast, multiple springs, representative of perched aquifers, display highly variable noble gas concentrations suggesting recharge from a variety of sources. The distinct noble gas patterns for the basal aquifer and springs suggest that basal and perched aquifers are separate entities. Maui rainwater displays high apparent NGTs, incompatible with surface conditions, pointing either to an origin at high altitudes with the presence of ice or an ice-like source of undetermined origin. Overall, noble gas signatures in Maui reflect the source of recharge rather than the expected altitude/temperature relationship commonly observed in sedimentary systems.

  16. An improved machine learning protocol for the identification of correct Sequest search results

    Directory of Open Access Journals (Sweden)

    Lu Hui

    2010-12-01

    Full Text Available Abstract Background Mass spectrometry has become a standard method by which the proteomic profile of cell or tissue samples is characterized. To fully take advantage of tandem mass spectrometry (MS/MS) techniques in large-scale protein characterization studies, robust and consistent data analysis procedures are crucial. In this work we present a machine learning based protocol for the identification of correct peptide-spectrum matches from Sequest database search results, improving on previously published protocols. Results The developed model improves on published machine learning classification procedures by 6% as measured by the area under the ROC curve. Further, we show how the developed model can be presented as an interpretable tree of additive rules, thereby effectively removing the 'black-box' notion often associated with machine learning classifiers, allowing for comparison with expert rules-of-thumb. Finally, a method for extending the developed peptide identification protocol to give probabilistic estimates of the presence of a given protein is proposed and tested. Conclusions We demonstrate the construction of a high-accuracy classification model for Sequest search results from MS/MS spectra obtained by MALDI ionization. The developed model performs well in identifying correct peptide-spectrum matches and is easily extendable to the protein identification problem. The relative ease with which additional experimental parameters can be incorporated into the classification framework, to give additional discriminatory power, allows for future tailoring of the model to take advantage of information from specific instrument set-ups.
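
    As a hedged illustration of this kind of classifier, the sketch below trains a gradient-boosted model on simulated Sequest-style features and reports the area under the ROC curve. The features, data and model choice are assumptions for demonstration; the paper's own feature set and learner are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for Sequest search results: columns mimic typical
# scoring features (XCorr, deltaCn, Sp rank, mass error). Labels mark
# correct vs incorrect peptide-spectrum matches. All data is simulated.
rng = np.random.default_rng(1)
n = 2000
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 4)) + y[:, None] * [1.0, 0.8, -0.5, -0.3]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"area under ROC curve: {auc:.3f}")
```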

  17. Resource use and costs of type 2 diabetes patients receiving managed or protocolized primary care: a controlled clinical trial.

    Science.gov (United States)

    van der Heijden, Amber A W A; de Bruijne, Martine C; Feenstra, Talitha L; Dekker, Jacqueline M; Baan, Caroline A; Bosmans, Judith E; Bot, Sandra D M; Donker, Gé A; Nijpels, Giel

    2014-06-25

    The increasing prevalence of diabetes is associated with increased health care use and costs. Innovations are needed to improve the quality of care, manage the increasing demand for health care and control the growth of health care costs. The aim of this study is to evaluate the care process and costs of managed, protocolized and usual care for type 2 diabetes patients from a societal perspective. In two distinct regions of the Netherlands, both managed and protocolized diabetes care were implemented. Managed care was characterized by centralized organization, coordination, responsibility and centralized annual assessment. Protocolized care had a partly centralized organizational structure. Usual care was characterized by a decentralized organizational structure. Using a quasi-experimental control group pretest-posttest design, the care process (guideline adherence) and costs were compared between managed (n = 253), protocolized (n = 197), and usual care (n = 333). We made a distinction between direct health care costs, direct non-health care costs and indirect costs. Multivariate regression models were used to estimate differences in costs adjusted for confounding factors. Because of the skewed distribution of the costs, bootstrapping methods (5000 replications) with a bias-corrected and accelerated approach were used to estimate 95% confidence intervals (CI) around the differences in costs. Compared to usual and protocolized care, in managed care more patients were treated according to diabetes guidelines. Secondary health care use was higher in patients under usual care compared to managed and protocolized care. Compared to usual care, direct costs were significantly lower in managed care (€ -1,181 (95% CI: -2,597 to -334)), while indirect costs were higher (€ 758 (95% CI: -353 to 2,701)), although not significantly. Direct, indirect and total costs were lower in protocolized care compared to usual care (though not significantly). Compared to usual care, managed
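
    The bootstrap procedure described (5000 replications, bias-corrected and accelerated) can be sketched with SciPy on simulated cost data; the distributions below are invented, and the real analysis additionally adjusted for confounders.

```python
import numpy as np
from scipy import stats

# Simulated skewed cost data (euros) for two care groups; the real data
# came from trial participants. We bootstrap the difference in mean costs
# with 5000 BCa resamples, as the abstract describes.
# Note: multi-sample BCa requires SciPy >= 1.11.
rng = np.random.default_rng(2)
managed = rng.lognormal(mean=7.6, sigma=0.9, size=253)
usual = rng.lognormal(mean=7.8, sigma=0.9, size=333)

def mean_difference(a, b):
    return np.mean(a) - np.mean(b)

res = stats.bootstrap((managed, usual), mean_difference,
                      n_resamples=5000, method="BCa", random_state=0)
print(res.confidence_interval)   # 95% CI around the cost difference
```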

  18. Successful characterization of radioactive waste at the Savannah River Site

    International Nuclear Information System (INIS)

    Hughes, M.B.; Miles, G.M.

    1995-01-01

    Characterization of the low-level radioactive waste generated by forty-five independent operating facilities at the Savannah River Site (SRS) experienced a slow start. However, the site effectively accelerated waste characterization based on the findings of an independent assessment that recommended several changes to the existing process. The new approach included the development of a generic waste characterization protocol and methodology and the formation of a technical board to approve waste characterization. As a result, consistent, detailed characterization of waste streams from SRS facilities was achieved in six months

  19. EVA Human Health and Performance Benchmarking Study Overview and Development of a Microgravity Protocol

    Science.gov (United States)

    Norcross, Jason; Jarvis, Sarah; Bekdash, Omar; Cupples, Scott; Abercromby, Andrew

    2017-01-01

    The primary objective of this study is to develop a protocol to reliably characterize human health and performance metrics for individuals working inside various EVA suits under realistic spaceflight conditions. Expected results and methodologies developed during this study will provide the baseline benchmarking data and protocols with which future EVA suits and suit configurations (e.g., varied pressure, mass, center of gravity [CG]) and different test subject populations (e.g., deconditioned crewmembers) may be reliably assessed and compared. Results may also be used, in conjunction with subsequent testing, to inform fitness-for-duty standards, as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  20. Characterization, weathering, and application of sesquiterpanes to source identification of spilled oils

    International Nuclear Information System (INIS)

    Wang, Z.D.; Yang, C.; Fingas, M.; Hollebone, B.P.; Landriault, M.

    2005-01-01

    Sesquiterpanes are a component of crude oils and ancient sediments. This study examined the feasibility of using them as bicyclic biomarkers for fingerprinting and identifying unknown lighter petroleum product spills. The study identified and characterized sesquiterpanes in crude oils and petroleum products. The distributions of sesquiterpanes in different oils, oil distillation fractions and refined products were also studied, along with the effects of evaporative weathering on the distribution and concentration of sesquiterpanes. Several diagnostic indexes of sesquiterpanes were developed for oil correlation and differentiation. Most high-molecular-weight biomarkers are removed from lighter petroleum products during the refining process; therefore, high-boiling-point pentacyclic triterpanes and steranes are often absent in lighter petroleum products. However, the smaller bicyclic sesquiterpanes such as drimane and eudesmane are highly concentrated in petroleum products such as light gas oil. Gas chromatography and mass spectrometry analysis of these bicyclic biomarkers can be used to correlate, differentiate and identify the source of lighter petroleum products. 15 refs., 2 tabs., 9 figs
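
    Diagnostic-index fingerprinting of this kind can be sketched as computing ratios of sesquiterpane peak areas for a spill and for candidate sources, then ranking sources by closeness. The compounds chosen and all peak areas below are illustrative, not the study's indexes.

```python
# Hypothetical diagnostic-ratio fingerprinting: compute ratios of
# sesquiterpane peak areas (GC-MS) for a spill sample and candidate
# sources, then rank sources by closeness. Peak areas are made up.
samples = {
    "spill":    {"drimane": 820, "eudesmane": 410, "C15_sesq": 260},
    "source_A": {"drimane": 800, "eudesmane": 395, "C15_sesq": 250},
    "source_B": {"drimane": 500, "eudesmane": 600, "C15_sesq": 300},
}

def ratios(peaks):
    return (peaks["drimane"] / peaks["eudesmane"],
            peaks["drimane"] / peaks["C15_sesq"])

spill = ratios(samples["spill"])
for name in ("source_A", "source_B"):
    r = ratios(samples[name])
    dist = sum((a - b) ** 2 for a, b in zip(spill, r)) ** 0.5
    print(name, [round(x, 2) for x in r], f"distance={dist:.2f}")
```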

  1. Epistemic Protocols for Distributed Gossiping

    Directory of Open Access Journals (Sweden)

    Krzysztof R. Apt

    2016-06-01

    Full Text Available Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets. We consider distributed gossip protocols which are expressed by means of epistemic logic. We provide an operational semantics of such protocols and set up an appropriate framework to argue about their correctness. Then we analyze specific protocols for complete graphs and for directed rings.
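
    The gossip goal itself, that every agent ends up knowing every secret, is easy to simulate. The sketch below runs a naive strategy (call any agent whose set of known secrets differs from yours) on a complete graph; it illustrates the problem, not the paper's epistemic protocols.

```python
import itertools

# Simulate naive gossiping on a complete graph: each call merges the two
# agents' sets of known secrets; stop when everyone knows all secrets.
def gossip(n):
    knows = {a: {a} for a in range(n)}     # agent a starts with secret a
    calls = 0
    while any(len(s) < n for s in knows.values()):
        for a, b in itertools.permutations(range(n), 2):
            if knows[a] != knows[b]:
                merged = knows[a] | knows[b]
                knows[a] = set(merged)
                knows[b] = set(merged)
                calls += 1
                break
    return calls

print(gossip(6))   # number of calls until all 6 agents know all secrets
```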

  2. The Kyoto Protocol Emissions Trading Mechanisms - A Model for financing future nuclear development in Romania

    International Nuclear Information System (INIS)

    Purica, Ionut; John Saroudis

    2001-01-01

    At the beginning of 2001 Romania ratified the Kyoto Protocol (Law 3/2001) thus becoming the first European country to do so. The mechanisms of the Kyoto Protocol are now opening new ways to sponsor the financing of nuclear projects. In May 2001 Societatea Nationala Nuclearoelectrica S.S. (SNN) and Atomic Energy of Canada Limited and ANSALDO of Italy signed a contract to complete the second CANDU unit at Cernavoda thus giving a new momentum to the nuclear program in Romania. The Government of Romania has indicated its desire to proceed with the completion of the other units on the Cernavoda site and is open to explore every potential financing mechanism to make this a reality. Although the Kyoto Protocol was not ratified by those countries that have the greatest need to reduce emissions, a market for emissions trading has developed, Canada being one of the important players in this market. Since the emission reduction per dollar invested in the Romanian nuclear program would bring much more reduction than the marginal reduction per dollar invested in environmental protection programs in Canada, where the saturation effect is already taking place, we consider that the application of the Kyoto Protocol mechanisms represents a realistic source for a sustainable cooperation of the two countries. This trend is in line with the latest activities of the International Atomic Energy Agency (IAEA). This paper analyzes the impact that the use of emissions credits would have on a typical financing scheme for a future CANDU project in Romania given the present situation and also proposes a model for the structure of the emissions trade that would generate a source of funding for the project. The conclusion is that there is real potential in using Kyoto Protocol mechanisms for financing nuclear development with benefits for both Romania and Canada. (authors)

  3. SIMULATION OF INTERCONNECTION BETWEEN AUTONOMOUS SYSTEMS (AS) USING THE BORDER GATEWAY PROTOCOL (BGP)

    Directory of Open Access Journals (Sweden)

    Hari Antoni Musril

    2017-09-01

    Full Text Available An autonomous system (AS) is a collection of networks having the same set of routing policies. Each AS has administrative control over its own inter-domain routing policy. Computer networks consisting of several ASs with different routing policies cannot interconnect with one another, which inhibits communication in the network. A protocol is therefore needed that can connect different ASs. The Border Gateway Protocol (BGP) is an inter-domain routing protocol, i.e. a protocol between different ASs, that is used to exchange routing information between them. In a typical inter-network (and in the Internet) each autonomous system designates one or more routers that run BGP software. BGP routers in each AS are linked to those in one or more other ASs. The ability to exchange routing table information between Autonomous Systems (ASs) is one of the advantages of BGP. BGP implements routing policies based on a set of attributes accompanying each route, used to pick the "shortest" path across multiple ASs, along with one or more routing policies. BGP uses an algorithm that cannot be classified as pure "Distance Vector" or pure "Link State". It is a path vector routing protocol, as it defines a route as the sequence of ASs that traffic passes through from the source AS to the destination AS. This paper discusses the implementation of the BGP routing protocol in a network with different ASs in order to interconnect them. The implementation uses Packet Tracer 7.0 software for prototyping and simulating the network, so that it can later be applied to a real network. Based on the experiments carried out, the BGP routing protocol can connect two routers that belong to different autonomous systems.
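
    The path-vector behavior described, preferring shorter AS paths and rejecting advertisements that already contain the local AS number, can be sketched in a few lines; the AS numbers and prefix below are examples, and this is not a Packet Tracer configuration.

```python
# Minimal path-vector route selection in the BGP spirit: prefer the
# shortest AS path, and reject any advertisement whose AS path already
# contains the local AS number (loop prevention). ASNs are examples.
LOCAL_AS = 65001

advertisements = {
    "10.0.0.0/24": [
        [65002, 65010],          # via neighbor AS 65002
        [65003, 65005, 65010],   # via neighbor AS 65003
        [65004, 65001, 65010],   # looped: contains our own AS
    ],
}

def best_path(prefix):
    candidates = [p for p in advertisements[prefix] if LOCAL_AS not in p]
    return min(candidates, key=len) if candidates else None

print(best_path("10.0.0.0/24"))   # [65002, 65010]
```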

  4. The Chandra Source Catalog : Automated Source Correlation

    Science.gov (United States)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed different numbers of times at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).
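
    The core cross-correlation decision, whether a new detection updates an existing master source or creates a new one, can be sketched as a positional match within a tolerance radius. The coordinates and the 5-arcsecond radius below are invented examples, not the pipeline's actual matching rules.

```python
import math

# Toy version of the correlation step: two detections are treated as the
# same master source when their angular separation is below a matching
# radius. Coordinates and the 5-arcsecond radius are examples.
def separation_arcsec(ra1, dec1, ra2, dec2):
    """Small-angle approximation; inputs in degrees."""
    dra = (ra1 - ra2) * math.cos(math.radians((dec1 + dec2) / 2))
    ddec = dec1 - dec2
    return math.hypot(dra, ddec) * 3600.0

catalog = [(83.8221, -5.3911)]       # existing master source (RA, Dec)
detection = (83.8224, -5.3910)       # new observation

for ra, dec in catalog:
    if separation_arcsec(ra, dec, *detection) < 5.0:
        print("update existing master source")
        break
else:
    print("create new master source")
```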

  5. Using semantics for representing experimental protocols.

    Science.gov (United States)

    Giraldo, Olga; García, Alexander; López, Federico; Corcho, Oscar

    2017-11-13

    An experimental protocol is a sequence of tasks and operations executed to perform experimental research in biological and biomedical areas, e.g. biology, genetics, immunology, neurosciences, virology. Protocols often include references to equipment, reagents, descriptions of critical steps, troubleshooting and tips, as well as any other information that researchers deem important for facilitating the reusability of the protocol. Although experimental protocols are central to reproducibility, the descriptions are often cursory. There is a need for a unified framework with respect to the syntactic structure and the semantics for representing experimental protocols. In this paper we present the "SMART Protocols ontology", an ontology for representing experimental protocols. Our ontology represents the protocol as a workflow with domain-specific knowledge embedded within a document. We also present the Sample Instrument Reagent Objective (SIRO) model, which represents the minimal common information shared across experimental protocols. SIRO was conceived in the same realm as the Patient Intervention Comparison Outcome (PICO) model that supports search, retrieval and classification purposes in evidence-based medicine. We evaluate our approach against a set of competency questions modeled as SPARQL queries and processed against a set of published and unpublished protocols modeled with the SP Ontology and the SIRO model. Our approach makes it possible to answer queries such as "Which protocols use tumor tissue as a sample?". Improving reporting structures for experimental protocols requires collective efforts from authors, peer reviewers, editors and funding bodies. The SP Ontology is a contribution towards this goal. We build upon previous experiences and bring together the views of researchers managing protocols in their laboratory work. Website: https://smartprotocols.github.io/ .
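
    The competency question quoted above can be illustrated as a SPARQL query over a small in-memory graph. The namespace and property names below are hypothetical stand-ins, not the SMART Protocols ontology's actual IRIs.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical triples in the spirit of the SMART Protocols / SIRO model;
# the namespace and property names are illustrative placeholders.
SP = Namespace("https://smartprotocols.github.io/example#")
g = Graph()
g.add((SP.protocol1, RDF.type, SP.ExperimentalProtocol))
g.add((SP.protocol1, SP.hasSample, Literal("tumor tissue")))
g.add((SP.protocol2, RDF.type, SP.ExperimentalProtocol))
g.add((SP.protocol2, SP.hasSample, Literal("blood plasma")))

query = """
PREFIX sp: <https://smartprotocols.github.io/example#>
SELECT ?protocol WHERE {
  ?protocol a sp:ExperimentalProtocol ;
            sp:hasSample "tumor tissue" .
}
"""
for row in g.query(query):
    print(row.protocol)   # -> ...#protocol1
```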

  6. Characterization and treatment of water used for human consumption from six sources located in the Cameron/Tuba City abandoned uranium mining area.

    Science.gov (United States)

    Orescanin, Visnja; Kollar, Robert; Nad, Karlo; Mikelic, Ivanka Lovrencic; Kollar, Iris

    2011-01-01

    The purpose of this research was the characterization and improvement of the quality of water used for human consumption from unregulated/regulated water sources located in the Cameron/Tuba City abandoned uranium mining area (NE Arizona, western edge of the Navajo Nation). Samples were collected at six water sources, comprising the regulated sources Wind Mill (Tank 3T-538), Badger Springs and Paddock Well as well as the unregulated sources Willy Spring, Water Wall and Water Hole. Samples taken from Wind Mill, Water Wall and Water Hole were characterized by high turbidity and color as well as high levels of manganese, iron and nickel and an elevated value of molybdenum. A high level of iron was also found in Badger Spring, Willy Spring, and Paddock Well. These three water sources were also characterized by elevated values of fluoride and vanadium. Significant amounts of zinc were found in the Water Wall and Water Hole samples. The Water Wall sample was also characterized by a high level of Cr(VI). Compared to the primary or secondary Navajo Nation Environmental Protection Agency (NNEPA) water quality standards, the highest enrichment was found for turbidity (50,000 times), color (up to 1,796 times), manganese (71 times), Cr(VI) (17.5 times), iron (7.4 times) and arsenic (5.2 times). Activities of (226)Ra and (238)U in the water samples were still in agreement with the maximum contaminant levels. In order to comply with the NNEPA water quality standards, the water samples were subjected to electrochemical treatment. This method was selected due to its high removal efficiency for heavy metals and uranium, lower settlement time, production of a smaller volume of waste mud and higher stability of the waste mud compared to physico-chemical treatment. Following the treatment, concentrations of heavy metals and activities of radionuclides in all samples were significantly lower compared to the NNEPA or WHO regulated values. The maximum removal efficiencies for color, turbidity, arsenic, manganese, molybdenum and

  7. The Comparative Study Some of Reactive and Proactive Routing Protocols in The Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Anas Ali Hussien

    2018-02-01

    Full Text Available The wireless sensor network (WSN) consists mostly of a large number of nodes in a large area, where not all nodes are directly connected. Its applications comprise a wide variety of scenarios. The mobile nodes are free to move because this network has a self-structured topology. Routing protocols are responsible for detecting and maintaining paths in the network, and they are classified into reactive (On-Demand), proactive (Table-driven), and hybrid. This paper presents a performance study of some WSN routing protocols: Dynamic Source Routing (DSR), Ad hoc On-Demand Distance Vector (AODV), and Destination-Sequenced Distance-Vector (DSDV). The comparison is made according to important metrics like packet delivery ratio (PDR), total packets dropped, average end-to-end delay (Avg EED), and normalized routing load, under Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic connections and with a varying number of nodes, pause time, and speed. In this work NS-2.35, installed on the Ubuntu 14.04 operating system, was used to implement the scenarios. We conclude that DSR has better performance over TCP connections, while DSDV performs better under UDP.

  8. Preparation, physical characterization, and stability of Ferrous-Chitosan microcapsules using different iron sources

    Science.gov (United States)

    Handayani, Noer Abyor; Luthfansyah, M.; Krisanti, Elsa; Kartohardjono, Sutrasno; Mulia, Kamarza

    2017-11-01

    Dietary modification, supplementation and food fortification are common strategies to alleviate iron deficiency. Fortification of food is an effective long-term approach to improve the iron status of populations. Adding iron directly to food causes sensory problems and decreases its bioavailability. The purpose of iron encapsulation is: (1) to improve iron bioavailability, by preventing oxidation and contact with inhibitors and competitors; and (2) to disguise the rancid aroma and flavor of iron. A microcapsule formulation of two suitable iron compounds (iron(II) fumarate and iron(II) gluconate) using chitosan as a biodegradable polymer is therefore of considerable interest. A freeze dryer was used to complete the iron microencapsulation process. The main objective of the present study was to prepare and characterize the iron-chitosan microcapsules. Physical characterization, i.e. encapsulation efficiency, iron loading capacity, and SEM, is also discussed in this paper, as is the stability of the microencapsulated iron under simulated gastrointestinal conditions. Both iron sources were highly encapsulated, ranging from 71.5% to 98.5%. Furthermore, the highest ferrous fumarate and ferrous gluconate loadings were 1.9% and 4.8%, respectively. About 1.04% to 9.17% of Fe II was released in simulated gastric fluid over two hours, and 45.17% to 75.19% of total Fe was released in simulated intestinal fluid over six hours.

  9. Field protocols for the genomic era

    Directory of Open Access Journals (Sweden)

    N Bulatova

    2009-08-01

    Full Text Available For many decades the karyotype was the only source of overall genomic information obtainable from a species of mammal. However, approaches have been developed in recent years to obtain molecular and ultimately genomic information directly from the extracted DNA of an organism, and molecular data have accumulated hugely for mammalian taxa. The growing volume of studies should motivate field researchers to collect suitable samples for molecular analysis from various species across their entire ranges. This is the reason why we here include a molecular sampling procedure within a field work protocol, which also covers more traditional (including cytogenetic) techniques. In this way we hope to foster the development of molecular and genomic studies in non-standard wild mammals.

  10. Laser and beta source setup characterization of 3D-DDTC detectors fabricated at FBK-irst

    Energy Technology Data Exchange (ETDEWEB)

    Zoboli, A. [INFN, Sezione di Padova (Gruppo Collegato di Trento), and Dipartimento di Ingegneria e Scienza dell' Informazione, Universita di Trento, Via Sommarive, 14, I-38050 Povo (Trento) (Italy)], E-mail: zoboli@disi.unitn.it; Dalla Betta, G.-F. [INFN, Sezione di Padova (Gruppo Collegato di Trento), and Dipartimento di Ingegneria e Scienza dell' Informazione, Universita di Trento, Via Sommarive, 14, I-38050 Povo (Trento) (Italy); Boscardin, M. [Fondazione Bruno Kessler, Centro per i Materiali e i Microsistemi, Via Sommarive, 18, I-38050 Povo (Trento) (Italy); Bosisio, L. [Dip. di Fisica e INFN, Universita di Trieste, I-34127, Trieste (Italy); Eckert, S.; Kuehn, S.; Parzefall, U. [Institute of Physics, University of Freiburg, Hermann-Herder-Str. 3, 79104 Freiburg (Germany); Piemonte, C.; Ronchin, S.; Zorzi, N. [Fondazione Bruno Kessler, Centro per i Materiali e i Microsistemi, Via Sommarive, 18, I-38050 Povo (Trento) (Italy)

    2009-06-01

    We report on the functional characterization of the first batch of 3D Double-Sided Double Type Column (3D-DDTC) detectors fabricated at FBK, Trento. This detector concept represents the evolution of the previous 3D-STC detectors towards full 3D detectors, and is expected to achieve a performance which is comparable to standard 3D detectors, but with a simpler fabrication process. Measurements were performed on detectors in the microstrip configuration coupled to the ATLAS ABCD3T binary readout. This paper reports spatially resolved signal efficiency tests made with a pulsed infrared laser setup and charge collection efficiency tests made with a Beta source.

  11. Laser and beta source setup characterization of 3D-DDTC detectors fabricated at FBK-irst

    International Nuclear Information System (INIS)

    Zoboli, A.; Dalla Betta, G.-F.; Boscardin, M.; Bosisio, L.; Eckert, S.; Kuehn, S.; Parzefall, U.; Piemonte, C.; Ronchin, S.; Zorzi, N.

    2009-01-01

    We report on the functional characterization of the first batch of 3D Double-Sided Double Type Column (3D-DDTC) detectors fabricated at FBK, Trento. This detector concept represents the evolution of the previous 3D-STC detectors towards full 3D detectors, and is expected to achieve a performance which is comparable to standard 3D detectors, but with a simpler fabrication process. Measurements were performed on detectors in the microstrip configuration coupled to the ATLAS ABCD3T binary readout. This paper reports spatially resolved signal efficiency tests made with a pulsed infrared laser setup and charge collection efficiency tests made with a Beta source.

  12. Marine sponges: a potential source of eco-friendly antifouling compounds

    Digital Repository Service at National Institute of Oceanography (India)

    Wagh, A.B.; Thakur, N.L.; Anil, A.C.; Venkat, K.

    biocides have environmental concerns. In view of this, the search for eco-friendly antifouling protocols has gained momentum. The sourcing of such antifouling compounds has often been explored in marine organisms. This paper reviews the efforts in this domain...

  13. STRESS TESTS FOR VIDEOSTREAMING SERVICES BASED ON RTSP PROTOCOL

    Directory of Open Access Journals (Sweden)

    Gabriel Elías Chanchí Golondrino

    2015-11-01

    Full Text Available Video-streaming is a technology with major implications these days in contexts as diverse as education, health and the business sector, owing to the ease it provides for remote access to live or recorded media content, allowing communication regardless of geographic location. One standard protocol that enables implementation of this technology is the Real Time Streaming Protocol (RTSP). However, since most application servers and Internet services are supported on HTTP requests, very little research has been done on tools for carrying out stress tests on streaming servers. This paper presents a stress measuring tool called Hermes, developed in Python, which allows calculation of response times for establishing RTSP connections to streaming servers, as well as obtaining RAM consumption and CPU usage data from these servers. Hermes was deployed in a video-streaming environment where stress testing was carried out on the LIVE555 server, using background calls to the VLC and OpenRTSP open source clients.
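
    Hermes itself is not shown in the paper, so the sketch below only illustrates the kind of measurement it performs: timing a TCP connect plus one RTSP OPTIONS round trip against a streaming server such as LIVE555. The host, port and stream path are placeholders.

```python
import socket
import time

# Time the TCP connect plus one RTSP OPTIONS round trip against a
# streaming server (e.g. LIVE555). Host/port/path are placeholders.
HOST, PORT, PATH = "192.168.1.10", 554, "rtsp://192.168.1.10/stream"

request = (f"OPTIONS {PATH} RTSP/1.0\r\n"
           "CSeq: 1\r\n"
           "User-Agent: hermes-sketch\r\n\r\n")

start = time.perf_counter()
with socket.create_connection((HOST, PORT), timeout=5) as sock:
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096).decode("ascii", errors="replace")
elapsed = (time.perf_counter() - start) * 1000.0

print(f"first line: {response.splitlines()[0]}")
print(f"round-trip time: {elapsed:.1f} ms")
```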

  14. Understanding protocol performance: impact of test performance.

    Science.gov (United States)

    Turner, Robert G

    2013-01-01

    This is the second of two articles that examine the factors that determine protocol performance. The objective of these articles is to provide a general understanding of protocol performance that can be used to estimate performance, establish limits on performance, decide if a protocol is justified, and ultimately select a protocol. The first article was concerned with protocol criterion and test correlation. It demonstrated the advantages and disadvantages of the different criteria when all tests had the same performance, and it examined the impact of increasing test correlation on protocol performance and the characteristics of the different criteria. The present article examines the impact on protocol performance when the individual tests in a protocol have different performance. This is evaluated for different criteria and test correlations, and the results of the two articles are combined and summarized. A mathematical model is used to calculate protocol performance for different protocol criteria and test correlations when there are small to large variations in the performance of the individual tests in the protocol. The performance of the individual tests that make up a protocol has a significant impact on the performance of the protocol. As expected, the better the performance of the individual tests, the better the performance of the protocol. Many of the characteristics of the different criteria are relatively independent of the variation in the performance of the individual tests. However, increasing test variation degrades some criterion advantages and causes a new disadvantage to appear. This negative impact increases as test variation increases and as more tests are added to the protocol. Best protocol performance is obtained when individual tests are uncorrelated and have the same performance. In general, the greater the variation in the performance of tests in the protocol, the more detrimental this variation is to protocol performance. Since this negative impact is increased as
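
    A minimal version of such a mathematical model, under the assumption of independent tests, computes protocol-level hit and false-alarm rates for a loose criterion (positive if any test is positive) and a strict criterion (positive only if all tests are positive). The per-test values below are invented for illustration.

```python
from math import prod

# Protocol-level hit and false-alarm rates for two simple criteria over
# independent tests: "loose" = positive if any test is positive,
# "strict" = positive only if every test is positive.
hit_rates = [0.90, 0.80, 0.70]   # per-test sensitivity (invented)
fa_rates = [0.15, 0.10, 0.05]    # per-test false-alarm rate (invented)

def loose(rates):
    return 1 - prod(1 - r for r in rates)

def strict(rates):
    return prod(rates)

print(f"loose:  hit={loose(hit_rates):.3f}  fa={loose(fa_rates):.3f}")
print(f"strict: hit={strict(hit_rates):.3f}  fa={strict(fa_rates):.3f}")
```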

  15. Preparation And Characterization Of Modified Calcium Oxide From Natural Sources And Their Application In The Transesterification Of Palm Oil

    Directory of Open Access Journals (Sweden)

    Aqliliriana

    2015-08-01

    Full Text Available Abstract Calcium oxide catalysts were prepared from natural calcium sources such as limestone and mud creeper shell, and their catalytic activities were evaluated in the transesterification of palm oil. The raw material, mainly composed of calcium carbonate, can easily be converted to calcium oxide (CaO) by calcination above 1000 K for a few hours. Abundant, cheap sources, benign character, high conversion and non-toxicity are the main advantages of these catalysts. The catalysts were characterized by XRF, TGA, XRD, CO2-TPD, SEM and BET methods. Thermal decomposition of CaCO3 produces CaO, which is then converted into calcium hydroxide (Ca(OH)2) via a simple hydration technique. Under optimum reaction conditions (methanol-to-oil ratio 15:1, catalyst loading 3 wt.%, reaction temperature 338 K for 5 hours), the highest conversions of palm oil to methyl ester recorded were 98% and 94% when using modified limestone and mud creeper shell, respectively. The results showed an increment of up to 80% when using the modified catalysts, with characterization results showing high basicity and surface area. Hence, promising materials via a simple and cheap method can be achieved.

  16. Static Validation of Security Protocols

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, P.

    2005-01-01

    We methodically expand protocol narrations into terms of a process algebra in order to specify some of the checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we demonstrate that these techniques suffice to identify several authentication flaws in symmetric and asymmetric key protocols such as Needham-Schroeder symmetric key, Otway-Rees, Yahalom, Andrew secure RPC, Needham-Schroeder asymmetric key, and Beller-Chang-Yacobi MSR...

  17. Characterization of continuous and pulsed emission modes of a hybrid micro focus x-ray source for medical imaging applications

    International Nuclear Information System (INIS)

    Ghani, Muhammad U.; Wong, Molly D.; Ren, Liqiang; Wu, Di; Zheng, Bin; Rong, John X.; Wu, Xizeng; Liu, Hong

    2017-01-01

    The aim of this study was to quantitatively characterize a micro focus x-ray tube that can operate in both continuous and pulsed emission modes. The micro focus x-ray source (Model L9181-06, Hamamatsu Photonics, Japan) has a varying focal spot size ranging from 16 µm to 50 µm as the source output power changes from 10 to 39 W. We measured the source output, beam quality, focal spot sizes, kV accuracy, spectra shapes and spatial resolution. Source output was measured using an ionization chamber for various tube voltages (kVs) with varying current (µA) and distances. The beam quality was measured in terms of half value layer (HVL), kV accuracy was measured with a non-invasive kV meter, and the spectra were measured using a compact integrated spectrometer system. The focal spot sizes were measured using a slit method with a CCD detector with a pixel pitch of 22 µm. The spatial resolution was quantitatively measured using the slit method with a CMOS flat panel detector with a 50 µm pixel pitch, and compared to the qualitative results obtained by imaging a contrast bar pattern. The focal spot sizes in the vertical direction were smaller than those in the horizontal direction, the impact of which was visible when comparing the spatial resolution values. Our analyses revealed that both emission modes yield comparable imaging performances in terms of beam quality, spectra shape and spatial resolution. There were no significant differences, thus providing the motivation for future studies to design and develop stable and robust cone beam imaging systems for various diagnostic applications. - Highlights: • A micro focus x-ray source that operates in both continuous and pulsed emission modes was quantitatively characterized. • The source output, beam quality, focal spot measurements, kV accuracy, spectra analyses and spatial resolution were measured. • Our analyses revealed that both emission modes yield comparable imaging performances in terms of beam

  18. Characterization of continuous and pulsed emission modes of a hybrid micro focus x-ray source for medical imaging applications

    Energy Technology Data Exchange (ETDEWEB)

    Ghani, Muhammad U.; Wong, Molly D.; Ren, Liqiang; Wu, Di; Zheng, Bin [Center for Biomedical Engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States); Rong, John X. [Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030 (United States); Wu, Xizeng [Department of Radiology, University of Alabama at Birmingham, Birmingham, AL 35249 (United States); Liu, Hong, E-mail: liu@ou.edu [Center for Biomedical Engineering and School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019 (United States)

    2017-05-01

    The aim of this study was to quantitatively characterize a micro focus x-ray tube that can operate in both continuous and pulsed emission modes. The micro focus x-ray source (Model L9181-06, Hamamatsu Photonics, Japan) has a varying focal spot size ranging from 16 µm to 50 µm as the source output power changes from 10 to 39 W. We measured the source output, beam quality, focal spot sizes, kV accuracy, spectra shapes and spatial resolution. Source output was measured using an ionization chamber for various tube voltages (kVs) with varying current (µA) and distances. The beam quality was measured in terms of half value layer (HVL), kV accuracy was measured with a non-invasive kV meter, and the spectra were measured using a compact integrated spectrometer system. The focal spot sizes were measured using a slit method with a CCD detector with a pixel pitch of 22 µm. The spatial resolution was quantitatively measured using the slit method with a CMOS flat panel detector with a 50 µm pixel pitch, and compared to the qualitative results obtained by imaging a contrast bar pattern. The focal spot sizes in the vertical direction were smaller than those in the horizontal direction, the impact of which was visible when comparing the spatial resolution values. Our analyses revealed that both emission modes yield comparable imaging performances in terms of beam quality, spectra shape and spatial resolution. There were no significant differences, thus providing the motivation for future studies to design and develop stable and robust cone beam imaging systems for various diagnostic applications. - Highlights: • A micro focus x-ray source that operates in both continuous and pulsed emission modes was quantitatively characterized. • The source output, beam quality, focal spot measurements, kV accuracy, spectra analyses and spatial resolution were measured. • Our analyses revealed that both emission modes yield comparable imaging performances in terms of beam

  19. Genetic characterization of Arcobacter isolates from various sources.

    Science.gov (United States)

    Shah, A H; Saleha, A A; Zunita, Z; Cheah, Y K; Murugaiyah, M; Korejo, N A

    2012-12-07

    Arcobacter is receiving increased attention due to its detection in a wide host range and in foods of animal origin. The objective of this study was to determine the prevalence of Arcobacter spp. in various sources at farm level and in beef retailed in markets in Malaysia, and to assess the genetic relatedness among the isolates. A total of 273 samples from dairy cattle farms, including cattle (n=120), floor (n=30), water (n=18) and milk (n=105), as well as 148 beef samples collected from retail markets, were studied. The overall prevalence of Arcobacter in the various sources was 15% (63/421). Source-wise, the detection rate of Arcobacter spp. was 26.66% (8/30) in floor, 26.3% (39/148) in beef, 11.11% (2/18) in water, 7.6% (8/105) in milk and 6.66% (8/120) in cattle samples. Arcobacter butzleri was the most frequently isolated species; however, 75%, 66.7%, 53.8%, 50% and 12.5% of samples from floor, milk, beef, water and cattle, respectively, carried more than one species simultaneously. One cattle sample (12.5%) and one beef sample (2.5%) were found to carry only a single Arcobacter species, A. skirrowii. Typing of the Arcobacter isolates was done through pulsed-field gel electrophoresis (PFGE) after digestion with the EagI restriction endonuclease (RE). Digestion of the genomic DNA of Arcobacter from various sources yielded 12 major clusters (≥ 50% similarity) comprising 29 different band patterns. A number of closely related A. butzleri isolates were found among the beef samples, which indicates cross-contamination from a common type of Arcobacter. Fecal shedding of Arcobacter by healthy animals can contaminate water and milk, which may act as sources of infection in humans. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A class-chest for deriving transport protocols

    Energy Technology Data Exchange (ETDEWEB)

    Strayer, W.T.

    1996-10-01

    Development of new transport protocols or protocol algorithms suffers from the complexity of the environment in which they are intended to run. Modeling techniques attempt to avoid this by simulating the environment. Another approach to promoting rapid prototyping of protocols and protocol algorithms is to provide a pre-built infrastructure that is common to transport protocols, so that the focus is placed on the protocol-specific aspects. The Meta-Transport Library is a library of C++ base classes that implement or abstract out the mundane functions of a protocol; new protocol implementations are derived from these base classes. The result is a fully viable user-level transport protocol implementation, with emphasis on modularity. The collection of base classes forms a "class-chest" of tools from which protocols can be developed and studied with as little change to a normal UNIX environment as possible.
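
    The Meta-Transport Library itself is C++; purely to illustrate the derive-from-a-class-chest pattern in a compact, language-neutral way, the hypothetical sketch below puts the mundane machinery in a base class and leaves only a protocol-specific policy to the derived class.

```python
from abc import ABC, abstractmethod

# Hypothetical miniature of the class-chest idea: the base class supplies
# the mundane machinery (buffering, send loop) and a derived class fills
# in only the protocol-specific policy.
class TransportBase(ABC):
    def __init__(self):
        self.outbox = []

    def send(self, data: bytes):
        for segment in self.segment(data):
            self.outbox.append(segment)

    @abstractmethod
    def segment(self, data: bytes):
        """Protocol-specific segmentation policy."""

class FixedSizeTransport(TransportBase):
    MSS = 4   # tiny segment size, for demonstration only

    def segment(self, data: bytes):
        return [data[i:i + self.MSS] for i in range(0, len(data), self.MSS)]

t = FixedSizeTransport()
t.send(b"hello world")
print(t.outbox)   # [b'hell', b'o wo', b'rld']
```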

  1. Effective dose comparison between protocols stitched and usual protocols in dental cone beam CT for complete arcade

    International Nuclear Information System (INIS)

    Soares, M. R.; Maia, A. F.; Batista, W. O. G.; Lara, P. A.

    2014-08-01

    To visualize the complete dental arcade, dental radiology offers two distinct approaches: [1] protocols whose field of view encompasses the entire arch in a single acquisition (single Fov), or [2] protocols with multiple fields of view (Fov) which together encompass the entire arch (stitched Fovs). The objective of this study is to evaluate effective dose values in examination protocols for the complete dental arcade available on different units with these two options. For this, a female anthropomorphic phantom manufactured by Radiology Support Devices, with twenty-six thermoluminescent dosimeters inserted at relevant organs and positions, was used. The phantom was irradiated under clinical conditions. The following protocols were assessed and compared: [a] 14.0 cm x 8.5 cm and [b] 8.5 cm x 8.5 cm (Gendex Tomography GXCB 500), [c] stitched protocol for the jaw combining three volumes of 5.0 cm x 3.7 cm (Kodak 9000 3D scanner), [d] stitched Fovs protocol 5.0 cm x 8.0 cm (Planmeca Pro Max 3D), and [e] single Fov technique 14 cm x 8 cm (i-CAT Classical). Our results for the effective dose ranged between 43.1 and 111.1 µSv for the single Fov techniques and between 44.5 and 236.2 µSv for the stitched Fovs techniques. The protocol with the highest estimated effective dose was [d], while the lowest was registered for [a]. These results demonstrate that the stitched Fov protocol generated on the Kodak 9000 3D machine, applied to the upper dental arch, has an effective dose practically equal to that obtained with the extended-diameter protocol [a], which evaluates the upper and lower arcades in a single image. They also demonstrate that protocol [d] gives an estimate five times higher than protocol [a]. Thus, we conclude that in practical terms the stitched Fovs protocol [c] does not present dosimetric advantages over the other protocols. (Author)

  2. Effective dose comparison between protocols stitched and usual protocols in dental cone beam CT for complete arcade

    Energy Technology Data Exchange (ETDEWEB)

    Soares, M. R.; Maia, A. F. [Universidade Federal de Sergipe, Departamento de Fisica, Cidade Universitaria Prof. Jose Aloisio de Campos, Marechal Rondon s/n, Jardim Rosa Elze, 49-100000 Sao Cristovao, Sergipe (Brazil); Batista, W. O. G. [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho, Salvador, 40301015 Bahia (Brazil); Lara, P. A., E-mail: wilsonottobatista@gmail.com [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    To visualize the complete dental arcade, dental radiology offers two distinct approaches: [1] protocols whose field of view encompasses the entire arch in a single acquisition (single Fov), or [2] protocols with multiple fields of view (Fov) which together encompass the entire arch (stitched Fovs). The objective of this study is to evaluate effective dose values in examination protocols for the complete dental arcade available on different units with these two options. For this, a female anthropomorphic phantom manufactured by Radiology Support Devices, with twenty-six thermoluminescent dosimeters inserted at relevant organs and positions, was used. The phantom was irradiated under clinical conditions. The following protocols were assessed and compared: [a] 14.0 cm x 8.5 cm and [b] 8.5 cm x 8.5 cm (Gendex Tomography GXCB 500), [c] stitched protocol for the jaw combining three volumes of 5.0 cm x 3.7 cm (Kodak 9000 3D scanner), [d] stitched Fovs protocol 5.0 cm x 8.0 cm (Planmeca Pro Max 3D), and [e] single Fov technique 14 cm x 8 cm (i-CAT Classical). Our results for the effective dose ranged between 43.1 and 111.1 µSv for the single Fov techniques and between 44.5 and 236.2 µSv for the stitched Fovs techniques. The protocol with the highest estimated effective dose was [d], while the lowest was registered for [a]. These results demonstrate that the stitched Fov protocol generated on the Kodak 9000 3D machine, applied to the upper dental arch, has an effective dose practically equal to that obtained with the extended-diameter protocol [a], which evaluates the upper and lower arcades in a single image. They also demonstrate that protocol [d] gives an estimate five times higher than protocol [a]. Thus, we conclude that in practical terms the stitched Fovs protocol [c] does not present dosimetric advantages over the other protocols. (Author)

  3. A Catalog of Moment Tensors and Source-type Characterization for Small Events at Uturuncu Volcano, Bolivia

    Science.gov (United States)

    Alvizuri, C. R.; Tape, C.

    2015-12-01

    We present a catalog of full seismic moment tensors for 63 events from Uturuncu volcano in Bolivia. The events were recorded during 2011-2012 in the PLUTONS seismic array of 24 broadband stations. Most events had magnitudes between 0.5 and 2.0 and did not generate discernible surface waves; the largest event was Mw 2.8. For each event we computed the misfit between observed and synthetic waveforms, and we also used first-motion polarity measurements to reduce the number of possible solutions. Each moment tensor solution was obtained using a grid search over the six-dimensional space of moment tensors. For each event we characterize the variation of moment tensor source type by plotting the misfit function in eigenvalue space, represented by a lune. We plot the optimal solutions for the 63 events on the lune in order to identify three subsets of the catalog: (1) a set of isotropic events, (2) a set of tensional crack events, and (3) a swarm of events southeast of the volcanic center that appear to be double couples. The occurrence of positively isotropic events is consistent with other published results from volcanic and geothermal regions. Several of these previous results, as well as our results, cannot be interpreted within the context of either an oblique opening crack or a crack-plus-double-couple model; instead they require a multiple-process source model. Our study emphasizes the importance of characterizing uncertainties for full moment tensors, and it provides strong support for isotropic events at Uturuncu volcano.

  4. Automatic Validation of Protocol Narration

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Degano, Pierpaolo

    2003-01-01

    We perform a systematic expansion of protocol narrations into terms of a process algebra in order to make precise some of the detailed checks that need to be made in a protocol. We then apply static analysis technology to develop an automatic validation procedure for protocols. Finally, we...

  5. Efficient secure two-party protocols

    CERN Document Server

    Hazay, Carmit

    2010-01-01

    The authors present a comprehensive study of efficient protocols and techniques for secure two-party computation -- both general constructions that can be used to securely compute any functionality, and protocols for specific problems of interest. The book focuses on techniques for constructing efficient protocols and proving them secure. In addition, the authors study different definitional paradigms and compare the efficiency of protocols achieved under these different definitions.The book opens with a general introduction to secure computation and then presents definitions of security for a

  6. Using Ovsynch protocol versus Cosynch protocol in dairy cows

    Directory of Open Access Journals (Sweden)

    Ion Valeriu Caraba

    2013-10-01

    Full Text Available As research on the reproductive physiology and endocrinology surrounding the estrous cycle in dairy cattle has been compiled, several estrous synchronization programs have been developed for use with dairy cows. These include several programs that facilitate the mass breeding of all animals at a predetermined time (timed-AI) rather than upon detection of estrus. We studied 15 dairy cows that were synchronized by the Ovsynch and Cosynch programs. The estrus response for cows in the Ovsynch protocol was 63%, and pregnancy per insemination at 60 days was 25%. The estrus response for cows in the Cosynch protocol was 57%, and pregnancy per insemination at 60 days was 57%. Synchronization of ovulation using Ovsynch protocols can provide an effective way to manage reproduction in lactating dairy cows by eliminating the need for estrus detection. These are efficient management programs for TAI of dairy cows that are able to reduce both the labour costs and the extra handling associated with daily estrus detection and AI.

  7. The CCSDS Next Generation Space Data Link Protocol (NGSLP)

    Science.gov (United States)

    Kazz, Greg J.; Greenberg, Edward

    2014-01-01

    The CCSDS space link protocols, i.e., Telemetry (TM), Telecommand (TC) and Advanced Orbiting Systems (AOS), were developed in the early growth period of the space program. They were designed to meet the needs of the early missions, be compatible with the available technology, and focus on their specific link environments. Digital technology was in its infancy, and spacecraft power and mass issues enforced severe constraints on flight implementations. Therefore the Telecommand protocol was designed around a simple Bose-Chaudhuri-Hocquenghem (BCH) code that provided little coding gain and limited error detection but was relatively simple to decode on board. The infusion of the concatenated Convolutional and Reed-Solomon codes for telemetry was a major milestone and transformed telemetry applications by providing the ability to utilize the telemetry link more efficiently and to deliver user data more reliably. The ability to significantly lower the error rates on the telemetry links enabled the use of packet telemetry and data compression. The infusion of high performance codes for telemetry was enabled by the advent of digital processing, but it was limited to earth based systems supporting telemetry. The latest CCSDS space link protocol, Proximity-1, was developed in early 2000 to meet the needs of short-range, bi-directional, fixed or mobile radio links characterized by short time delays, moderate but not weak signals, and short independent sessions. Proximity-1 has been successfully deployed on both NASA and ESA missions at Mars and is planned to be utilized by all Mars missions in development. A new age has arisen, one that now provides the means to perform advanced digital processing in spacecraft systems, enabling the use of improved transponders, digital correlators, and high performance forward error correcting codes for all communications links. Flight transponders utilizing digital technology have emerged and can efficiently provide the means to make the

  8. Chemical characterization of atmospheric particles and source apportionment in the vicinity of a steelmaking industry

    International Nuclear Information System (INIS)

    Almeida, S.M.; Lage, J.; Fernández, B.; Garcia, S.; Reis, M.A.; Chaves, P.C.

    2015-01-01

    The objective of this work was to provide a chemical characterization of atmospheric particles collected in the vicinity of a steelmaking industry and to identify the sources that affect PM10 levels. A total of 94 PM samples were collected in two sampling campaigns that occurred in February and June/July of 2011. PM2.5 and PM2.5–10 were analyzed for a total of 22 elements by Instrumental Neutron Activation Analysis and Particle Induced X-ray Emission. The concentrations of water soluble ions in PM10 were measured by Ion Chromatography and Indophenol-Blue Spectrophotometry. The Positive Matrix Factorization receptor model was used to identify sources of particulate matter and to determine their mass contribution to PM10. Seven main groups of sources were identified: marine aerosol identified by Na and Cl (22%), steelmaking and sinter plant represented by As, Cr, Cu, Fe, Ni, Mn, Pb, Sb and Zn (11%), sinter plant stack identified by NH4+, K and Pb (12%), an unidentified Br source (1.8%), secondary aerosol from coke making and blast furnace (19%), fugitive emissions from the handling of raw material, sinter plant and vehicle dust resuspension identified by Al, Ca, La, Si, Ti and V (14%), and sinter plant and blast furnace associated essentially with Fe and Mn (21%). - Highlights: • Emissions from steelworks are very complex. • The largest steelworks contribution to PM10 was from the blast furnace and sinter plant. • Sinter plant stack emissions contributed 12% of the PM10 mass. • Secondary aerosol from coke making and blast furnace contributed 19% of the PM10. • Fugitive dust emissions contribute strongly to the PM10 mass
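
    Positive Matrix Factorization decomposes the samples-by-species concentration matrix into non-negative source contributions and profiles. As a rough stand-in only (plain NMF does not weight residuals by measurement uncertainty, as dedicated PMF codes such as EPA PMF do), a sketch with synthetic data:

```python
import numpy as np
from sklearn.decomposition import NMF

# X: samples x species concentration matrix (synthetic placeholder data).
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(94, 22))   # 94 samples, 22 elements

# Factor X ~ G @ F with non-negative G (contributions) and F (profiles).
model = NMF(n_components=7, init="nndsvda", max_iter=1000)
G = model.fit_transform(X)        # source contributions per sample
F = model.components_             # chemical profile of each source

# Mass contribution of each factor, as a fraction of reconstructed mass.
mass = (G * F.sum(axis=1)).sum(axis=0)
print(np.round(100 * mass / mass.sum(), 1))   # % contribution per factor
```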

  9. Analysis of agreement between cardiac risk stratification protocols applied to participants of a center for cardiac rehabilitation

    Directory of Open Access Journals (Sweden)

    Ana A. S. Santos

    2016-01-01

    Full Text Available ABSTRACT Background Cardiac risk stratification is related to the risk of the occurrence of events induced by exercise. Despite the existence of several protocols to calculate risk stratification, it remains unknown whether these protocols agree with one another. Objective To evaluate the agreement between existing protocols on cardiac risk rating in cardiac patients. Method The records of 50 patients from a cardiac rehabilitation program were analyzed, from which the following information was extracted: age, sex, weight, height, clinical diagnosis, medical history, risk factors, associated diseases, and the results from the most recent laboratory and complementary tests performed. This information was used for risk stratification of the patients according to the protocols of the American College of Sports Medicine, the Brazilian Society of Cardiology, the American Heart Association, the protocol designed by Frederic J. Pashkow, the American Association of Cardiovascular and Pulmonary Rehabilitation, the Société Française de Cardiologie, and the Sociedad Española de Cardiología. Descriptive statistics were used to characterize the sample, and the agreement between the protocols was assessed using the Kappa coefficient, with differences considered significant at the 5% level. Results Of the 21 agreement analyses, 12 were considered significant between the protocols used for risk classification, with nine classified as moderate and three as low. No agreements were classified as excellent. Different proportions were observed in each risk category, with significant differences between the protocols for all risk categories. Conclusion The agreements between the protocols were considered low and moderate, and the risk proportions differed between protocols.
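
    Pairwise agreement of this kind is typically computed with the chance-corrected Cohen's Kappa; a minimal sketch with illustrative labels (not the study's data):

```python
from sklearn.metrics import cohen_kappa_score

# Risk categories assigned to the same 50 patients by two hypothetical
# protocols (illustrative labels, not the study's data).
protocol_a = ["low", "moderate", "high", "moderate", "low"] * 10
protocol_b = ["low", "high",     "high", "moderate", "low"] * 10

kappa = cohen_kappa_score(protocol_a, protocol_b)
print(f"kappa = {kappa:.2f}")  # 0.41-0.60 is conventionally read as moderate
```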

  10. Overview of waste isolation safety assessment program and description of source term characterization task at PNL

    International Nuclear Information System (INIS)

    Bradley, D.

    1977-01-01

    A project is being conducted to develop and illustrate the methods and obtain the data necessary to assess the safety of long-term disposal of high-level radioactive waste in geologic formations. The methods and data will initially focus on generic geologic isolation systems but will ultimately be applied to the long-term safety assessment of specific candidate sites that are selected in the NWTS Program. The activities of the Waste Isolation Safety Assessment Program (WISAP) are divided into six tasks: (1) Safety Assessment Concepts and Methods, (2) Disruptive Event Analysis, (3) Source Characterization, (4) Transport Modeling, (5) Transport Data and (6) Societal Acceptance

  11. Development of a data entry auditing protocol and quality assurance for a tissue bank database.

    Science.gov (United States)

    Khushi, Matloob; Carpenter, Jane E; Balleine, Rosemary L; Clarke, Christine L

    2012-03-01

    Human transcription error is an acknowledged risk when extracting information from paper records for entry into a database. For a tissue bank, it is critical that accurate data are provided to researchers with approved access to tissue bank material. The challenges of tissue bank data collection include manual extraction of data from complex medical reports that are accessed from a number of sources and that differ in style and layout. As a quality assurance measure, the Breast Cancer Tissue Bank (http://www.abctb.org.au) has implemented an auditing protocol and, in order to execute the process efficiently, has developed an open source database plug-in tool (eAuditor) to assist in auditing of data held in our tissue bank database. Using eAuditor, we have identified that human entry errors range from 0.01% when entering donors' clinical follow-up details to 0.53% when entering pathological details, highlighting the importance of an audit protocol tool such as eAuditor in a tissue bank database. eAuditor was developed and tested on the Caisis open source clinical-research database; however, it can be integrated into other databases where similar functionality is required.

  12. A family of multi-party authentication protocols

    NARCIS (Netherlands)

    Cremers, C.J.F.; Mauw, S.

    2006-01-01

    We introduce a family of multi-party authentication protocols and discuss six novel protocols, which are members of this family. The first three generalize the well-known Needham-Schroeder-Lowe public-key protocol, the Needham-Schroeder private-key protocol, and the Bilateral Key Exchange protocol.

  13. MeshTree: A Delay optimised Overlay Multicast Tree Building Protocol

    OpenAIRE

    Tan, Su-Wei; Waters, A. Gill; Crawford, John

    2005-01-01

    We study decentralised low delay degree-constrained overlay multicast tree construction for single source real-time applications. This optimisation problem is NP-hard even if computed centrally. We identify two problems in traditional distributed solutions, namely the greedy problem and delay-cost trade-off. By offering solutions to these problems, we propose a new self-organising distributed tree building protocol called MeshTree. The main idea is to embed the delivery tree in a degree-bound...

  14. Development and characterization of a tunable ultrafast X-ray source via inverse-Compton-scattering

    International Nuclear Information System (INIS)

    Jochmann, Axel

    2014-01-01

    Ultrashort, nearly monochromatic hard X-ray pulses enrich the understanding of the dynamics and function of matter, e.g., the motion of atomic structures associated with ultrafast phase transitions, structural dynamics and (bio)chemical reactions. Inverse Compton backscattering of intense laser pulses from relativistic electrons not only allows for the generation of bright X-ray pulses which can be used in a pump-probe experiment, but also for the investigation of the electron beam dynamics at the interaction point. The focus of this PhD work lies on the detailed understanding of the kinematics during the interaction of the relativistic electron bunch and the laser pulse, in order to quantify the influence of various experimental parameters on the emitted X-ray radiation. The experiment was conducted at the ELBE center for high power radiation sources using the ELBE superconducting linear accelerator and the DRACO Ti:sapphire laser system. The combination of these two state-of-the-art apparatuses guaranteed the control and stability of the interacting beam parameters throughout the measurement. The emitted X-ray spectra were detected with a pixelated detector of 1024 by 256 elements (each 26 μm by 26 μm) to achieve an unprecedented spatial and energy resolution for a full characterization of the emitted spectrum, revealing parameter influences and correlations of both interacting beams. In this work the influence of the electron beam energy, the electron beam emittance, the laser bandwidth and the energy-angle correlation on the spectra of the backscattered X-rays is quantified. A rigorous statistical analysis comparing experimental data to ab-initio 3D simulations enabled, e.g., the extraction of the angular distribution of electrons with 1.5% accuracy and, in total, provides predictive capability for the future high brightness hard X-ray source PHOENIX (Photon electron collider for Narrow bandwidth Intense X-rays) and potential all-optical gamma-ray sources. The results

  15. Free-space measurement-device-independent quantum-key-distribution protocol using decoy states with orbital angular momentum

    Science.gov (United States)

    Wang, Le; Zhao, Sheng-Mei; Gong, Long-Yan; Cheng, Wei-Wen

    2015-12-01

    In this paper, we propose a measurement-device-independent quantum-key-distribution (MDI-QKD) protocol using orbital angular momentum (OAM) in free space links, named the OAM-MDI-QKD protocol. In the proposed protocol, the OAM states of photons, instead of polarization states, are used as the information carriers to avoid reference frame alignment, the decoy-state method is adopted to overcome the security loophole caused by the weak coherent pulse source, and a highly efficient OAM sorter is adopted as the measurement tool for Charlie to obtain the output OAM state. Here, Charlie may be an untrusted third party. The results show that the authorized users, Alice and Bob, can distill a secret key from Charlie's successful measurements, and the key generation performance is slightly better than that of the polarization-based MDI-QKD protocol in the two-dimensional OAM case. Simultaneously, Alice and Bob can reduce the number of bit flips in secure key distillation. It is indicated that a higher key generation rate could be obtained with a high-dimensional OAM-MDI-QKD protocol because of the unlimited degrees of freedom of OAM states. Moreover, the results show that the key generation rate and the transmission distance decrease as the strength of atmospheric turbulence (AT) and the link attenuation grow. In addition, the decoy states used in the proposed protocol achieve considerably good performance without the need for an ideal source. Project supported by the National Natural Science Foundation of China (Grant Nos. 61271238 and 61475075), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20123223110003), the Natural Science Research Foundation for Universities of Jiangsu Province of China (Grant No. 11KJA510002), the Open Research Fund of Key Laboratory of Broadband Wireless Communication and Sensor Network Technology, Ministry of Education, China (Grant No. NYKL2015011), and the

  16. Potential of Wake-Up Radio-Based MAC Protocols for Implantable Body Sensor Networks (IBSN)—A Survey

    Directory of Open Access Journals (Sweden)

    Vignesh Raja Karuppiah Ramachandran

    2016-11-01

    Full Text Available With the advent of nano-technology, medical sensors and devices are becoming highly miniaturized. Consequently, the number of sensors and medical devices being implanted to accurately monitor and diagnose a disease is increasing. By measuring the symptoms and controlling a medical device as close as possible to the source, these implantable devices are able to save lives. A wireless link between medical sensors and implantable medical devices is essential in the case of closed-loop medical devices, in which symptoms of the diseases are monitored by sensors that are not placed in close proximity of the therapeutic device. Medium Access Control (MAC) is crucial to make it possible for several medical devices to communicate using a shared wireless medium in such a way that minimum delay, maximum throughput, and increased network life-time are guaranteed. To guarantee this Quality of Service (QoS), the MAC protocols control the main sources of limited-resource wastage, namely idle-listening, packet collisions, over-hearing, and packet loss. Traditional MAC protocols designed for body sensor networks are not directly applicable to Implantable Body Sensor Networks (IBSN) because of the dynamic nature of the radio channel within the human body and the strict QoS requirements of IBSN applications. Although numerous MAC protocols are available in the literature, the majority of them are designed for Body Sensor Networks (BSN) and Wireless Sensor Networks (WSN). To the best of our knowledge, there is so far no research paper that explores the impact of these MAC protocols specifically for IBSN. MAC protocols designed for implantable devices are still in their infancy, and one of their most challenging objectives is to be ultra-low-power. One of the technological solutions to achieve this objective is to integrate the concept of Wake-up radio (WuR) into the MAC design. In this survey, we present a taxonomy of MAC protocols based on their use of Wu

  17. Detection and characterization of lightning-based sources using continuous wavelet transform: application to audio-magnetotellurics

    Science.gov (United States)

    Larnier, H.; Sailhac, P.; Chambodut, A.

    2018-01-01

    Atmospheric electromagnetic waves created by global lightning activity contain information about electrical processes of the inner and the outer Earth. Large signal-to-noise ratio events are particularly interesting because they convey information about electromagnetic properties along their path. We introduce a new methodology to automatically detect and characterize lightning-based waves using a time-frequency decomposition obtained through the application of the continuous wavelet transform. We focus specifically on three types of sources, namely, atmospherics, slow tails and whistlers, that cover the frequency range 10 Hz to 10 kHz. Each wave has distinguishable characteristics in the time-frequency domain due to source shape and dispersion processes. Our methodology allows automatic detection of each type of event in the time-frequency decomposition thanks to their specific signature. Horizontal polarization attributes are also recovered in the time-frequency domain. This procedure is first applied to synthetic extremely low frequency time-series with different signal-to-noise ratios to test for robustness. We then apply it to real data: three stations of audio-magnetotelluric data acquired in Guadeloupe, an overseas French territory. Most of the analysed atmospherics and slow tails display linear polarization, whereas the analysed whistlers are elliptically polarized. The diversity of lightning activity is finally analysed in an audio-magnetotelluric data processing framework, as used in subsurface prospecting, through estimation of the impedance response functions. We show that audio-magnetotelluric processing results depend mainly on the frequency content of the electromagnetic waves observed in the processed time-series, with an emphasis on the difference between morning and afternoon acquisition. Our new methodology based on the time-frequency signature of lightning-induced electromagnetic waves allows automatic detection and characterization of events in audio
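
    A minimal sketch of detecting a transient event from the modulus of a continuous wavelet transform, using PyWavelets; the synthetic signal and the thresholding scheme are illustrative assumptions, not the authors' detector:

```python
import numpy as np
import pywt

fs = 20000.0                       # 20 kHz sampling covers 10 Hz - 10 kHz
t = np.arange(0, 1.0, 1 / fs)
# Synthetic record: noise plus a short 8 kHz burst standing in for an event.
sig = 0.1 * np.random.randn(t.size)
burst = (t > 0.40) & (t < 0.41)
sig[burst] += np.sin(2 * np.pi * 8000 * t[burst])

# Continuous wavelet transform with a Morlet wavelet.
scales = np.geomspace(2, 2000, 80)
coef, freqs = pywt.cwt(sig, scales, "morl", sampling_period=1 / fs)

# Crude detector: flag times where the scalogram modulus exceeds a noise floor.
power = np.abs(coef)
threshold = power.mean() + 5 * power.std()
hits = np.where(power.max(axis=0) > threshold)[0]
if hits.size:
    print(f"event detected around t = {t[hits[0]]:.3f}-{t[hits[-1]]:.3f} s")
```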

  18. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and electron acceptor, of importance for understanding the performance properties of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study the bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure-morphology-property relationships when used in tandem with experimental results.

  19. Characterization and identification of air pollution sources in Metro Manila

    International Nuclear Information System (INIS)

    Santos, Flora L.; Pabroa, Preciosa Corazon B.; Racho, Joseph Michael D.; Morco, Ryan P.; Bautista VII, Angel T.; Bucal, Camille Grace D.

    2010-01-01

    Airborne particulate matter (PM10 and PM2.5) is a mixture of contributions from different pollutant sources, which can be of anthropogenic and/or natural origin. Identification and apportionment of pollutant sources is important for a better understanding of prevailing conditions in an area, so that better air quality management can be applied. Results have shown that in all the sampling sites a major fraction of pollutant sources comes from vehicular or traffic-oriented sources, comprising more than 30% of PM2.5. Of particular concern, especially to residents of the area, are the high Pb levels in Valenzuela City. In 2005, the annual mean level of PM10 Pb in Valenzuela was 0.267 μg/m3, while the other PNRI sampling sites registered annual mean levels between 0.033 and 0.085 μg/m3. The high Pb condition is reflected in the source apportionment studies, with Pb sources showing up in both the coarse (PM10-2.5) and the fine fraction (PM2.5). The CPF analysis plots of 2008 Pb levels in both the coarse and the fine fractions show patterns for probable sources in 2008. Further study of the location of battery recycling facilities and other possible sources of lead is needed to validate the results of the CPF determination. (author)

  20. Development of methodology for the characterization of radioactive sealed sources

    International Nuclear Information System (INIS)

    Ferreira, Robson de Jesus

    2010-01-01

    Sealed radioactive sources are widely used in many applications of nuclear technology in industry, medicine, research and other fields. The International Atomic Energy Agency (IAEA) estimates that there are tens of millions of sources in the world. In Brazil, the number is about 500 thousand sources, if the Americium-241 sources present in radioactive lightning rods and smoke detectors are included in the inventory. At the end of their useful life, most sources become disused, constitute radioactive waste, and are then termed spent sealed radioactive sources (SSRS). In Brazil, this waste is collected by the research institutes of the National Commission of Nuclear Energy and kept under centralized storage, awaiting definition of the final disposal route. The Waste Management Laboratory (WML) at the Nuclear and Energy Research Institute is the main storage center, having received by July 2010 about 14,000 disused sources, not including the tens of thousands of lightning rod and smoke detector sources. A program is underway in the WML to replace the original shieldings with a standard disposal package and to determine the radioisotope content and activity of each source. The identification of the radionuclides and the measurement of activities will be carried out with a well-type ionization chamber. This work aims to develop a methodology for determining the activity of the SSRS stored in the WML, in accordance with their geometry, and to determine the associated uncertainties. (author)

  1. High-pitch dual-source CT angiography of the whole aorta without ECG synchronisation: Initial experience

    International Nuclear Information System (INIS)

    Beeres, Martin; Schell, Boris; Mastragelopoulos, Aristidis; Kerl, Josef Matthias; Gruber-Rouh, Tatjana; Lee, Clara; Siebenhandl, Petra; Bodelle, Boris; Zangos, Stephan; Vogl, Thomas J.; Jacobi, Volkmar; Bauer, Ralf W.; Herrmann, Eva

    2012-01-01

    To investigate the feasibility, image quality and radiation dose of high-pitch dual-source CT angiography (CTA) of the whole aorta without ECG synchronisation. Three groups of 40 patients each underwent CTA either on a 16-slice CT (group 1) or on a dual-source CT device in conventional single-source mode (group 2) or in high-pitch mode with a pitch of 3.0 (group 3). The presence of motion or stair-step artefacts of the thoracic aorta was independently assessed by two readers. Subjective and objective scores of motion and artefacts were significantly reduced with the high-pitch examination protocol (p < 0.05). The imaging length was not significantly different, but the imaging time was significantly (p < 0.001) shorter in the high-pitch group (12.2 vs. 7.4 vs. 1.7 s for groups 1, 2 and 3). The ascending aorta and the coronary ostia were reliably evaluable, without motion artefacts, in all patients of group 3. High-pitch dual-source CT angiography of the whole aorta is feasible in unselected patients. As a significant advantage over regular-pitch protocols, motion-free imaging of the aorta is possible without ECG synchronisation. Thus, this CT mode has the potential to become a standard CT protocol before trans-catheter aortic valve implantation (TAVI). (orig.)

  2. Development of a field testing protocol for identifying Deepwater Horizon oil spill residues trapped near Gulf of Mexico beaches

    Science.gov (United States)

    Han, Yuling

    2018-01-01

    The Deepwater Horizon (DWH) accident, one of the largest oil spills in U.S. history, contaminated several beaches located along the Gulf of Mexico (GOM) shoreline. The residues from the spill still continue to be deposited on some of these beaches. Methods to track and monitor the fate of these residues require approaches that can differentiate the DWH residues from other types of petroleum residues. This is because, historically, the crude oil released from sources such as natural seeps and anthropogenic discharges has also deposited other types of petroleum residues on GOM beaches. Therefore, identifying the origin of these residues is critical for developing effective management strategies for monitoring the long-term environmental impacts of the DWH oil spill. Advanced fingerprinting methods that are currently used for identifying the source of oil spill residues require detailed laboratory studies, which can be cost-prohibitive. Also, most agencies typically use untrained workers or volunteers to conduct shoreline monitoring surveys, and these workers will not have access to advanced laboratory facilities. Furthermore, it is impractical to routinely fingerprint large volumes of samples that are collected after a major oil spill event, such as the DWH spill. In this study, we propose a simple field testing protocol that can identify DWH oil spill residues based on their unique physical characteristics. The robustness of the method is demonstrated by testing a variety of oil spill samples, and the results are verified by characterizing the samples using advanced chemical fingerprinting methods. The verification data show that the method yields results that are consistent with the results derived from advanced fingerprinting methods. The proposed protocol is a reliable, cost-effective, practical field approach for differentiating DWH residues from other types of petroleum residues. PMID:29329313

  3. Development of a field testing protocol for identifying Deepwater Horizon oil spill residues trapped near Gulf of Mexico beaches.

    Science.gov (United States)

    Han, Yuling; Clement, T Prabhakar

    2018-01-01

    The Deepwater Horizon (DWH) accident, one of the largest oil spills in U.S. history, contaminated several beaches located along the Gulf of Mexico (GOM) shoreline. The residues from the spill still continue to be deposited on some of these beaches. Methods to track and monitor the fate of these residues require approaches that can differentiate the DWH residues from other types of petroleum residues. This is because, historically, the crude oil released from sources such as natural seeps and anthropogenic discharges has also deposited other types of petroleum residues on GOM beaches. Therefore, identifying the origin of these residues is critical for developing effective management strategies for monitoring the long-term environmental impacts of the DWH oil spill. Advanced fingerprinting methods that are currently used for identifying the source of oil spill residues require detailed laboratory studies, which can be cost-prohibitive. Also, most agencies typically use untrained workers or volunteers to conduct shoreline monitoring surveys, and these workers will not have access to advanced laboratory facilities. Furthermore, it is impractical to routinely fingerprint large volumes of samples that are collected after a major oil spill event, such as the DWH spill. In this study, we propose a simple field testing protocol that can identify DWH oil spill residues based on their unique physical characteristics. The robustness of the method is demonstrated by testing a variety of oil spill samples, and the results are verified by characterizing the samples using advanced chemical fingerprinting methods. The verification data show that the method yields results that are consistent with the results derived from advanced fingerprinting methods. The proposed protocol is a reliable, cost-effective, practical field approach for differentiating DWH residues from other types of petroleum residues.

  4. Electron cyclotron resonance ion source plasma characterization by X-ray spectroscopy and X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Mascali, David, E-mail: davidmascali@lns.infn.it; Castro, Giuseppe; Celona, Luigi; Neri, Lorenzo; Gammino, Santo [INFN–Laboratori Nazionali del Sud, Via S. Sofia 62, 95125 Catania (Italy); Biri, Sándor; Rácz, Richárd; Pálinkás, József [Institute for Nuclear Research (Atomki), Hungarian Academy of Sciences, Bem tér 18/c, H-4026 Debrecen (Hungary); Caliri, Claudia [INFN–Laboratori Nazionali del Sud, Via S. Sofia 62, 95125 Catania (Italy); Università degli Studi di Catania, Dip.to di Fisica e Astronomia, via Santa Sofia 64, 95123 Catania (Italy); Romano, Francesco Paolo [INFN–Laboratori Nazionali del Sud, Via S. Sofia 62, 95125 Catania (Italy); CNR, Istituto per i Beni Archeologici e Monumentali, Via Biblioteca 4, 95124 Catania (Italy); Torrisi, Giuseppe [INFN–Laboratori Nazionali del Sud, Via S. Sofia 62, 95125 Catania (Italy); Università Mediterranea di Reggio Calabria, DIIES, Via Graziella, I-89100 Reggio Calabria (Italy)

    2016-02-15

    An experimental campaign aiming to investigate electron cyclotron resonance (ECR) plasma X-ray emission has recently been carried out at the ECRISs—Electron Cyclotron Resonance Ion Sources laboratory of Atomki, based on a collaboration between the Debrecen and Catania ECR teams. In a first series of measurements, X-ray spectroscopy was performed with silicon drift detectors and high-purity germanium detectors, characterizing the volumetric plasma emission. The purpose-built collimation system was suitable for direct plasma density evaluation, performed “on-line” during beam extraction and charge state distribution characterization. A campaign for correlating the plasma density and temperature with the output charge states and the beam intensity for different pumping wave frequencies, different magnetic field profiles, and single-gas/gas-mixing configurations was carried out. The results reveal a surprisingly good agreement between warm-electron density fluctuations, output beam currents, and the calculated electromagnetic modal density of the plasma chamber. A charge-coupled device camera coupled to a small pin-hole allowing X-ray imaging was installed, and numerous X-ray images were taken in order to study the peculiarities of the ECRIS plasma structure.

  5. A Cryptographic Moving-Knife Cake-Cutting Protocol

    Directory of Open Access Journals (Sweden)

    Yoshifumi Manabe

    2012-02-01

    Full Text Available This paper proposes a cake-cutting protocol using cryptography when the cake is a heterogeneous good that is represented by an interval on the real line. Although the Dubins-Spanier moving-knife protocol with one knife achieves simple fairness, all players must execute the protocol synchronously. Thus, the protocol cannot be executed on asynchronous networks such as the Internet. We show that the moving-knife protocol can be executed asynchronously by a discrete protocol using a secure auction protocol. The number of cuts is n-1, where n is the number of players; this is the minimum possible.
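
    The discretization described here replaces the continuously moving knife with bids: each remaining player reports the knife position at which the piece to the left would be worth 1/n to them, and the lowest bid wins that piece. The sketch below runs this bidding step in plaintext; the paper's contribution, running the step under a secure auction so that bids stay hidden, is deliberately omitted:

```python
# Hedged sketch of the discrete Dubins-Spanier step the paper makes
# cryptographic: plaintext bids instead of a secure auction protocol.

def cut_position(value_density, left, share, grid=10000):
    """Leftmost x such that the player's value of [left, x] reaches `share`."""
    total, x, step = 0.0, left, (1.0 - left) / grid
    while total < share and x < 1.0:
        total += value_density(x) * step
        x += step
    return x

def moving_knife(players):
    """players: dict name -> value density on [0,1] integrating to 1."""
    left, n, allocation = 0.0, len(players), {}
    remaining = dict(players)
    while len(remaining) > 1:
        # Each player bids the knife position worth exactly 1/n to them.
        bids = {p: cut_position(v, left, 1.0 / n) for p, v in remaining.items()}
        winner = min(bids, key=bids.get)          # lowest bid takes the piece
        allocation[winner] = (left, bids[winner])
        left = bids[winner]
        del remaining[winner]
    last = next(iter(remaining))
    allocation[last] = (left, 1.0)                # last player takes the rest
    return allocation                             # n-1 cuts in total

print(moving_knife({"A": lambda x: 1.0, "B": lambda x: 2 * x,
                    "C": lambda x: 2 - 2 * x}))
```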

  6. Medicina array demonstrator: calibration and radiation pattern characterization using a UAV-mounted radio-frequency source

    Science.gov (United States)

    Pupillo, G.; Naldi, G.; Bianchi, G.; Mattana, A.; Monari, J.; Perini, F.; Poloni, M.; Schiaffino, M.; Bolli, P.; Lingua, A.; Aicardi, I.; Bendea, H.; Maschio, P.; Piras, M.; Virone, G.; Paonessa, F.; Farooqui, Z.; Tibaldi, A.; Addamo, G.; Peverini, O. A.; Tascone, R.; Wijnholds, S. J.

    2015-06-01

    One of the most challenging aspects of the new-generation Low-Frequency Aperture Array (LFAA) radio telescopes is instrument calibration. The operational LOw-Frequency ARray (LOFAR) instrument and the future LFAA element of the Square Kilometre Array (SKA) require advanced calibration techniques to reach the expected outstanding performance. In this framework, a small array, called Medicina Array Demonstrator (MAD), has been designed and installed in Italy to provide a test bench for antenna characterization and calibration techniques based on a flying artificial test source. A radio-frequency tone is transmitted through a dipole antenna mounted on a micro Unmanned Aerial Vehicle (UAV) (hexacopter) and received by each element of the array. A modern digital FPGA-based back-end is responsible for both data-acquisition and data-reduction. A simple amplitude and phase equalization algorithm is exploited for array calibration owing to the high stability and accuracy of the developed artificial test source. Both the measured embedded element patterns and calibrated array patterns are found to be in good agreement with the simulated data. The successful measurement campaign has demonstrated that a UAV-mounted test source provides a means to accurately validate and calibrate the full-polarized response of an antenna/array in operating conditions, including consequently effects like mutual coupling between the array elements and contribution of the environment to the antenna patterns. A similar system can therefore find a future application in the SKA-LFAA context.
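
    The amplitude and phase equalization step mentioned here can be sketched as estimating one complex gain per element against the modelled test-source response and dividing it out; the element count and gain-corruption model below are illustrative assumptions, not MAD's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(42)
n_elem = 9                     # illustrative element count, not MAD's layout

# Modelled response of each element to the UAV test source (known geometry).
model = np.exp(1j * rng.uniform(0, 2 * np.pi, n_elem))

# Measured response: model corrupted by unknown per-element complex gains.
true_gain = (1 + 0.1 * rng.standard_normal(n_elem)) * \
            np.exp(1j * 0.2 * rng.standard_normal(n_elem))
measured = true_gain * model

# Calibration: estimate gains against the model, normalize to element 0,
# and apply the inverse to equalize amplitudes and phases across the array.
g_hat = measured / model
g_hat /= g_hat[0]
calibrated = measured / g_hat
print(np.round(np.abs(calibrated / model), 3))   # constant -> equalized
```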

  7. The Simplest Protocol for Oblivious Transfer

    DEFF Research Database (Denmark)

    Chou, Tung; Orlandi, Claudio

    2015-01-01

    Oblivious Transfer (OT) is the fundamental building block of cryptographic protocols. In this paper we describe the simplest and most efficient protocol for 1-out-of-n OT to date, which is obtained by tweaking the Diffie-Hellman key-exchange protocol. The protocol achieves UC-security against active and adaptive corruptions in the random oracle model. Due to its simplicity, the protocol is extremely efficient and it allows to perform m 1-out-of-n OTs using only: - Computation: (n+1)m+2 exponentiations (mn for the sender, m+2 for the receiver) and - Communication: 32(m+1) bytes (for the group ... optimizations) is at least one order of magnitude faster than previous work. Category / Keywords: cryptographic protocols / Oblivious Transfer, UC Security, Elliptic Curves, Efficient Implementation...
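
    One published way to realize such a Diffie-Hellman tweak (not necessarily this paper's exact construction) is for the receiver to blind the sender's public value into its own message, so that exactly one of the sender's derived keys matches the receiver's. The sketch below uses a toy multiplicative group and omits everything that makes a real protocol secure (elliptic-curve group, hashing the transcript into the keys, encoding checks); it shows the key-agreement skeleton only:

```python
import hashlib
import secrets

# Toy group: integers modulo a Mersenne prime. Insecure; illustration only.
p = 2**127 - 1
g = 3

def H(x: int) -> bytes:
    """Hash a group element to a symmetric key (a real protocol also hashes
    the transcript)."""
    return hashlib.sha256(x.to_bytes(16, "big")).digest()

n = 4                                  # 1-out-of-n OT
c = 2                                  # receiver's secret choice index

# Sender: publish A = g^a.
a = secrets.randbelow(p - 1)
A = pow(g, a, p)

# Receiver: B = A^c * g^b blinds the choice; derive the one key it can know.
b = secrets.randbelow(p - 1)
B = (pow(A, c, p) * pow(g, b, p)) % p
k_receiver = H(pow(A, b, p))           # = H(g^(ab))

# Sender: one key per index i, k_i = H((B * A^-i)^a); encrypt m_i under k_i.
A_inv = pow(A, -1, p)
sender_keys = [H(pow(B * pow(A_inv, i, p) % p, a, p)) for i in range(n)]

# Only the key at the receiver's choice index matches.
assert sender_keys[c] == k_receiver
assert all(sender_keys[i] != k_receiver for i in range(n) if i != c)
```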

  8. ABS-SmartComAgri: An Agent-Based Simulator of Smart Communication Protocols in Wireless Sensor Networks for Debugging in Precision Agriculture.

    Science.gov (United States)

    García-Magariño, Iván; Lacuesta, Raquel; Lloret, Jaime

    2018-03-27

    Smart communication protocols are becoming a key mechanism for improving communication performance in networks such as wireless sensor networks. However, the literature lacks mechanisms for simulating smart communication protocols in precision agriculture for decreasing production costs. In this context, the current work presents an agent-based simulator of smart communication protocols for efficiently managing pesticides. The simulator considers the needs of electric power, crop health, percentage of alive bugs and pesticide consumption. The current approach is illustrated with three different communication protocols respectively called (a) broadcast, (b) neighbor and (c) low-cost neighbor. The low-cost neighbor protocol obtained a statistically significant reduction in the need for electric power over the neighbor protocol, with a very large difference according to the common interpretations of Cohen's d effect size. The presented simulator is called ABS-SmartComAgri and is freely distributed as open source from a public research data repository. It ensures the reproducibility of experiments and allows other researchers to extend the current approach.

  9. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...
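
    The underlying tree-splitting mechanism can be illustrated without the coding/SIC layer: colliding users flip a coin and retry in one of two sub-slots until singletons emerge. The sketch below shows only this classic binary splitting component; the paper's premature termination and belief-propagation decoding across several tree instances are not reproduced here:

```python
import random

def splitting_tree(users):
    """Resolve a collision set with the classic binary splitting protocol.
    Returns the sequence of slot outcomes; singletons are successful slots."""
    slots, stack = [], [list(users)]
    while stack:
        group = stack.pop()
        if len(group) == 0:
            slots.append(("idle", None))
        elif len(group) == 1:
            slots.append(("success", group[0]))   # decodable singleton
        else:
            slots.append(("collision", None))
            left = [u for u in group if random.random() < 0.5]
            right = [u for u in group if u not in left]
            stack.append(right)
            stack.append(left)
    return slots

random.seed(1)
for outcome in splitting_tree(["u1", "u2", "u3", "u4"]):
    print(outcome)
```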

  10. Energy Efficient Clustering Protocol to Enhance Performance of Heterogeneous Wireless Sensor Network: EECPEP-HWSN

    Directory of Open Access Journals (Sweden)

    Santosh V. Purkar

    2018-01-01

    Full Text Available Heterogeneous wireless sensor networks (HWSN) fulfill the requirements of researchers in the design of real-life applications to resolve the issues of unattended operation. However, the main constraint faced by researchers is the energy source available with sensor nodes. To prolong the life of the sensor nodes, and thus of the HWSN, it is necessary to design energy efficient operational schemes. One of the most suitable approaches to enhance energy efficiency is the clustering scheme, which enhances the performance parameters of a WSN. The novel solution proposed in this article is to design an energy efficient clustering protocol for HWSN, EECPEP-HWSN, to enhance performance parameters. The proposed protocol is designed with three node levels, namely normal, advanced, and super. In the clustering process, for the selection of the cluster head we consider different parameters available with the sensor nodes at run time, that is, initial energy, hop count, and residual energy. This protocol enhances the energy efficiency of the HWSN and hence improves the energy remaining in the network, stability, lifetime, and hence throughput. It has been found that the proposed protocol outperforms the existing well-known LEACH, DEEC, and SEP protocols by about 188, 150, and 141 percent, respectively.
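
    The abstract names the inputs to cluster-head election (initial energy, residual energy, hop count) but not the exact formula; the weighting below is therefore an illustrative assumption, not the EECPEP-HWSN election rule:

```python
# Hypothetical cluster-head election metric: the weights and functional form
# are assumptions; the abstract only names the three run-time inputs.
def ch_score(initial_energy, residual_energy, hop_count,
             w_res=0.5, w_init=0.3, w_hop=0.2):
    """Higher score -> better cluster-head candidate."""
    return (w_res * residual_energy / initial_energy  # energy still available
            + w_init * initial_energy                 # node level (normal/adv/super)
            + w_hop / (1 + hop_count))                # prefer nodes near the sink

nodes = {
    "normal":   dict(initial_energy=1.0, residual_energy=0.6, hop_count=3),
    "advanced": dict(initial_energy=1.5, residual_energy=0.9, hop_count=2),
    "super":    dict(initial_energy=2.0, residual_energy=1.1, hop_count=4),
}
head = max(nodes, key=lambda n: ch_score(**nodes[n]))
print("elected cluster head:", head)
```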

  11. Model Additional Protocol

    International Nuclear Information System (INIS)

    Rockwood, Laura

    2001-01-01

    Since the end of the cold war a series of events has changed the circumstances and requirements of the safeguards system. The discovery of a clandestine nuclear weapons program in Iraq, the continuing difficulty in verifying the initial report of the Democratic People's Republic of Korea upon entry into force of their safeguards agreement, and the decision of the South African Government to give up its nuclear weapons program and join the Treaty on the Non-Proliferation of Nuclear Weapons have all played a role in an ambitious effort by IAEA Member States and the Secretariat to strengthen the safeguards system. A major milestone in this effort was reached in May 1997 when the IAEA Board of Governors approved a Model Protocol Additional to Safeguards Agreements. The Model Additional Protocol was negotiated over a period of less than a year by an open-ended committee of the Board involving some 70 Member States and two regional inspectorates. The IAEA is now in the process of negotiating additional protocols, State by State, and implementing them. These additional protocols will provide the IAEA with rights of access to information about all activities related to the use of nuclear material in States with comprehensive safeguards agreements and greatly expanded physical access for IAEA inspectors to confirm or verify this information. In conjunction with this, the IAEA is working on the integration of these measures with those provided for in comprehensive safeguards agreements, with a view to maximizing, within available resources, the effectiveness and efficiency of safeguards implementation. Details concerning the Model Additional Protocol are given. (author)

  12. Asymptotic adaptive bipartite entanglement-distillation protocol

    International Nuclear Information System (INIS)

    Hostens, Erik; Dehaene, Jeroen; De Moor, Bart

    2006-01-01

    We present an asymptotic bipartite entanglement-distillation protocol that outperforms all existing asymptotic schemes. This protocol is based on the breeding protocol with the incorporation of two-way classical communication. Like breeding, the protocol starts with an infinite number of copies of a Bell-diagonal mixed state. Breeding can be carried out as successive stages of partial information extraction, yielding the same result: one bit of information is gained at the cost (measurement) of one pure Bell state pair (ebit). The basic principle of our protocol is, at every stage, to replace measurements on ebits by measurements on a finite number of copies, whenever there are two equiprobable outcomes. In that case, the entropy of the global state is reduced by more than one bit. Therefore, every such replacement results in an improvement of the protocol. We explain how our protocol is organized so as to have as many replacements as possible. The yield is then calculated for Werner states

  13. Actions of a protocol for radioactive waste management

    International Nuclear Information System (INIS)

    Sousa, Joyce Caroline de Oliveira; Andrade, Idalmar Gomes da Silva; Frazão, Denys Wanderson Pereira; Abreu, Lukas Maxwell Oliveira de; França, Clyslane Alves; Macedo, Paulo de Tarso Silva de

    2017-01-01

    Radioactive wastes are all those materials generated in the various uses of radioactive materials which cannot be reused and which contain radioactive substances in quantities that cannot be treated as ordinary waste. All management of these wastes must be carried out carefully, through actions ranging from their collection at the point where they are generated to their final destination. Any and all procedures must be carried out in a way that meets the requirements for the protection of workers, individuals, the public and the environment. The final product of the study was a descriptive tutorial on the procedures and actions of a standard radioactive waste management protocol, developed from scientific publications on radiation protection. The management of radioactive waste is one of the essential procedures in the radiological protection of man and the environment wherever radioactive materials are handled. The standard radioactive waste management protocol includes: collection, segregation of the various types of wastes, transport, characterization, treatment, storage and final disposal. The typology of the radioactive wastes determines the sequencing and the way in which these actions are carried out. The standardization of mechanisms in the management of radioactive waste contributes to the radiological safety of all those involved

  14. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    19 organochlorine pesticides by gas chromatography. Only three of these samples had detectable pesticide concentrations. A separate sample of A-horizon soil was collected for microbial characterization by phospholipid fatty acid analysis (PLFA), soil enzyme assays, and determination of selected human and agricultural pathogens. Collection, preservation and analysis of samples for both organic compounds and microbial characterization add a great degree of complication to the sampling and preservation protocols and significantly increase the cost of a continental-scale survey. Both these issues must be considered carefully prior to adopting these parameters as part of the soil geochemical survey of North America.

  15. ASSESSMENT OF RIP-V1 AND OSPF-V2 PROTOCOL WITH CONSIDERATION OF CONVERGENCE CRITERIA AND SENDING PROTOCOLS TRAFFIC

    Directory of Open Access Journals (Sweden)

    Hamed Jelodar

    2014-03-01

    Full Text Available Routing protocols are underlying principles in networks such as the Internet and transport and mobile networks. Routing protocols comprise a series of rules and algorithms that evaluate routing metrics and select the best path for delivering data packets intact from origin to destination. Dynamic routing protocols adapt their state to changes in network topology. RIP and OSPF are dynamic routing protocols; we assess RIP version 1 and OSPF version 2 with consideration of convergence criteria and the traffic the protocols themselves generate. From tests performed in OPNET simulation, we found that the OSPF protocol was more efficient than the RIP protocol.

  16. Performance evaluation of a permanent ring magnet based helicon plasma source for negative ion source research

    Science.gov (United States)

    Pandey, Arun; Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.

    2017-10-01

    Helicon wave heated plasmas are much more efficient in terms of ionization per unit power consumed. A permanent magnet based compact helicon wave heated plasma source has been developed at the Institute for Plasma Research, after careful optimization of the geometry, the frequency of the RF power, and the magnetic field conditions. The HELicon Experiment for Negative ion-I (HELEN-I) source is the single-driver helicon plasma source that is being studied for the development of a large-sized, multi-driver negative hydrogen ion source. In this paper, details about the single-driver machine and the results from the characterization of the device are presented. A parametric study at different pressures and magnetic field values using a 13.56 MHz RF source has been carried out in argon plasma as an initial step towards source characterization. A theoretical model is also presented for the particle and power balance in the plasma. The ambipolar diffusion process taking place in a magnetized helicon plasma is also discussed.
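
    The abstract does not give the model's equations; for orientation, a standard low-pressure global (volume-averaged) particle and power balance, of the kind such models are usually built on, reads as follows (an assumed form after Lieberman and Lichtenberg, not necessarily the paper's exact model):

```latex
% Assumed global-model balances (after Lieberman & Lichtenberg);
% not necessarily the exact model used in the paper.
\begin{align*}
  \text{particle balance:} \quad
    & k_{\mathrm{iz}}(T_e)\, n_g\, n_e\, V = n_e\, u_B\, A_{\mathrm{eff}},
      \qquad u_B = \sqrt{k_B T_e / M_i} \\
  \text{power balance:} \quad
    & P_{\mathrm{abs}} = n_e\, u_B\, A_{\mathrm{eff}}\, \varepsilon_T(T_e)
\end{align*}
```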

  17. Objective and automated protocols for the evaluation of biomedical search engines using No Title Evaluation protocols.

    Science.gov (United States)

    Campagne, Fabien

    2008-02-29

    The evaluation of information retrieval techniques has traditionally relied on human judges to determine which documents are relevant to a query and which are not. This protocol is used in the Text Retrieval Evaluation Conference (TREC), organized annually for the past 15 years, to support the unbiased evaluation of novel information retrieval approaches. The TREC Genomics Track has recently been introduced to measure the performance of information retrieval for biomedical applications. We describe two protocols for evaluating biomedical information retrieval techniques without human relevance judgments. We call these protocols No Title Evaluation (NT Evaluation). The first protocol measures performance for focused searches, where only one relevant document exists for each query. The second protocol measures performance for queries expected to have potentially many relevant documents per query (high-recall searches). Both protocols take advantage of the clear separation of titles and abstracts found in Medline. We compare the performance obtained with these evaluation protocols to results obtained by reusing the relevance judgments produced in the 2004 and 2005 TREC Genomics Track and observe significant correlations between performance rankings generated by our approach and TREC. Spearman's correlation coefficients in the range of 0.79-0.92 are observed comparing bpref measured with NT Evaluation or with TREC evaluations. For comparison, coefficients in the range 0.86-0.94 can be observed when evaluating the same set of methods with data from two independent TREC Genomics Track evaluations. We discuss the advantages of NT Evaluation over the TRels and the data fusion evaluation protocols introduced recently. Our results suggest that the NT Evaluation protocols described here could be used to optimize some search engine parameters before human evaluation. Further research is needed to determine if NT Evaluation or variants of these protocols can fully substitute
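
    Rank agreement of the kind reported here can be reproduced with a Spearman correlation between two performance rankings of the same systems; the numbers below are placeholders, not the paper's measurements:

```python
from scipy.stats import spearmanr

# bpref scores of the same retrieval systems under two evaluations
# (placeholder values, not the paper's measurements).
bpref_nt   = [0.41, 0.35, 0.52, 0.28, 0.47, 0.33]  # No Title Evaluation
bpref_trec = [0.44, 0.25, 0.55, 0.31, 0.49, 0.30]  # TREC human judgments

rho, pval = spearmanr(bpref_nt, bpref_trec)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```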

  18. Antioxidants: Characterization, natural sources, extraction and analysis.

    Science.gov (United States)

    Oroian, Mircea; Escriche, Isabel

    2015-08-01

    Recently, many review papers regarding antioxidants from different sources and different extraction and quantification procedures have been published. However, none of them contains all the information regarding antioxidants (chemistry, sources, extraction and quantification). This article tries to take a different perspective on antioxidants for the new researcher involved in this field. Antioxidants from fruit, vegetables and beverages play an important role in human health, for example by preventing cancer and cardiovascular diseases and lowering the incidence of different diseases. In this paper the main classes of antioxidants are presented: vitamins, carotenoids and polyphenols. Recently, many analytical methodologies involving diverse instrumental techniques have been developed for the extraction, separation, identification and quantification of these compounds. Antioxidants have been quantified by different researchers using one or more of these methods: in vivo, in vitro, electrochemical, chemiluminescent, electron spin resonance, chromatography, capillary electrophoresis, nuclear magnetic resonance, near infrared spectroscopy and mass spectrometry methods. Copyright © 2015. Published by Elsevier Ltd.

  19. Controllable quantum private queries using an entangled Fibonacci-sequence spiral source

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Hong, E-mail: honglaimm@163.com [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Department of Computing, Macquarie University, Sydney, NSW 2109 (Australia); School of Science, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Orgun, Mehmet A. [Department of Computing, Macquarie University, Sydney, NSW 2109 (Australia); Pieprzyk, Josef [School of Electrical Engineering and Computer Science, Queensland University of Technology, Brisbane, QLD 4000 (Australia); Xiao, Jinghua [School of Science, Beijing University of Posts and Telecommunications, Beijing 100876 (China); Xue, Liyin [Corporate Analytics, The Australian Taxation Office, Sydney NSW 2000 (Australia); Jia, Zhongtian, E-mail: ise_jiazt@ujn.edu.cn [Provincial Key Laboratory for Network Based Intelligent Computing, University of Jinan, Jinan 250022 (China)

    2015-10-23

    Highlights: • Alice can easily control the size of a block by adjusting the parameter m rather than a high-dimension oracle. • The case of Alice knowing an exact multi-bit message can be realized deterministically. • Our protocol provides broad measures of protection against errors caused by the effect of noise. • Our protocol can greatly save both quantum and classical communication and exhibit some advantages in security. • Our protocol is scalable and flexible, and secure against quantum memory attacks by Alice. - Abstract: By changing the initial values in entangled Fibonacci-sequence spiral sources in Simon et al.'s (2013) experimental setup [13], we propose a controllable quantum private query protocol. Moreover, our protocol achieves flexible key expansion and even exhibits secure advantages during communications because of the following observations. We observe the close relationships between Lucas numbers and the first kind of Chebyshev maps, and the Chebyshev maps and k-Chebyshev maps; by adjusting the parameter m in k-Chebyshev maps, Alice and Bob can obtain their expected values of the key blocks and database respectively.

  20. Controllable quantum private queries using an entangled Fibonacci-sequence spiral source

    International Nuclear Information System (INIS)

    Lai, Hong; Orgun, Mehmet A.; Pieprzyk, Josef; Xiao, Jinghua; Xue, Liyin; Jia, Zhongtian

    2015-01-01

    Highlights: • Alice can easily control the size of a block by adjusting the parameter m rather than a high-dimension oracle. • The case of Alice knowing an exact multi-bit message can be realized deterministically. • Our protocol provides broad measures of protection against errors caused by the effect of noise. • Our protocol can greatly save both quantum and classical communication and exhibit some advantages in security. • Our protocol is scalable and flexible, and secure against quantum memory attacks by Alice. - Abstract: By changing the initial values in entangled Fibonacci-sequence spiral sources in Simon et al.'s (2013) experimental setup [13], we propose a controllable quantum private query protocol. Moreover, our protocol achieves flexible key expansion and even exhibits secure advantages during communications because of the following observations. We observe the close relationships between Lucas numbers and the first kind of Chebyshev maps, and the Chebyshev maps and k-Chebyshev maps; by adjusting the parameter m in k-Chebyshev maps, Alice and Bob can obtain their expected values of the key blocks and database respectively

  1. Chemical characterization of atmospheric particles and source apportionment in the vicinity of a steelmaking industry

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, S.M., E-mail: smarta@ctn.ist.utl.pt [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, Estrada Nacional 10, 139.7 km, 2695-066 Bobadela LRS (Portugal); Lage, J. [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, Estrada Nacional 10, 139.7 km, 2695-066 Bobadela LRS (Portugal); Fernández, B. [Global R&D, ArcelorMittal, Avilés (Spain); Garcia, S. [Instituto de Soldadura e Qualidade, Av. Prof. Dr. Cavaco Silva, 33, 2740-120 Porto Salvo (Portugal); Reis, M.A.; Chaves, P.C. [Centro de Ciências e Tecnologias Nucleares, Instituto Superior Técnico, Universidade de Lisboa, Estrada Nacional 10, 139.7 km, 2695-066 Bobadela LRS (Portugal)

    2015-07-15

    The objective of this work was to provide a chemical characterization of atmospheric particles collected in the vicinity of a steelmaking industry and to identify the sources that affect PM{sub 10} levels. A total of 94 PM samples were collected in two sampling campaigns that occurred in February and June/July of 2011. PM{sub 2.5} and PM{sub 2.5–10} were analyzed for a total of 22 elements by Instrumental Neutron Activation Analysis and Particle Induced X-ray Emission. The concentrations of water soluble ions in PM{sub 10} were measured by Ion Chromatography and Indophenol-Blue Spectrophotometry. The Positive Matrix Factorization receptor model was used to identify sources of particulate matter and to determine their mass contribution to PM{sub 10}. Seven main groups of sources were identified: marine aerosol identified by Na and Cl (22%), steelmaking and sinter plant represented by As, Cr, Cu, Fe, Ni, Mn, Pb, Sb and Zn (11%), sinter plant stack identified by NH{sub 4}{sup +}, K and Pb (12%), an unidentified Br source (1.8%), secondary aerosol from coke making and blast furnace (19%), fugitive emissions from the handling of raw material, sinter plant and vehicle dust resuspension identified by Al, Ca, La, Si, Ti and V (14%), and sinter plant and blast furnace associated essentially with Fe and Mn (21%). - Highlights: • Emissions from steelworks are very complex. • The largest steelworks contribution to PM{sub 10} was from the blast furnace and sinter plant. • Sinter plant stack emissions contributed 12% of the PM{sub 10} mass. • Secondary aerosol from coke making and blast furnace contributed 19% of the PM{sub 10}. • Fugitive dust emissions contribute strongly to the PM{sub 10} mass.

  2. Characterization of an ion beam produced by extraction and acceleration of ions from a wire plasma source

    International Nuclear Information System (INIS)

    Gueroult, R.

    2011-09-01

    In this study we first model a DC low-pressure wire plasma source and then characterize the properties of an ion gun derived from the plasma source. In order to study the properties of the derived ion gun, we develop a particle-in-cell code fitted to the modelling of the wire plasma source operation, and validate it against the results of an experimental study. In light of the simulation results, an analysis of the wire discharge in terms of a collisional Child-Langmuir ion flow in cylindrical geometry is proposed. We interpret the mode transition as a natural reorganization of the discharge when the current is increased above a threshold value which is a function of the discharge voltage, the pressure and the inter-electrode distance. In addition, the analysis of the energy distribution function of ions impacting the cathode demonstrates the ability to extract an ion beam of low energy spread around the discharge voltage, provided that the discharge is operated in its high-pressure mode. An ion source prototype allowing the extraction and acceleration of ions from the wire source is then proposed. The experimental study of such a device confirms that, apart from a shift corresponding to the accelerating voltage, the acceleration scheme does not spread the ion velocity distribution function along the axis of the beam. It is therefore possible to produce tunable-energy (0 - 5 keV) ion beams of various ionic species presenting limited energy dispersion (∼ 10 eV). Typical beam currents are a few tens of microamperes, and the divergence of such a beam is on the order of one degree. A numerical modelling of the ion source is finally conducted in order to identify potential optimizations of the concept. (author)
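
    For orientation, the ideal planar, collisionless Child-Langmuir law that the thesis generalizes to a collisional ion flow in cylindrical geometry is reproduced below; symbols follow the usual convention, and the collisional, cylindrical variant itself is not restated here.

```latex
% Planar, collisionless Child-Langmuir law: space-charge-limited current
% density J for ions of charge q and mass M crossing a gap d under a
% voltage V. The thesis analyses a collisional variant of this flow in
% cylindrical geometry, which is not restated here.
\[
J_{\mathrm{CL}} = \frac{4\varepsilon_0}{9}\sqrt{\frac{2q}{M}}\,\frac{V^{3/2}}{d^{2}}
\]
```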

  3. Silanization of silica and glass slides for DNA microarrays by impregnation and gas phase protocols: A comparative study

    International Nuclear Information System (INIS)

    Phaner-Goutorbe, Magali; Dugas, Vincent; Chevolot, Yann; Souteyrand, Eliane

    2011-01-01

    Surface immobilization of oligonucleotide probes (oligoprobes) is a key issue in the development of DNA-chips. The immobilization protocol should guarantee good availability of the probes, low non-specific adsorption and reproducibility. We have previously reported a silanization protocol with tert-butyl-11-(dimethylamino)silylundecanoate performed by impregnation (Impregnation Protocol, IP) of silica substrates from dilute silane solutions, leading to surfaces bearing carboxylic groups. In this paper, the Impregnation Protocol is compared with a Gas phase Protocol (GP) which is more suited to industrial requirements such as reliable and robust processing, cost efficiency, etc. The morphology of the oligoprobe films at the nanoscale (characterized by Atomic Force Microscopy) and the reproducibility of subsequent oligoprobe immobilization steps have been investigated for the two protocols on thermal silica (Si/SiO 2 ) and glass slide substrates. IP leads to smooth surfaces whereas GP induces the formation of island features suggesting a non-continuous silane layer. The reproducibility of the overall surface layer (18.75 mm 2 ) has been evaluated through the covalent immobilization of a fluorescent oligoprobe. Average fluorescent signals of 6 (a.u.) and 4 (a.u.) were observed for IP and GP, respectively, with a standard deviation of 1 for both protocols. Thus, despite a morphological difference of the silane layer at the nanometer scale, the density of the immobilized probes remained similar.

  4. USA-USSR protocol

    CERN Multimedia

    1970-01-01

    On 30 November the USA Atomic Energy Commission and the USSR State Committee for the Utilization of Atomic Energy signed, in Washington, a protocol 'on carrying out of joint projects in the field of high energy physics at the accelerators of the National Accelerator Laboratory (Batavia) and the Institute for High Energy Physics (Serpukhov)'. The protocol will be in force for five years and can be extended by mutual agreement.

  5. An Open-Source Approach for Catchment's Physiographic Characterization

    Science.gov (United States)

    Di Leo, M.; Di Stefano, M.

    2013-12-01

    A water catchment's hydrologic response is intimately linked to its morphological shape, which is a signature on the landscape of the particular climate conditions that generated the hydrographic basin over time. Furthermore, geomorphologic structures influence hydrologic regimes and land cover (vegetation). For these reasons, a basin's characterization is a fundamental element in hydrological studies. Physiographic descriptors were extracted manually for a long time, but currently Geographic Information System (GIS) tools ease this task by offering hydrologists a powerful instrument to save time and improve the accuracy of results. Here we present a program, combining the flexibility of the Python programming language with the reliability of GRASS GIS, which automatically performs the catchment's physiographic characterization. GRASS (Geographic Resources Analysis Support System) is a Free and Open Source GIS that today can look back on 30 years of successful development in geospatial data management and analysis, image processing, graphics and maps production, spatial modeling and visualization. The recent development of new hydrologic tools, coupled with the tremendous boost in the existing flow routing algorithms, reduced the computational time and made GRASS a complete toolset for hydrological analysis even for large datasets. The tool presented here is a module called r.basin, based on GRASS' traditional nomenclature, where the "r" stands for "raster", and it is available for GRASS version 6.x and more recently for GRASS 7. As input it uses a Digital Elevation Model and the coordinates of the outlet, and, powered by the recently developed r.stream.* hydrological tools, it performs the flow calculation, delimits the basin's boundaries and extracts the drainage network, returning the flow direction and accumulation, the distance to outlet and the hill slope length maps. Based on those maps, it calculates hydrologically meaningful shape factors and
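
    A minimal sketch of driving r.basin from GRASS's Python scripting API is shown below. It assumes an active GRASS session with the r.basin addon installed; the option names (map, prefix, coordinates) and the sample values are assumptions based on the addon's manual conventions and should be verified against the installed version.

```python
# Sketch of calling r.basin from GRASS's Python scripting API. Assumes an
# active GRASS session and the addon installed (g.extension r.basin).
# Option names and sample coordinates are assumptions; check locally.
import grass.script as gs

def characterize_basin(dem: str, outlet_east: float, outlet_north: float,
                       prefix: str = "basin"):
    # Delineates the basin draining to the outlet and derives the
    # physiographic/morphometric parameters from the DEM.
    gs.run_command(
        "r.basin",
        map=dem,                                  # input Digital Elevation Model
        prefix=prefix,                            # prefix for the output maps
        coordinates=(outlet_east, outlet_north),  # outlet position (E, N)
    )

characterize_basin("elevation", 637500.0, 221750.0)
```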

  6. Interlaboratory comparison of real-time pcr protocols for quantification of general fecal indicator bacteria

    Science.gov (United States)

    Shanks, O.C.; Sivaganesan, M.; Peed, L.; Kelty, C.A.; Blackwood, A.D.; Greene, M.R.; Noble, R.T.; Bushon, R.N.; Stelzer, E.A.; Kinzelman, J.; Anan'Eva, T.; Sinigalliano, C.; Wanless, D.; Griffith, J.; Cao, Y.; Weisberg, S.; Harwood, V.J.; Staley, C.; Oshima, K.H.; Varma, M.; Haugland, R.A.

    2012-01-01

    The application of quantitative real-time PCR (qPCR) technologies for the rapid identification of fecal bacteria in environmental waters is being considered for use as a national water quality metric in the United States. The transition from research tool to a standardized protocol requires information on the reproducibility and sources of variation associated with qPCR methodology across laboratories. This study examines interlaboratory variability in the measurement of enterococci and Bacteroidales concentrations from standardized, spiked, and environmental sources of DNA using the Entero1a and GenBac3 qPCR methods, respectively. Comparisons are based on data generated from eight different research facilities. Special attention was placed on the influence of the DNA isolation step and effect of simplex and multiplex amplification approaches on interlaboratory variability. Results suggest that a crude lysate is sufficient for DNA isolation unless environmental samples contain substances that can inhibit qPCR amplification. No appreciable difference was observed between simplex and multiplex amplification approaches. Overall, interlaboratory variability levels remained low (<10% coefficient of variation) regardless of qPCR protocol. © 2011 American Chemical Society.
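
    The reproducibility metric quoted above, the coefficient of variation (CV), is simply the across-laboratory standard deviation divided by the mean. A minimal sketch with made-up numbers (not data from the study):

```python
# Coefficient of variation (CV) across laboratories: between-lab standard
# deviation over the mean. The values below are placeholders, not study data.
import statistics

lab_estimates = [3.91, 4.02, 3.88, 4.10, 3.95, 4.05, 3.99, 3.93]  # log10 copies/reaction
cv = statistics.stdev(lab_estimates) / statistics.mean(lab_estimates) * 100
print(f"interlaboratory CV = {cv:.1f}%")  # below 10% indicates good reproducibility
```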

  7. [Computerized clinical protocol for occlusion].

    Science.gov (United States)

    Salsench, J; Ferrer, J; Nogueras, J

    1988-11-01

    In making a protocol, it is necessary that all members of the team who are going to collect information share the same criteria for the different variables that compose it. Drawing up this document is as necessary as, or more so than, the protocol itself. In this work we present all the data collected in the protocol and give an explanation of each concept.

  8. Bioremediation protocols

    National Research Council Canada - National Science Library

    Sheehan, David

    1997-01-01

    ... 3. 2. Granular Sludge Consortia for Bioremediation, Nina Christiansen, Indra M. Mathrani, and Birgitte K. Ahring ... 23. PART II PROTOCOLS...

  9. Self-Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks

    KAUST Repository

    Elwhishi, Ahmed; Ho, Pin-Han; Naik, K.; Shihada, Basem

    2013-01-01

    This paper introduces a novel multicopy routing protocol, called Self-Adaptive Utility-based Routing Protocol (SAURP), for Delay Tolerant Networks (DTNs) that are possibly composed of a vast number of devices in miniature such as smart phones of heterogeneous capacities in terms of energy resources and buffer spaces. SAURP is characterized by the ability of identifying potential opportunities for forwarding messages to their destinations via a novel utility function-based mechanism, in which a suite of environment parameters, such as wireless channel condition, nodal buffer occupancy, and encounter statistics, are jointly considered. Thus, SAURP can reroute messages around nodes experiencing high-buffer occupancy, wireless interference, and/or congestion, while taking a considerably small number of transmissions. The developed utility function in SAURP is proved to be able to achieve optimal performance, which is further analyzed via a stochastic modeling approach. Extensive simulations are conducted to verify the developed analytical model and compare the proposed SAURP with a number of recently reported encounter-based routing approaches in terms of delivery ratio, delivery delay, and the number of transmissions required for each message delivery. The simulation results show that SAURP outperforms all the counterpart multicopy encounter-based routing protocols considered in the study.
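
    To make the utility-based forwarding idea concrete, here is an illustrative sketch combining the three environment parameters named above (channel condition, buffer occupancy, encounter statistics) into a per-neighbour utility. The weights and functional form are assumptions for illustration; the paper's actual utility function and its optimality analysis are not reproduced here.

```python
# Illustrative sketch of a SAURP-style per-contact utility. The weights and
# functional form are assumptions; the paper's utility function differs.
def forwarding_utility(channel_quality, buffer_occupancy, encounter_rate,
                       w_ch=0.4, w_buf=0.3, w_enc=0.3):
    """Higher is better: a good channel, a free buffer, and frequent
    encounters with the destination all raise a neighbour's utility."""
    return (w_ch * channel_quality
            + w_buf * (1.0 - buffer_occupancy)
            + w_enc * encounter_rate)

def choose_relays(own_utility, neighbours):
    # Forward a message copy only to neighbours that improve on our utility,
    # keeping the number of transmissions small.
    return [n for n, u in neighbours.items() if u > own_utility]

me = forwarding_utility(0.6, 0.5, 0.2)
peers = {"a": forwarding_utility(0.9, 0.1, 0.7),
         "b": forwarding_utility(0.2, 0.9, 0.1)}
print(choose_relays(me, peers))  # -> ['a']
```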

  10. Self-Adaptive Contention Aware Routing Protocol for Intermittently Connected Mobile Networks

    KAUST Repository

    Elwhishi, Ahmed

    2013-07-01

    This paper introduces a novel multicopy routing protocol, called Self-Adaptive Utility-based Routing Protocol (SAURP), for Delay Tolerant Networks (DTNs) that are possibly composed of a vast number of devices in miniature such as smart phones of heterogeneous capacities in terms of energy resources and buffer spaces. SAURP is characterized by the ability of identifying potential opportunities for forwarding messages to their destinations via a novel utility function-based mechanism, in which a suite of environment parameters, such as wireless channel condition, nodal buffer occupancy, and encounter statistics, are jointly considered. Thus, SAURP can reroute messages around nodes experiencing high-buffer occupancy, wireless interference, and/or congestion, while taking a considerably small number of transmissions. The developed utility function in SAURP is proved to be able to achieve optimal performance, which is further analyzed via a stochastic modeling approach. Extensive simulations are conducted to verify the developed analytical model and compare the proposed SAURP with a number of recently reported encounter-based routing approaches in terms of delivery ratio, delivery delay, and the number of transmissions required for each message delivery. The simulation results show that SAURP outperforms all the counterpart multicopy encounter-based routing protocols considered in the study.

  11. Measurement and protocol for evaluating video and still stabilization systems

    Science.gov (United States)

    Cormier, Etienne; Cao, Frédéric; Guichard, Frédéric; Viard, Clément

    2013-01-01

    This article presents a system and a protocol to characterize image stabilization systems both for still images and videos. It uses a six axes platform, three being used for camera rotation and three for camera positioning. The platform is programmable and can reproduce complex motions that have been typically recorded by a gyroscope mounted on different types of cameras in different use cases. The measurement uses a single chart for still image and videos, the texture dead leaves chart. Although the proposed implementation of the protocol uses a motion platform, the measurement itself does not rely on any specific hardware. For still images, a modulation transfer function is measured in different directions and is weighted by a contrast sensitivity function (simulating the human visual system accuracy) to obtain an acutance. The sharpness improvement due to the image stabilization system is a good measurement of performance as recommended by a CIPA standard draft. For video, four markers on the chart are detected with sub-pixel accuracy to determine a homographic deformation between the current frame and a reference position. This model describes well the apparent global motion as translations, but also rotations along the optical axis and distortion due to the electronic rolling shutter equipping most CMOS sensors. The protocol is applied to all types of cameras such as DSC, DSLR and smartphones.
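
    A minimal sketch of the video measurement step described above: recovering the frame-to-reference homography from the four chart markers, here with OpenCV and made-up marker coordinates (sub-pixel marker detection is assumed to have happened upstream).

```python
# Sketch: fit the homography mapping the current frame's four chart markers
# back onto their reference positions, as in the video part of the protocol.
# Coordinates are placeholders; real markers are detected with sub-pixel
# accuracy upstream.
import numpy as np
import cv2

ref = np.array([[100, 100], [500, 100], [500, 400], [100, 400]], dtype=np.float32)
cur = np.array([[103,  98], [504, 102], [501, 405], [ 99, 403]], dtype=np.float32)

# 3x3 model capturing apparent translation, rotation about the optical axis,
# and rolling-shutter-like distortion between frames.
H, _ = cv2.findHomography(cur, ref)
print(np.round(H, 3))
```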

  12. ATM and Internet protocol

    CERN Document Server

    Bentall, M; Turton, B

    1998-01-01

    Asynchronous Transfer Mode (ATM) is a protocol that allows data, sound and video being transferred between independent networks via ISDN links to be supplied to, and interpreted by, the various system protocols.ATM and Internet Protocol explains the working of the ATM and B-ISDN network for readers with a basic understanding of telecommunications. It provides a handy reference to everyone working with ATM who may not require the full standards in detail, but need a comprehensive guide to ATM. A substantial section is devoted to the problems of running IP over ATM and there is some discussion o

  13. Group covariant protocols for quantum string commitment

    International Nuclear Information System (INIS)

    Tsurumaru, Toyohiro

    2006-01-01

    We study the security of quantum string commitment (QSC) protocols with a group covariant encoding scheme. First we consider a class of QSC protocols which is general enough to incorporate all the QSC protocols given in the preceding literature. Then, among those protocols, we consider group covariant protocols and show that the exact upper bound on the binding condition can be calculated. Next, using this result, we prove that for every irreducible representation of a finite group, there always exists a corresponding nontrivial QSC protocol which reaches a level of security impossible to achieve classically.

  14. Families of quantum fingerprinting protocols

    Science.gov (United States)

    Lovitz, Benjamin; Lütkenhaus, Norbert

    2018-03-01

    We introduce several families of quantum fingerprinting protocols to evaluate the equality function on two n-bit strings in the simultaneous message passing model. The original quantum fingerprinting protocol uses a tensor product of a small number of O(log n)-qubit high-dimensional signals [H. Buhrman et al., Phys. Rev. Lett. 87, 167902 (2001), 10.1103/PhysRevLett.87.167902], whereas a recently proposed optical protocol uses a tensor product of O(n) single-qubit signals, while maintaining the O(log n) information leakage of the original protocol [J. M. Arazola and N. Lütkenhaus, Phys. Rev. A 89, 062305 (2014), 10.1103/PhysRevA.89.062305]. We find a family of protocols which interpolate between the original and optical protocols while maintaining the O(log n) information leakage, thus demonstrating a tradeoff between the number of signals sent and the dimension of each signal. There has been interest in experimental realization of the recently proposed optical protocol using coherent states [F. Xu et al., Nat. Commun. 6, 8735 (2015), 10.1038/ncomms9735; J.-Y. Guan et al., Phys. Rev. Lett. 116, 240502 (2016), 10.1103/PhysRevLett.116.240502], but as the required number of laser pulses grows linearly with the input size n, eventual challenges for the long-time stability of experimental setups arise. We find a coherent state protocol which reduces the number of signals by a factor 1/2 while also reducing the information leakage. Our reduction makes use of a simple modulation scheme in optical phase space, and we find that more complex modulation schemes are not advantageous. Using a similar technique, we improve a recently proposed coherent state protocol for evaluating the Euclidean distance between two real unit vectors [N. Kumar et al., Phys. Rev. A 95, 032337 (2017), 10.1103/PhysRevA.95.032337] by reducing the number of signals by a factor 1/2 and also reducing the information leakage.
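
    For intuition, a classical analogue of fingerprinting for equality in the simultaneous message passing model is sketched below: Alice and Bob send short keyed-hash fingerprints of their n-bit strings to a referee. This is not any of the quantum protocols above (which achieve O(log n) information leakage with quantum resources); it only illustrates the task being solved.

```python
# Classical analogue for intuition only: equality testing via short
# keyed-hash fingerprints compared by a referee. Shared randomness (the
# salt) plays the role of the protocols' shared resources.
import hashlib
import secrets

def fingerprint(x: bytes, salt: bytes) -> bytes:
    # 8-byte keyed hash: collisions are possible but exponentially unlikely.
    return hashlib.blake2b(x, key=salt, digest_size=8).digest()

salt = secrets.token_bytes(16)      # shared randomness between Alice and Bob
alice, bob = b"same string", b"same string"
# The referee compares the two fingerprints; equal with high probability
# if and only if the underlying strings are equal.
print(fingerprint(alice, salt) == fingerprint(bob, salt))
```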

  15. A Dosimetric Characterization of the 137Cs Brachytherapy source to be used in Libyan Medical Centers

    International Nuclear Information System (INIS)

    Giaddui, T.; Eshaibani, R.; Assatel, O.

    2007-01-01

    A dosimetric characterization of the 137Cs brachytherapy source to be used in Libyan medical centers was carried out using analytical and Monte Carlo investigations. The dose rates in air across the transverse axis were calculated using a Monte Carlo code and the Sievert integral method, and good agreement between the results was achieved. The Monte Carlo code was then used to calculate the two-dimensional dose rates in water, and isodose curves were generated. The latter results were used to calculate the dose rate at the reference point, the radial dose function and the anisotropy function according to the American Association of Physicists in Medicine (AAPM) TG-43 formalism.
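
    For reference, the AAPM TG-43 two-dimensional dose-rate formalism invoked above has the standard form below; the paper's specific parameter values are not reproduced here.

```latex
% AAPM TG-43 two-dimensional dose-rate formalism: S_K is the air-kerma
% strength, Lambda the dose-rate constant, G(r,theta) the geometry function,
% g(r) the radial dose function, F(r,theta) the 2D anisotropy function,
% with the reference point (r_0, theta_0) = (1 cm, 90 degrees).
\[
\dot{D}(r,\theta) = S_K \, \Lambda \,
\frac{G(r,\theta)}{G(r_0,\theta_0)} \, g(r) \, F(r,\theta)
\]
```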

  16. In-memory interconnect protocol configuration registers

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  17. In-memory interconnect protocol configuration registers

    Science.gov (United States)

    Cheng, Kevin Y.; Roberts, David A.

    2017-09-19

    Systems, apparatuses, and methods for moving the interconnect protocol configuration registers into the main memory space of a node. The region of memory used for storing the interconnect protocol configuration registers may also be made cacheable to reduce the latency of accesses to the interconnect protocol configuration registers. Interconnect protocol configuration registers which are used during a startup routine may be prefetched into the host's cache to make the startup routine more efficient. The interconnect protocol configuration registers for various interconnect protocols may include one or more of device capability tables, memory-side statistics (e.g., to support two-level memory data mapping decisions), advanced memory and interconnect features such as repair resources and routing tables, prefetching hints, error correcting code (ECC) bits, lists of device capabilities, set and store base address, capability, device ID, status, configuration, capabilities, and other settings.

  18. Effectiveness of oxaliplatin desensitization protocols.

    Science.gov (United States)

    Cortijo-Cascajares, Susana; Nacle-López, Inmaculada; García-Escobar, Ignacio; Aguilella-Vizcaíno, María José; Herreros-de-Tejada, Alberto; Cortés-Funes Castro, Hernán; Calleja-Hernández, Miguel-Ángel

    2013-03-01

    Hypersensitivity reaction (HSR) to antineoplastic drugs can force doctors to stop treatment and seek other alternatives. These alternatives may be less effective, not as well tolerated and/or more expensive. Another option is to use desensitization protocols that induce a temporary state of tolerance by gradually administering small quantities of the antineoplastic drug until the therapeutic dosage is reached. The aim of this study is to assess the effectiveness of oxaliplatin desensitization protocols. A retrospective observational study was carried out between January 2006 and May 2011. The inclusion criteria were patients undergoing chemotherapy treatment with oxaliplatin who had developed an HSR to the drug and who were candidates for continuing the treatment using a desensitization protocol. The patients' clinical records were reviewed and variables were gathered relating to the patient, the treatment, the HSR, and the desensitization protocol administered. The data were analysed using version 18.0 of the statistics program SPSS. A total of 53 desensitization protocols were administered to 21 patients. In 89 % of these cases, no new reactions occurred while the drug was being administered. New reactions of mild severity only occurred in 11 % of cases, and none of these reactions were severe enough for treatment to be stopped. All patients were able to complete the desensitization protocol. This study confirms that oxaliplatin desensitization protocols are safe and effective and allow patients to continue with the treatment that initially caused an HSR.

  19. An energy-efficient MAC protocol using dynamic queue management for delay-tolerant mobile sensor networks.

    Science.gov (United States)

    Li, Jie; Li, Qiyue; Qu, Yugui; Zhao, Baohua

    2011-01-01

    Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity as well as data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay.
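
    An illustrative sketch of the two mechanisms named above, sink-initiated transfers and priority-based queue management, is given below. The class and priority semantics are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch of EQ-MAC-style queue management: data are queued with
# a priority and only released when the target sink initiates the transfer,
# avoiding untargeted transmissions. Priority semantics are assumptions.
import heapq

class EqMacQueue:
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, packet, priority):
        # Lower number = higher priority; seq keeps FIFO order within a class.
        heapq.heappush(self._heap, (priority, self._seq, packet))
        self._seq += 1

    def on_sink_request(self, n):
        """Release up to n packets only when the sink polls this node."""
        out = []
        while self._heap and len(out) < n:
            out.append(heapq.heappop(self._heap)[2])
        return out

q = EqMacQueue()
q.enqueue("temperature reading", priority=2)
q.enqueue("alarm", priority=0)
print(q.on_sink_request(1))  # -> ['alarm']
```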

  20. An Energy-Efficient MAC Protocol Using Dynamic Queue Management for Delay-Tolerant Mobile Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yugui Qu

    2011-02-01

    Conventional MAC protocols for wireless sensor networks perform poorly when faced with a delay-tolerant mobile network environment. Characterized by a highly dynamic and sparse topology, poor network connectivity as well as data delay-tolerance, delay-tolerant mobile sensor networks exacerbate the severe power constraints and memory limitations of nodes. This paper proposes an energy-efficient MAC protocol using dynamic queue management (EQ-MAC) for power saving and data queue management. Via data transfers initiated by the target sink and the use of a dynamic queue management strategy based on priority, EQ-MAC effectively avoids untargeted transfers, increases the chance of successful data transmission, and makes useful data reach the target terminal in a timely manner. Experimental results show that EQ-MAC has high energy efficiency in comparison with a conventional MAC protocol. It also achieves a 46% decrease in packet drop probability, 79% increase in system throughput, and 25% decrease in mean packet delay.

  1. DNA repair protocols

    DEFF Research Database (Denmark)

    Bjergbæk, Lotte

    In its 3rd edition, this Methods in Molecular Biology(TM) book covers the eukaryotic response to genomic insult including advanced protocols and standard techniques in the field of DNA repair. Offers expert guidance for DNA repair, recombination, and replication. Current knowledge of the mechanisms that regulate DNA repair has grown significantly over the past years with technology advances such as RNA interference, advanced proteomics and microscopy as well as high throughput screens. The third edition of DNA Repair Protocols covers various aspects of the eukaryotic response to genomic insult including recent advanced protocols as well as standard techniques used in the field of DNA repair. Both mammalian and non-mammalian model organisms are covered in the book, and many of the techniques can be applied with only minor modifications to other systems than the one described. Written in the highly...

  2. Verifying compliance with nuclear non-proliferation undertakings: IAEA safeguards agreements and additional protocols

    International Nuclear Information System (INIS)

    2008-06-01

    commonly used, for instance, in shielding of radioactive sources used in hospitals. Other radioactive materials, such as most radioactive sources and isotopes used in medicine, industry, agriculture, and water resource management, are not subject to safeguards and need not be reported to the IAEA under safeguards agreements. Reporting depends on the level of nuclear activity in the country. Declarations pursuant to safeguards agreements and additional protocols for States that do not have nuclear facilities are expected to be short and simple. The IAEA has prepared a document, available upon request, which provides guidance on the reporting requirements for such States. More elaborate guidelines have been prepared for States that do have nuclear facilities subject to routine safeguards inspections. Through its activities in the field, the IAEA seeks to verify the correctness and completeness of States' reports and declarations regarding nuclear material. Each State with a comprehensive safeguards agreement is required to establish and maintain a State system of accounting for and control of nuclear material (SSAC), which is the national authority formally designated to keep track of nuclear material and activities in the country. For all States with safeguards agreements in force, the IAEA draws an annual conclusion on the non-diversion of nuclear material and other items placed under safeguards. The IAEA's focal point for the negotiation of safeguards agreements and additional protocols, and the amendment of SQPs, is the Office of External Relations and Policy Coordination. Once a State has decided to conclude such an agreement and/or protocol, or amend its SQP, the IAEA can help the country with the implementation of related legal and technical requirements. The appendix of this publication explains how to conclude a comprehensive Safeguards Agreement and/or an Additional Protocol and provides 3 model notification letters for (a) conclusion of a safeguards agreement, a

  3. Verifying compliance with nuclear non-proliferation undertakings: IAEA safeguards agreements and additional protocols

    International Nuclear Information System (INIS)

    2008-04-01

    commonly used, for instance, in shielding of radioactive sources used in hospitals. Other radioactive materials, such as most radioactive sources and isotopes used in medicine, industry, agriculture, and water resource management, are not subject to safeguards and need not be reported to the IAEA under safeguards agreements. Reporting depends on the level of nuclear activity in the country. Declarations pursuant to safeguards agreements and additional protocols for States that do not have nuclear facilities are expected to be short and simple. The IAEA has prepared a document, available upon request, which provides guidance on the reporting requirements for such States. More elaborate guidelines have been prepared for States that do have nuclear facilities subject to routine safeguards inspections. Through its activities in the field, the IAEA seeks to verify the correctness and completeness of States' reports and declarations regarding nuclear material. Each State with a comprehensive safeguards agreement is required to establish and maintain a State system of accounting for and control of nuclear material (SSAC), which is the national authority formally designated to keep track of nuclear material and activities in the country. For all States with safeguards agreements in force, the IAEA draws an annual conclusion on the non-diversion of nuclear material and other items placed under safeguards. The IAEA's focal point for the negotiation of safeguards agreements and additional protocols, and the amendment of SQPs, is the Office of External Relations and Policy Coordination. Once a State has decided to conclude such an agreement and/or protocol, or amend its SQP, the IAEA can help the country with the implementation of related legal and technical requirements. The appendix of this publication explains how to conclude a comprehensive Safeguards Agreement and/or an Additional Protocol and provides 3 model notification letters for (a) conclusion of a safeguards agreement, a

  4. Phase Transition in Protocols Minimizing Work Fluctuations

    Science.gov (United States)

    Solon, Alexandre P.; Horowitz, Jordan M.

    2018-05-01

    For two canonical examples of driven mesoscopic systems—a harmonically trapped Brownian particle and a quantum dot—we numerically determine the finite-time protocols that optimize the compromise between the standard deviation and the mean of the dissipated work. In the case of the oscillator, we observe a collection of protocols that smoothly trade off between average work and its fluctuations. However, for the quantum dot, we find that as we shift the weight of our optimization objective from average work to work standard deviation, there is an analog of a first-order phase transition in protocol space: two distinct protocols exchange global optimality with mixed protocols akin to phase coexistence. As a result, the two types of protocols possess qualitatively different properties and remain distinct even in the infinite duration limit: optimal-work-fluctuation protocols never coalesce with the minimal-work protocols, which therefore never become quasistatic.

  5. Generalized routing protocols for multihop relay networks

    KAUST Repository

    Khan, Fahd Ahmed

    2011-07-01

    Performance of multihop cooperative networks depends on the routing protocols employed. In this paper we propose the last-n-hop selection protocol, the dual path protocol, the forward-backward last-n-hop selection protocol and the forward-backward dual path protocol for the routing of data through multihop relay networks. The average symbol error probability performance of the schemes is analysed through simulations. It is shown that close to optimal performance can be achieved by using the last-n-hop selection protocol and its forward-backward variant. Furthermore, we also compute the complexity of the protocols in terms of the amount of channel state information required and the number of comparisons required for routing the signal through the network. © 2011 IEEE.

  6. Optimizing Staining Protocols for Laser Microdissection of Specific Cell Types from the Testis Including Carcinoma In Situ

    DEFF Research Database (Denmark)

    Sonne, Si Brask; Dalgaard, Marlene D; Nielsen, John Erik

    2009-01-01

    Microarray and RT-PCR based methods are important tools for analysis of gene expression; however, in tissues containing many different cell types, such as the testis, characterization of gene expression in specific cell types can be severely hampered by noise from other cells. The laser microdissection ... protocols, and present two staining protocols for frozen sections, one for fast and specific staining of fetal germ cells, testicular carcinoma in situ cells, and other cells with embryonic stem cell-like properties that express alkaline phosphatase, and one for specific staining of lipid droplet

  7. Bioinspired Security Analysis of Wireless Protocols

    DEFF Research Database (Denmark)

    Petrocchi, Marinella; Spognardi, Angelo; Santi, Paolo

    2016-01-01

    ... work, this paper investigates the feasibility of adopting fraglets as a model for specifying security protocols and analysing their properties. In particular, we give a concrete sample analysis of a secure RFID protocol, showing the evolution of the protocol run as chemical dynamics and simulating an adversary

  8. A security analysis of the 802.11s wireless mesh network routing protocol and its secure routing protocols.

    Science.gov (United States)

    Tan, Whye Kit; Lee, Sang-Gon; Lam, Jun Huy; Yoo, Seong-Moo

    2013-09-02

    Wireless mesh networks (WMNs) can act as a scalable backbone by connecting separate sensor networks and even by connecting WMNs to a wired network. The Hybrid Wireless Mesh Protocol (HWMP) is the default routing protocol for the 802.11s WMN. The routing protocol is one of the most important parts of the network, and it requires protection, especially in the wireless environment. The existing security protocols, such as the Broadcast Integrity Protocol (BIP), Counter with cipher block chaining message authentication code protocol (CCMP), Secure Hybrid Wireless Mesh Protocol (SHWMP), Identity Based Cryptography HWMP (IBC-HWMP), Elliptic Curve Digital Signature Algorithm HWMP (ECDSA-HWMP), and Watchdog-HWMP aim to protect the HWMP frames. In this paper, we have analyzed the vulnerabilities of the HWMP and developed security requirements to protect these identified vulnerabilities. We applied the security requirements to analyze the existing secure schemes for HWMP. The results of our analysis indicate that none of these protocols is able to satisfy all of the security requirements. We also present a quantitative complexity comparison among the protocols and an example of a security scheme for HWMP to demonstrate how the result of our research can be utilized. Our research results thus provide a tool for designing secure schemes for the HWMP.

  9. Maximally efficient protocols for direct secure quantum communication

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Anindita [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); Department of Physics and Center for Astroparticle Physics and Space Science, Bose Institute, Block EN, Sector V, Kolkata 700091 (India); Pathak, Anirban, E-mail: anirban.pathak@jiit.ac.in [Department of Physics and Materials Science Engineering, Jaypee Institute of Information Technology, A-10, Sector-62, Noida, UP-201307 (India); RCPTM, Joint Laboratory of Optics of Palacky University and Institute of Physics of Academy of Science of the Czech Republic, Faculty of Science, Palacky University, 17. Listopadu 12, 77146 Olomouc (Czech Republic)

    2012-10-01

    Two protocols for deterministic secure quantum communication (DSQC) using GHZ-like states have been proposed. It is shown that one of these protocols is maximally efficient and that can be modified to an equivalent protocol of quantum secure direct communication (QSDC). Security and efficiency of the proposed protocols are analyzed and compared. It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. Maximally efficient QSDC protocols are shown to be more efficient than their DSQC counterparts. This additional efficiency arises at the cost of message transmission rate. -- Highlights: ► Two protocols for deterministic secure quantum communication (DSQC) are proposed. ► One of the above protocols is maximally efficient. ► It is modified to an equivalent protocol of quantum secure direct communication (QSDC). ► It is shown that dense coding is sufficient but not essential for DSQC and QSDC protocols. ► Efficient QSDC protocols are always more efficient than their DSQC counterparts.
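
    Efficiency comparisons of DSQC/QSDC protocols in this literature are conventionally made with Cabello's qubit-efficiency measure, reproduced below; whether the paper uses exactly this definition should be checked against the full text.

```latex
% Qubit efficiency in Cabello's sense, commonly used to compare DSQC/QSDC
% protocols: c = message bits communicated, q = qubits used, b = classical
% bits exchanged.
\[
\eta = \frac{c}{q + b}
\]
```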

  10. A secure distributed logistic regression protocol for the detection of rare adverse drug events.

    Science.gov (United States)

    El Emam, Khaled; Samet, Saeed; Arbuckle, Luk; Tamblyn, Robyn; Earle, Craig; Kantarcioglu, Murat

    2013-05-01

    There is limited capacity to assess the comparative risks of medications after they enter the market. For rare adverse events, the pooling of data from multiple sources is necessary to have the power and sufficient population heterogeneity to detect differences in safety and effectiveness in genetic, ethnic and clinically defined subpopulations. However, combining datasets from different data custodians or jurisdictions to perform an analysis on the pooled data creates significant privacy concerns that would need to be addressed. Existing protocols for addressing these concerns can result in reduced analysis accuracy and can allow sensitive information to leak. To develop a secure distributed multi-party computation protocol for logistic regression that provides strong privacy guarantees. We developed a secure distributed logistic regression protocol using a single analysis center with multiple sites providing data. A theoretical security analysis demonstrates that the protocol is robust to plausible collusion attacks and does not allow the parties to gain new information from the data that are exchanged among them. The computational performance and accuracy of the protocol were evaluated on simulated datasets. The computational performance scales linearly as the dataset sizes increase. The addition of sites results in an exponential growth in computation time. However, for up to five sites, the time is still short and would not affect practical applications. The model parameters are the same as the results on pooled raw data analyzed in SAS, demonstrating high model accuracy. The proposed protocol and prototype system would allow the development of logistic regression models in a secure manner without requiring the sharing of personal health information. This can alleviate one of the key barriers to the establishment of large-scale post-marketing surveillance programs. We extended the secure protocol to account for correlations among patients within sites through
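
    A toy sketch of the architecture only (one analysis center, several data-holding sites, per-site gradients of the logistic loss aggregated centrally) is given below. It deliberately omits the secure multi-party computation layer that provides the paper's privacy guarantees, and all names and data are illustrative.

```python
# Toy sketch of distributed logistic regression with a single analysis
# center. In the real protocol the per-site gradients would be exchanged
# under secure multi-party computation, not in the clear as done here.
import numpy as np

def local_gradient(X, y, beta):
    p = 1.0 / (1.0 + np.exp(-X @ beta))  # logistic predictions at this site
    return X.T @ (p - y)                 # gradient of the log-loss

def fit_distributed(sites, dim, lr=0.1, iters=200):
    beta = np.zeros(dim)
    for _ in range(iters):
        grad = sum(local_gradient(X, y, beta) for X, y in sites)
        beta -= lr * grad / sum(len(y) for _, y in sites)
    return beta

rng = np.random.default_rng(0)
true_beta = np.array([1.0, -2.0])
sites = []
for _ in range(3):  # three data custodians
    X = rng.normal(size=(100, 2))
    y = (rng.random(100) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)
    sites.append((X, y))
print(fit_distributed(sites, dim=2))  # close to the pooled-data estimate
```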

  11. Physical Therapy Protocols for Arthroscopic Bankart Repair.

    Science.gov (United States)

    DeFroda, Steven F; Mehta, Nabil; Owens, Brett D

    Outcomes after arthroscopic Bankart repair can be highly dependent on compliance and participation in physical therapy. Additionally, there are many variations in physician-recommended physical therapy protocols. The rehabilitation protocols of academic orthopaedic surgery departments vary widely despite the presence of consensus protocols. Descriptive epidemiology study. Level 3. Web-based arthroscopic Bankart rehabilitation protocols available online from Accreditation Council for Graduate Medical Education (ACGME)-accredited orthopaedic surgery programs were included for review. Individual protocols were reviewed to evaluate for the presence or absence of recommended therapies, goals for completion of ranges of motion, functional milestones, exercise start times, and recommended time to return to sport. Thirty protocols from 27 (16.4%) total institutions were identified out of 164 eligible for review. Overall, 9 (30%) protocols recommended an initial period of strict immobilization. Variability existed between the recommended time periods for sling immobilization (mean, 4.8 ± 1.8 weeks). The types of exercises and their start dates were also inconsistent. Goals to full passive range of motion (mean, 9.2 ± 2.8 weeks) and full active range of motion (mean, 12.2 ± 2.8 weeks) were consistent with other published protocols; however, wide ranges existed within the reviewed protocols as a whole. Only 10 protocols (33.3%) included a timeline for return to sport, and only 3 (10%) gave an estimate for return to game competition. Variation also existed when compared with the American Society of Shoulder and Elbow Therapists' (ASSET) consensus protocol. Rehabilitation protocols after arthroscopic Bankart repair were found to be highly variable. They also varied with regard to published consensus protocols. This discrepancy may lead to confusion among therapists and patients. This study highlights the importance of attending surgeons being very clear and specific with

  12. A Transcription and Translation Protocol for Sensitive Cross-Cultural Team Research.

    Science.gov (United States)

    Clark, Lauren; Birkhead, Ana Sanchez; Fernandez, Cecilia; Egger, Marlene J

    2017-10-01

    Assurance of transcript accuracy and quality in interview-based qualitative research is foundational for data accuracy and study validity. Based on our experience in a cross-cultural ethnographic study of women's pelvic organ prolapse, we provide practical guidance to set up step-by-step interview transcription and translation protocols for team-based research on sensitive topics. Beginning with team decisions about level of detail in transcription, completeness, and accuracy, we operationalize the process of securing vendors to deliver the required quality of transcription and translation. We also share rubrics for assessing transcript quality and the team protocol for managing transcripts (assuring consistency of format, insertion of metadata, anonymization, and file labeling conventions) and procuring an acceptable initial translation of Spanish-language interviews. Accurate, complete, and systematically constructed transcripts in both source and target languages respond to the call for more transparency and reproducibility of scientific methods.

  13. The Network Protocol Analysis Technique in Snort

    Science.gov (United States)

    Wu, Qing-Xiu

    Network protocol analysis is the technique of capturing packets with a network sniffer for further analysis and understanding of their contents. Network sniffing intercepts packets and reassembles the binary format of the original message content. To obtain the information contained in them, the packets must be decoded according to the TCP/IP protocol stack specifications, restoring their format and content layer by layer, up to the actual data transferred at the application tier.
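
    A minimal sketch of the layered decoding described above: unpacking an IPv4 header from raw captured bytes with Python's standard struct module and handing the payload to the next layer. The field layout follows RFC 791; this is an illustration, not Snort's own decoder.

```python
# Decode one layer of the stack: parse an IPv4 header from raw bytes and
# pass the payload up (e.g. to a TCP/UDP parser). Layout follows RFC 791.
import struct

def parse_ipv4(packet: bytes) -> dict:
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, csum, src, dst) = struct.unpack("!BBHHHBBH4s4s", packet[:20])
    header_len = (ver_ihl & 0x0F) * 4           # IHL is in 32-bit words
    return {
        "version": ver_ihl >> 4,
        "ttl": ttl,
        "protocol": proto,                      # e.g. 6 = TCP, 17 = UDP
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
        "payload": packet[header_len:total_len] # next layer up the stack
    }

# Usage: for an Ethernet frame, strip the 14-byte link-layer header first,
# e.g. parse_ipv4(frame[14:]).
```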

  14. Relationships of clinical protocols and reconstruction kernels with image quality and radiation dose in a 128-slice CT scanner: Study with an anthropomorphic and water phantom

    International Nuclear Information System (INIS)

    Paul, Jijo; Krauss, B.; Banckwitz, R.; Maentele, W.; Bauer, R.W.; Vogl, T.J.

    2012-01-01

    Research highlights: ► Clinical protocol, reconstruction kernel, reconstructed slice thickness, phantom diameter or the density of material it contains directly affects the image quality of DSCT. ► The dual-energy protocol shows the lowest DLP compared to all other protocols examined. ► Dual-energy fused images show excellent image quality and the noise is the same as that of single- or high-pitch mode protocol images. ► Advanced CT technology improves image quality and considerably reduces radiation dose. ► An important finding is the comparatively higher DLP of the dual-source high-pitch protocol compared to other single- or dual-energy protocols. - Abstract: Purpose: The aim of this study was to explore the relationship of scanning parameters (clinical protocols), reconstruction kernels and slice thickness with image quality and radiation dose in a DSCT. Materials and methods: The chest of an anthropomorphic phantom was scanned on a DSCT scanner (Siemens Somatom Definition Flash) using different clinical protocols, including single- and dual-energy modes. Four scan protocols were investigated: 1) single-source 120 kV, 110 mA s, 2) single-source 100 kV, 180 mA s, 3) high-pitch 120 kV, 130 mA s and 4) dual-energy with 100/Sn140 kV, eff.mA s 89, 76. The automatic exposure control was switched off for all the scans and the CTDIvol selected was between 7.12 and 7.37 mGy. The raw data were reconstructed using the reconstruction kernels B31f, B80f and B70f, and slice thicknesses were 1.0 mm and 5.0 mm. Finally, the same parameters and procedures were used for the scanning of the water phantom. The Friedman test and the Wilcoxon matched-pairs test were used for statistical analysis. Results: The DLP based on the given CTDIvol values showed significantly lower exposure for protocol 4 when compared to protocol 1 (percent difference 5.18%), protocol 2 (percent diff. 4.51%), and protocol 3 (percent diff. 8.81%). The highest change in Hounsfield Units was observed with dual

  15. Welfare Quality assessment protocol for laying hens = Welfare Quality assessment protocol voor leghennen

    NARCIS (Netherlands)

    Niekerk, van T.G.C.M.; Gunnink, H.; Reenen, van C.G.

    2012-01-01

    Results of a study on the Welfare Quality® assessment protocol for laying hens. It reports on the integration of welfare assessment into scores per criterion, as well as on the simplification of the Welfare Quality® assessment protocol. Results are given from the assessment of 122 farms.

  16. Characterization of the primary source of electrons in linear accelerators in clinical use; Caracterizacion de la fuente primaria de electrones en aceleradores lineales de uso clinico

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Extremera, M.; Gonzalez Infantes, W.; Lallena rojo, A. M.; Anguiano Millan, M.

    2013-07-01

    Monte Carlo simulation is currently considered the most accurate method for calculating doses due to electrons. The objective of this work is to characterize, via Monte Carlo simulation, the primary electron source of a linear accelerator in clinical use, in order to build a source model that substantially reduces calculation time in treatment simulations. (Author)

  17. Publication trends of study protocols in rehabilitation.

    Science.gov (United States)

    Jesus, Tiago S; Colquhoun, Heather L

    2017-09-04

    Growing evidence points to the need to publish study protocols in the health field. To observe whether the growing interest in publishing study protocols in the broader health field has been translated into increased publications of rehabilitation study protocols. Observational study using publication data and its indexation in PubMed. PubMed was searched with appropriate combinations of Medical Subject Headings up to December 2014. The effective presence of study protocols was manually screened. Regression models analyzed the yearly growth of publications. Two-sample Z-tests analyzed whether the proportion of Systematic Reviews (SRs) and Randomized Controlled Trials (RCTs) among study protocols differed from that of the same designs for the broader rehabilitation research. Up to December 2014, 746 publications of rehabilitation study protocols were identified, with an exponential growth since 2005 (r2=0.981; p<0.001). RCT protocols were the most common among rehabilitation study protocols (83%), while RCTs were significantly more prevalent among study protocols than among the broader rehabilitation research (83% vs. 35.8%; p<0.001). For SRs, the picture was reversed: significantly less common among study protocols (2.8% vs. 9.3%; p<0.001). Funding was more often reported by rehabilitation study protocols than the broader rehabilitation research (90% vs. 53.1%; p<0.001). Rehabilitation journals published a significantly lower share of rehabilitation study protocols than they did for the broader rehabilitation research (1.8% vs. 16.7%; p<0.001). Identifying the reasons for these discrepancies and reversing unwarranted disparities (e.g. the low rate of publication of rehabilitation SR protocols) are likely new avenues for rehabilitation research and its publication. SRs, particularly those aggregating RCT results, are considered the best standard of evidence to guide rehabilitation clinical practice; however, that standard can be improved

  18. Hydrocyanation of sulfonylimines using potassium hexacyanoferrate(II) as an eco-friendly cyanide source

    International Nuclear Information System (INIS)

    Li, Zheng; Li, Rongzhi; Zheng, Huanhuan; Wen, Fei; Li, Hongbo; Yin, Junjun; Yang, Jingya

    2013-01-01

    An efficient and eco-friendly method for the hydrocyanation of sulfonylimines via a one-pot, two-step procedure using potassium hexacyanoferrate(II) as the cyanide source, benzoyl chloride as a promoter, and potassium carbonate as a base is described. This protocol features the use of a nontoxic, nonvolatile and inexpensive cyanide source, high yields, and a simple work-up procedure. (author)

  19. Hydrocyanation of sulfonylimines using potassium hexacyanoferrate(II) as an eco-friendly cyanide source

    Energy Technology Data Exchange (ETDEWEB)

    Li, Zheng; Li, Rongzhi; Zheng, Huanhuan; Wen, Fei; Li, Hongbo; Yin, Junjun; Yang, Jingya, E-mail: lizheng@nwnu.edu.cn [Key Laboratory of Eco-Environment-Related Polymer Materials for Ministry of Education, College of Chemistry and Chemical Engineering, Northwest Normal University, Gansu (China)

    2013-11-15

    An efficient and eco-friendly method for the hydrocyanation of sulfonylimines via a one-pot, two-step procedure using potassium hexacyanoferrate(II) as the cyanide source, benzoyl chloride as a promoter, and potassium carbonate as a base is described. This protocol features the use of a nontoxic, nonvolatile and inexpensive cyanide source, high yields, and a simple work-up procedure. (author)

  20. Characterization of PM10 sources in the central Mediterranean

    Science.gov (United States)

    Calzolai, G.; Nava, S.; Lucarelli, F.; Chiari, M.; Giannoni, M.; Becagli, S.; Traversi, R.; Marconi, M.; Frosini, D.; Severi, M.; Udisti, R.; di Sarra, A.; Pace, G.; Meloni, D.; Bommarito, C.; Monteleone, F.; Anello, F.; Sferlazzo, D. M.

    2015-12-01

    The Mediterranean Basin atmosphere is influenced by both strong natural and anthropogenic aerosol emissions and is also subject to important climatic forcings. Several programs have addressed the study of the Mediterranean basin; nevertheless important pieces of information are still missing. In this framework, PM10 samples were collected on a daily basis on the island of Lampedusa (35.5° N, 12.6° E; 45 m a.s.l.), which is far from continental pollution sources (the nearest coast, in Tunisia, is more than 100 km away). After mass gravimetric measurements, different portions of the samples were analyzed to determine the ionic content by ion chromatography (IC), the soluble metals by inductively coupled plasma atomic emission spectrometry (ICP-AES), and the total (soluble + insoluble) elemental composition by particle-induced x-ray emission (PIXE). Data from 2007 and 2008 are used in this study. The Positive Matrix Factorization (PMF) model was applied to the 2-year long data set of PM10 mass concentration and chemical composition to assess the aerosol sources affecting the central Mediterranean basin. Seven sources were resolved: sea salt, mineral dust, biogenic emissions, primary particulate ship emissions, secondary sulfate, secondary nitrate, and combustion emissions. Source contributions to the total PM10 mass were estimated to be about 40 % for sea salt, around 25 % for mineral dust, 10 % each for secondary nitrate and secondary sulfate, and 5 % each for primary particulate ship emissions, biogenic emissions, and combustion emissions. Large variations in absolute and relative contributions are found and appear to depend on the season and on transport episodes. In addition, the secondary sulfate due to ship emissions was estimated and found to contribute by about one-third to the total sulfate mass. Results for the sea-salt and mineral dust sources were compared with estimates of the same contributions obtained from independent approaches, leading to an
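
    Positive Matrix Factorization approximates the samples-by-species concentration matrix as a product of non-negative source contributions and source profiles, weighting residuals by measurement uncertainty. As an unweighted stand-in for the idea, a sketch using scikit-learn's NMF with placeholder data is shown below; the study itself used the PMF model, not NMF.

```python
# PMF factorizes the concentration matrix X (samples x species) into
# non-negative source contributions G and source profiles F, X ~ G.F,
# weighting residuals by measurement uncertainty. Plain NMF shows the core
# factorization idea; the data below are random placeholders, not the study's.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = rng.random((94, 22))            # e.g. 94 PM10 samples x 22 measured species

model = NMF(n_components=7, init="nndsvda", max_iter=500, random_state=0)
G = model.fit_transform(X)          # sample-wise contributions of 7 sources
F = model.components_               # chemical profile of each source

# Percent contribution of each factor to total reconstructed mass:
reconstructed = G @ F
share = (G * F.sum(axis=1)).sum(axis=0) / reconstructed.sum() * 100
print(np.round(share, 1))
```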