WorldWideScience

Sample records for codes fram mga

  1. The FRAM code: Description and some comparisons with MGA

    Sampson, T.E.; Kelley, T.A.

    1994-01-01

    The authors describe the initial development of the FRAM gamma-ray spectrometry code for analyzing plutonium isotopics, discuss its methodology, and present some comparisons with MGA on identical items. They also present some of the features of a new Windows 3.1-based version (PC/FRAM) and describe some current measurement problems. Development of the FRAM code began in about 1985, growing out of the need at the Los Alamos TA-55 Plutonium Facility for an isotopic analysis code that would give accurate results for the effective specific power of heterogeneous (Am/Pu) pyrochemical residues. These residues present a difficult challenge because the americium is present mostly in a low-Z salt matrix (AmCl3) with fines and small pieces of plutonium metal dispersed throughout the salt. Plutonium gamma rays suffer different attenuation than americium gamma rays of the same energy; this makes conventional analysis with a single relative efficiency function inaccurate for Am/Pu ratios and affects the analysis in other subtle ways.

  2. Comparison of three gamma ray isotopic determination codes: FRAM, MGA, and TRIFID

    Cremers, T.L.; Malcom, J.E.; Bonner, C.A.

    1994-01-01

    The determination of the isotopic distribution of plutonium and the americium concentration is required for the assay of nuclear material by calorimetry or neutron coincidence counting. The isotopic information is used in calorimetric assay to compute the effective specific power from the measured isotopic fractions and the known specific power of each isotope. The effective specific power is combined with the heat measurement to obtain the mass of plutonium in the assayed nuclear material. The response of neutron coincidence counters is determined by the 240Pu isotopic fraction, with contributions from the other even plutonium isotopes. The effects of the 240Pu isotopic fraction and the other neutron-contributing isotopes are combined as 240Pu effective. This is used to calculate the mass of nuclear material from the neutron counting data in a manner analogous to the effective specific power in calorimetric assay. Comparisons of the precision and accuracy of calorimetric assay and neutron coincidence counting often focus only on the precision and accuracy of the heat measurement (calorimetry) compared to the counting statistics of neutron coincidence counting. The major source of uncertainty for both calorimetric assay and neutron coincidence counting often lies in the determination of the plutonium isotopic distribution as determined by gamma-ray spectroscopy. Thus, the selection of the appropriate isotopic distribution code is of paramount importance to good calorimetric assay and neutron coincidence counting. Three gamma-ray isotopic distribution codes, FRAM, MGA, and TRIFID, have been compared at the Los Alamos Plutonium Facility under carefully controlled conditions of similar count rates, count times, and 240Pu isotopic fractions.
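    The two derived quantities described above can be sketched in a few lines. The specific-power constants below are nominal published values and the isotopic fractions are illustrative, not taken from this study:

```python
# Effective specific power and 240Pu-effective, as described in the
# abstract. SPECIFIC_POWER holds nominal mW/g values per isotope
# (illustrative; a real assay would use evaluated nuclear data).
SPECIFIC_POWER = {
    "Pu238": 567.57, "Pu239": 1.9288, "Pu240": 7.0824,
    "Pu241": 3.412, "Pu242": 0.1159, "Am241": 114.2,
}

def effective_specific_power(mass_fractions):
    """P_eff = sum_i f_i * P_i, in mW per gram of material."""
    return sum(f * SPECIFIC_POWER[iso] for iso, f in mass_fractions.items())

def pu240_effective(f238, f240, f242):
    """Conventional weighting of the even isotopes' spontaneous-fission
    contributions: 240Pu_eff = 2.52*f238 + f240 + 1.68*f242."""
    return 2.52 * f238 + f240 + 1.68 * f242

# Illustrative low-burnup composition with a trace of 241Am:
fractions = {"Pu238": 0.0001, "Pu239": 0.938, "Pu240": 0.058,
             "Pu241": 0.0035, "Pu242": 0.0004, "Am241": 0.0005}
p_eff = effective_specific_power(fractions)   # mW per gram of material
pu_mass_g = 2500.0 / p_eff                    # mass from a hypothetical 2.5 W heat measurement
```

    In calorimetric assay the measured heat divided by P_eff gives the material mass; 240Pu_eff plays the analogous role for neutron coincidence counting.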

  3. Achievements in testing of the MGA and FRAM isotopic software codes under the DOE/NNSA-IRSN cooperation of gamma-ray isotopic measurement systems

    Vo, Duc; Wang, Tzu-Fang; Funk, Pierre; Weber, Anne-Laure; Pepin, Nicolas; Karcher, Anna

    2009-01-01

    DOE/NNSA and IRSN collaborated on a study of gamma-ray instruments and analysis methods used to perform isotopic measurements of special nuclear materials. The two agencies agreed to collaborate on the project in response to inconsistencies that were found in the various versions of software and hardware used to determine the isotopic abundances of uranium and plutonium. IRSN used software developed internally to test the MGA and FRAM isotopic analysis codes for criteria used to stop data acquisition. The stop-criterion test revealed several unusual behaviors in both the MGA and FRAM software codes.

  4. Uranium Isotopic Analysis with the FRAM Isotopic Analysis Code

    Vo, D.T.; Sampson, T.E.

    1999-01-01

    FRAM is the acronym for Fixed-Energy Response-Function Analysis with Multiple efficiency. This software was developed at Los Alamos National Laboratory, originally for plutonium isotopic analysis; it was later adapted for uranium isotopic analysis as well. The code is based on self-calibration, using several gamma-ray peaks to determine the isotopic ratios. A versatile parameter database structure governs all facets of the data analysis. User editing of the parameter sets allows great flexibility in handling data with different isotopic distributions, interfering isotopes, and different acquisition parameters such as energy calibration and detector type.
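    The self-calibration described above rests on the standard relation between a net peak area and the emitting isotope's atom count, C = λ·N·BR·ε·t, so an isotopic ratio follows from two peak areas once relative efficiency is known. A minimal sketch; the branching ratios and efficiencies below are made-up illustrative numbers (only the Pu-239/Pu-240 half-lives are the usual published values):

```python
import math

def atom_ratio(area1, area2, br1, br2, half_life1_yr, half_life2_yr, eff1, eff2):
    """N1/N2 from net peak areas C = lambda * N * BR * eff * t:
       N1/N2 = (C1/C2) * (BR2/BR1) * (eff2/eff1) * (lambda2/lambda1)."""
    lam1 = math.log(2) / half_life1_yr
    lam2 = math.log(2) / half_life2_yr
    return (area1 / area2) * (br2 / br1) * (eff2 / eff1) * (lam2 / lam1)

# Self-consistency check with synthetic peak areas (all numbers illustrative):
true_ratio = 17.0                                   # assumed N(239)/N(240)
lam239, lam240 = math.log(2) / 24110.0, math.log(2) / 6564.0
c1_over_c2 = true_ratio * (lam239 / lam240) * (0.05 / 0.03) * (1.1 / 0.9)
r = atom_ratio(c1_over_c2, 1.0, 0.05, 0.03, 24110.0, 6564.0, 1.1, 0.9)
```

    Generating areas from an assumed ratio and recovering it checks that the algebra inverts cleanly; FRAM's actual analysis fits many peaks and an efficiency model simultaneously.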

  5. Measurement of Plutonium Isotopic Composition - MGA

    Vo, Duc Ta [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2015-08-21

    In this module, we will use the Canberra InSpector-2000 Multichannel Analyzer with a high-purity germanium (HPGe) detector and the MGA isotopic analysis software to assay a variety of plutonium samples. The module provides an understanding of the MGA method, its attributes, and its limitations. You will assess the system performance by measuring a range of materials similar to those you may assay in your work. During the final verification exercise, the results from MGA will be combined with the 240Pueff results from neutron coincidence or multiplicity counters so that measurements of the plutonium mass can be compared with the operator-declared (certified) values.

  6. Varyasyong Leksikal sa mga Dayalektong Mandaya

    Dr. Raymund M. Pasion

    2014-12-01

    Full Text Available The general aim of this study was to examine lexical variation in the Mandaya language found in the province of Davao Oriental. As a basis for data collection, cultural livelihood terms such as those for farming, hunting, fishing, and animal husbandry were used, drawn from the Indigenous Knowledge System and Practices (IKSP). The study sought to answer the question: what lexical variations can be seen in the cultural livelihood terms of the Mandaya found in the municipalities of Caraga, Manay, Baganga, and Cateel? A qualitative design was used, with indigenous and descriptive methods applied from data collection through data analysis. Informants were selected through a combination of purposive and snowball sampling. The study found that the Mandaya language exhibits lexical variation in terms that differ entirely in form, terms that are similar in form, and terms that are identical in form but differ in pronunciation. These lexical variations are believed to arise from geographical, psychological, and sociological factors at work in the culture.

  7. The Fram Strait integrated ocean observatory

    Fahrbach, E.; Beszczynska-Möller, A.; Rettig, S.; Rohardt, G.; Sagen, H.; Sandven, S.; Hansen, E.

    2012-04-01

    A long-term oceanographic moored array has been operated since 1997 to measure the ocean water column properties and oceanic advective fluxes through Fram Strait. While the mooring line along 78°50'N is devoted to monitoring variability of the physical environment, the AWI Hausgarten observatory, located north of it, focuses on ecosystem properties and benthic biology. Under the EU DAMOCLES and ACOBAR projects, the oceanographic observatory has been extended towards an innovative integrated observing system, combining the deep ocean moorings, a multipurpose acoustic system, and a network of gliders. The main aim of this system is long-term environmental monitoring in Fram Strait, combining satellite data, acoustic tomography, oceanographic measurements at moorings and glider sections with high-resolution ice-ocean circulation models through data assimilation. In a future perspective, a cable connection between the Hausgarten observatory and a land base on Svalbard is planned as the implementation of the ESONET Arctic node. To take advantage of the planned cabled node, different technologies for underwater data transmission were reviewed and partially tested under the ESONET DM AOEM. The main focus was to design and evaluate available technical solutions for collecting data from different components of the Fram Strait ocean observing system, and an integration of available data streams for optimal delivery to the future cabled node. The main components of the Fram Strait integrated observing system will be presented and the current status of available technologies for underwater data transfer will be reviewed. In the long term, an initiative of Helmholtz observatories foresees the interdisciplinary Earth-Observing-System FRAM, which combines observatories such as the long-term deep-sea ecological observatory HAUSGARTEN, the oceanographic Fram Strait integrated observing system, and the Svalbard coastal stations maintained by the Norwegian ARCTOS network.

  8. Recent improvements in plutonium gamma-ray analysis using MGA

    Ruhter, W.D.; Gunnink, R.

    1992-06-01

    MGA is a gamma-ray spectrum analysis program for determining relative plutonium isotopic abundances. It can determine plutonium isotopic abundances to better than 1% using a high-resolution, low-energy, planar germanium detector and measurement times of ten minutes or less. We have modified MGA to allow determination of absolute plutonium isotopic abundances in solutions. With calibration of a detector using a known solution concentration in a well-defined sample geometry, plutonium solution concentrations can be determined. MGA can also analyze a second, high-energy spectrum to determine fission product abundances relative to total plutonium. For the high-energy gamma-ray measurements we have devised a new hardware configuration in which both the low- and high-energy gamma-ray detectors are mounted in a single cryostat, thereby reducing the weight and volume of the detector systems. We describe the detector configuration and the performance of the MGA program for determining plutonium concentrations in solutions and fission product abundances.

  9. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.
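    Although mGA1.0 itself is Common LISP, the core encoding ideas mentioned above — variable-length strings of (locus, allele) genes, expression against a template, and the messy cut and splice operators — can be restated as a toy sketch (this is an illustration of the encoding, not the documented code):

```python
# A messy chromosome is a variable-length list of (locus, allele) pairs.
# Overspecified loci resolve first-come-first-served (left-to-right
# precedence); underspecified loci fall back to a competitive template.
def express(chromosome, template):
    bits = list(template)
    seen = set()
    for locus, allele in chromosome:      # earlier genes win ties
        if locus not in seen:
            bits[locus] = allele
            seen.add(locus)
    return bits

def cut(chromosome, point):
    """Messy 'cut' operator: split one chromosome into two."""
    return chromosome[:point], chromosome[point:]

def splice(a, b):
    """Messy 'splice' operator: concatenate two chromosomes."""
    return a + b

# Locus 0 appears twice below; the first occurrence (allele 1) wins.
expressed = express([(0, 1), (2, 0), (0, 0)], template=[0, 0, 1, 1])
```

    Expressing `[(0, 1), (2, 0), (0, 0)]` against template `[0, 0, 1, 1]` yields `[1, 0, 0, 1]`: locus 0 takes the first allele seen, locus 2 is overridden, loci 1 and 3 come from the template.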

  10. FRAM Modelling Complex Socio-technical Systems

    Hollnagel, Erik

    2012-01-01

    There has not yet been a comprehensive method that goes behind 'human error' and beyond the failure concept, and various complicated accidents have accentuated the need for one. The Functional Resonance Analysis Method (FRAM) fulfils that need. This book presents a detailed and tested method that can be used to model how complex and dynamic socio-technical systems work, and to understand both why things sometimes go wrong and why they normally succeed.

  11. FRAM telescope - monitoring of atmospheric extinction and variable star photometry

    Jurysek, J.; Honkova, K.; Masek, M.

    2015-02-01

    The FRAM (F/(Ph)otometric Robotic Atmospheric Monitor) telescope is part of the Pierre Auger Observatory (PAO), located near the town of Malargüe in Argentina. The main task of the FRAM telescope is continuous night-time monitoring of the atmospheric extinction and its wavelength dependence. The current measurement methodology and the instrument's properties also allow simultaneous observation of other interesting astronomical targets. The current observations of the FRAM telescope are focused on the photometry of eclipsing binaries, positional refinement of minor bodies of the Solar System, and observations of optical counterparts of gamma-ray bursts. In this contribution, we briefly describe the main purpose of the FRAM telescope for the PAO and we also present its current astronomical observing program.

  12. Aerosol Measurements with the FRAM Telescope

    Ebr Jan

    2017-01-01

    Full Text Available Precision stellar photometry using a telescope equipped with a CCD camera is an obvious way to measure the total aerosol content of the atmosphere, as the apparent brightness of every star is affected by scattering. Achieving high precision in the vertical aerosol optical depth (at the level of 0.01) presents a series of interesting challenges. Using 3.5 years of data taken by the FRAM instrument at the Pierre Auger Observatory, we have developed a set of methods and tools to overcome most of these challenges. We use a wide-field camera and measure stars over a large span in airmass to eliminate the need for absolute calibration of the instrument. The main issues for data processing include camera calibration, source identification in a curved field, catalog deficiencies, automated aperture photometry in rich fields with lens distortion, and corrections for star color. In the next step, we model the airmass dependence of the extinction and subtract the Rayleigh component of scattering, using laboratory measurements of the spectral sensitivity of the device. In this contribution, we focus on the caveats and solutions found during the development of the methods, as well as several issues yet to be solved. Finally, future outlooks, such as the possibility of precision measurements of the wavelength dependence of the extinction, are discussed.
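    The airmass trick described above amounts to a straight-line fit of the star-by-star magnitude offset against airmass: the slope is the total extinction, and subtracting an assumed molecular (Rayleigh) term leaves the aerosol part. A sketch on synthetic numbers (the 0.12 mag/airmass Rayleigh value is an assumption for illustration, not taken from the paper):

```python
# Fit m_obs - m_cat = zp + k*X over stars spanning a range of airmass X.
# The slope k is the total extinction in mag/airmass; the intercept zp is
# the instrumental zero point, so no absolute calibration is needed.
def fit_extinction(airmass, dmag):
    n = len(airmass)
    mx = sum(airmass) / n
    my = sum(dmag) / n
    k = (sum((x - mx) * (y - my) for x, y in zip(airmass, dmag))
         / sum((x - mx) ** 2 for x in airmass))
    zp = my - k * mx
    return zp, k

X = [1.0, 1.3, 1.7, 2.2, 2.8]             # airmasses of synthetic stars
true_zp, true_k = 21.5, 0.18               # synthetic zero point and extinction
dm = [true_zp + true_k * x for x in X]     # noiseless magnitude offsets
zp, k = fit_extinction(X, dm)
k_rayleigh = 0.12                          # assumed molecular term (mag/airmass)
aod_mag = k - k_rayleigh                   # aerosol contribution in magnitudes
```

    Real data adds noise, color terms, and lens-distortion corrections on top of this fit, but the slope-minus-Rayleigh structure is the core of the measurement.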

  13. Mga2 transcription factor regulates an oxygen-responsive lipid homeostasis pathway in fission yeast

    Burr, Risa; Stewart, Emerson V; Shao, Wei

    2016-01-01

    Sterol regulatory element-binding protein (SREBP) transcription factors regulate lipid homeostasis. In mammals, SREBP-2 controls cholesterol biosynthesis, whereas SREBP-1 controls triacylglycerol and glycerophospholipid biosynthesis. In the fission yeast Schizosaccharomyces pombe, the SREBP-2 homolog Sre1 regulates sterol homeostasis. ... In the absence of mga2, fission yeast exhibited growth defects under both normoxia and low-oxygen conditions. Mga2 transcriptional targets were enriched for lipid metabolism genes, and mga2Δ cells showed disrupted triacylglycerol and glycerophospholipid homeostasis, most notably with an increase in fatty acid ...

  14. A practical MGA-ARIMA model for forecasting real-time dynamic rain-induced attenuation

    Gong, Shuhong; Gao, Yifeng; Shi, Houbao; Zhao, Ge

    2013-05-01

    A novel and practical modified genetic algorithm (MGA)-autoregressive integrated moving average (ARIMA) model for forecasting real-time dynamic rain-induced attenuation has been established by combining genetic algorithm ideas with the ARIMA model. It is shown that, owing to the introduction of the MGA into the ARIMA(1,1,7) model, the MGA-ARIMA model has the potential to be conveniently applied in any country or area by creating a parameter database for the ARIMA(1,1,7) model. The parameter database is given in this paper based on attenuation data measured in Xi'an, China. Methods to create the parameter databases in other countries or areas are offered as well. Based on the experimental results, the MGA-ARIMA model has proved practical for forecasting dynamic rain-induced attenuation in real time. The novel model given in this paper is significant for developing adaptive fade mitigation technologies at millimeter-wave bands.
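    The forecasting step can be illustrated in miniature. The paper's model is ARIMA(1,1,7) with GA-tuned parameters; the sketch below strips it to a single AR term on the first-differenced series (no MA terms, synthetic data, made-up coefficient) just to show how a differenced model produces a one-step-ahead forecast:

```python
# One-step forecast in the style of a differenced (I=1) model:
# model the change y[t] - y[t-1] rather than the level, then add the
# predicted change back onto the last observation.
def forecast_next(series, phi):
    """ARIMA(1,1,0)-style step: diff_hat = phi * (y[t] - y[t-1]);
       y_hat = y[t] + diff_hat. (phi is an illustrative AR coefficient.)"""
    d = series[-1] - series[-2]
    return series[-1] + phi * d

atten_db = [2.0, 2.4, 2.9, 3.5]    # synthetic rain-attenuation samples (dB)
y_hat = forecast_next(atten_db, phi=0.8)
```

    The full ARIMA(1,1,7) adds seven moving-average terms on past forecast errors; the MGA's role in the paper is tuning those coefficients per site, stored in the parameter database.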

  15. PC/FRAM, Version 3.2 User Manual

    Kelley, T.A.; Sampson, T.E.

    1999-01-01

    This manual describes the use of version 3.2 of the PC/FRAM plutonium isotopic analysis software developed in the Safeguards Science and Technology Group, NE-5, Nonproliferation and International Security Division, Los Alamos National Laboratory. The software analyzes the gamma-ray spectrum from plutonium-bearing items and determines the isotopic distribution of the plutonium, the 241Am content, and the concentration of other isotopes in the item. The software can also determine the isotopic distribution of uranium isotopes in items containing only uranium. The body of this manual describes the generic version of the code. Special facility-specific enhancements, if they apply, are described in the appendices. The information in this manual applies equally well to version 3.3, which has been licensed to ORTEC. The software can analyze data that is stored in a file on disk. It understands several storage formats, including Canberra's S100 format, ORTEC's 'chn' and 'SPC' formats, and several ASCII text formats. The software can also control data acquisition using an MCA and then store the results in a file on disk for later analysis, or analyze the spectrum directly after the acquisition. The software currently supports only the control of ORTEC MCBs. Support for Canberra's Genie-2000 Spectroscopy Systems will be added in the future. Support for reading and writing CAM files will also be forthcoming. A versatile parameter file database structure governs all facets of the data analysis. User editing of the parameter sets allows great flexibility in handling data with different isotopic distributions, interfering isotopes, and different acquisition parameters such as energy calibration and detector type. This manual is intended for the system supervisor or the local user who is to be the resident expert. Excerpts from this manual may also be appropriate for the system operator who will routinely use the instrument.

  16. WebMGA: a customizable web server for fast metagenomic sequence analysis.

    Wu, Sitao; Zhu, Zhengwei; Fu, Liming; Niu, Beifang; Li, Weizhong

    2011-09-07

    The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to the huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; metagenomic annotation also involves a wide range of computational tools, which are difficult for typical users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirements, long waiting times, and the inability to configure pipelines. We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools for tasks such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, and functional annotation. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analyses or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. WebMGA offers researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  17. Coordinate Regulation of Yeast Sterol Regulatory Element-binding Protein (SREBP) and Mga2 Transcription Factors.

    Burr, Risa; Stewart, Emerson V; Espenshade, Peter J

    2017-03-31

    The Mga2 and Sre1 transcription factors regulate oxygen-responsive lipid homeostasis in the fission yeast Schizosaccharomyces pombe in a manner analogous to the mammalian sterol regulatory element-binding protein (SREBP)-1 and SREBP-2 transcription factors. Mga2 and SREBP-1 regulate triacylglycerol and glycerophospholipid synthesis, whereas Sre1 and SREBP-2 regulate sterol synthesis. In mammals, a shared activation mechanism allows for coordinate regulation of SREBP-1 and SREBP-2. In contrast, distinct pathways activate fission yeast Mga2 and Sre1. Therefore, it is unclear whether and how these two related pathways are coordinated to maintain lipid balance in fission yeast. Previously, we showed that Sre1 cleavage is defective in the absence of mga2. Here, we report that this defect is due to deficient unsaturated fatty acid synthesis, resulting in aberrant membrane transport. This defect is recapitulated by treatment with the fatty acid synthase inhibitor cerulenin and is rescued by addition of exogenous unsaturated fatty acids. Furthermore, sterol synthesis inhibition blocks Mga2 pathway activation. Together, these data demonstrate that Sre1 and Mga2 are each regulated by the lipid product of the other transcription factor pathway, providing a source of coordination for these two branches of lipid synthesis. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  18. WebMGA: a customizable web server for fast metagenomic sequence analysis

    Niu Beifang

    2011-09-01

    Full Text Available Abstract Background The new field of metagenomics studies microorganism communities by culture-independent sequencing. With the advances in next-generation sequencing techniques, researchers are facing tremendous challenges in metagenomic data analysis due to the huge quantity and high complexity of sequence data. Analyzing large datasets is extremely time-consuming; metagenomic annotation also involves a wide range of computational tools, which are difficult for typical users to install and maintain. The tools provided by the few available web servers are also limited and have various constraints such as login requirements, long waiting times, and the inability to configure pipelines. Results We developed WebMGA, a customizable web server for fast metagenomic analysis. WebMGA includes over 20 commonly used tools for tasks such as ORF calling, sequence clustering, quality control of raw reads, removal of sequencing artifacts and contaminations, taxonomic analysis, and functional annotation. WebMGA provides users with rapid metagenomic data analysis using fast and effective tools, which have been implemented to run in parallel on our local computer cluster. Users can access WebMGA through web browsers or programming scripts to perform individual analyses or to configure and run customized pipelines. WebMGA is freely available at http://weizhongli-lab.org/metagenomic-analysis. Conclusions WebMGA offers researchers many fast and unique tools and great flexibility for complex metagenomic data analysis.

  19. FRAM (FRontiers in Arctic marine Monitoring: The FRAM Ocean Observing System) planned efforts for integrated water column biogeochemistry

    Nielsdóttir, Maria; Salter, Ian; Kanzow, Torsten; Boetius, Antje

    2015-04-01

    The Arctic is a region undergoing rapid environmental change and will be subject to multiple stressors in the coming decades. Reductions in sea-ice concentration, warming, increased terrigenous inputs, and Atlantification are all expected to exert a significant impact on the structure and function of Arctic ecosystems. The Fram Strait is a particularly important region because it acts as a gateway in the exchange of Atlantic and Arctic water masses. The logistical constraints of conducting year-round biogeochemical measurements in such areas impose a significant limitation on our understanding of these complicated ecosystems. To address these important challenges, the German Ministry of Research has funded a multi-million-euro infrastructure project (FRAM). Over the next five years FRAM will develop a remote-access and autonomous sampling infrastructure to improve the temporal and spatial resolution of biogeochemical measurements in the Fram Strait and central Arctic. Here we present a summary of sampling strategies, technological innovations and biogeochemical parameters that will be addressed over the duration of the project. Specific emphasis will be placed on platforms for monitoring nutrient dynamics, carbonate chemistry, organic carbon flux and the development of a sustained microbial observatory.

  20. MGA trajectory planning with an ACO-inspired algorithm

    Ceriotti, Matteo; Vasile, Massimiliano

    2010-11-01

    Given a set of celestial bodies, the problem of finding an optimal sequence of swing-bys, deep space manoeuvres (DSM) and transfer arcs connecting the elements of the set is combinatorial in nature. The number of possible paths grows exponentially with the number of celestial bodies. Therefore, the design of an optimal multiple gravity assist (MGA) trajectory is an NP-hard mixed combinatorial-continuous problem. Its automated solution would greatly improve the design of future space missions, allowing the assessment of a large number of alternative mission options in a short time. This work proposes to formulate the complete automated design of a multiple gravity assist trajectory as an autonomous planning and scheduling problem. The resulting scheduled plan will provide the optimal planetary sequence and a good estimation of the set of associated optimal trajectories. The trajectory model consists of a sequence of celestial bodies connected by two-dimensional transfer arcs containing one DSM. For each transfer arc, the positions of the planet and the spacecraft at the time of arrival are matched by varying the pericentre of the preceding swing-by, or the magnitude of the launch excess velocity for the first arc. For each departure date, this model generates a full tree of possible transfers from the departure to the destination planet. Each leaf of the tree represents a planetary encounter and a possible way to reach that planet. An algorithm inspired by ant colony optimization (ACO) is devised to explore the space of possible plans. The ants explore the tree from departure to destination, adding one node at a time: every time an ant is at a node, a probability function is used to select a feasible direction. This approach to automatic trajectory planning is applied to the design of optimal transfers to Saturn and among the Galilean moons of Jupiter. Solutions are compared to those found through more traditional genetic-algorithm techniques.
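    The ant's branch choice described above is, at its core, a weighted roulette selection over the feasible children of a tree node, with weights built from accumulated pheromone. A toy sketch with invented planet labels and pheromone values (the real algorithm also folds heuristic trajectory-cost information into the probability):

```python
import random

# Pick a child node with probability proportional to its pheromone level.
def choose_branch(children, pheromone, rng):
    weights = [pheromone[c] for c in children]
    total = sum(weights)
    r = rng.random() * total          # point on the roulette wheel
    acc = 0.0
    for c, w in zip(children, weights):
        acc += w
        if r <= acc:
            return c
    return children[-1]               # guard against float round-off

rng = random.Random(42)
pheromone = {"Venus": 3.0, "Earth": 1.0, "Mars": 1.0}   # illustrative values
picks = [choose_branch(["Venus", "Earth", "Mars"], pheromone, rng)
         for _ in range(1000)]
venus_share = picks.count("Venus") / 1000   # expected near 3/5
```

    After ants complete their paths, pheromone on good branches is reinforced and elsewhere evaporated, which is what steers later ants toward promising planetary sequences.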

  1. TRISTEN/FRAM II Cruise Report, East Arctic, April 1980.

    1981-04-13

    … is not readily accessible by air from Alaska. The Eurasia Basin contains the Arctic Midoceanic Ridge, which extends in a straight line for 2000 km … The northeast leg of the array was oriented 341°T and the NW leg 304°T. After a windstorm and floe break-up on 16 April, hydrophones 11 and 12 and 21-24 were …
    [Figure titles recovered from the fragment: "Bottom Refraction Shot-Lines Overlain on FRAM II Positions"; "Waterfall Display of Successive Spectral Estimates of Single …"]

  2. Recirculation in the Fram Strait and transports of water in and north of the Fram Strait derived from CTD data

    M. Marnela

    2013-05-01

    Full Text Available The volume, heat and freshwater transports in the Fram Strait are estimated from geostrophic computations based on summer hydrographic data from 1984, 1997, 2002 and 2004. In these years, in addition to the usually sampled section along 79° N, a section between Greenland and Svalbard was sampled further north. Quasi-closed boxes bounded by the two sections and Greenland and Svalbard can then be formed. Applying conservation constraints on these boxes provides barotropic reference velocities. The net volume flux is southward and varies between 2 and 4 Sv. The recirculation of Atlantic water is about 2 Sv. Heat is lost to the atmosphere, and the heat loss from the area between the sections averaged over the four years is about 10 TW. The net heat (temperature) transport is 20 TW northward into the Arctic Ocean, with large interannual differences. The mean net freshwater added between the sections is 40 mSv and the mean freshwater transport southward across 79° N is less than 60 mSv, indicating that most of the liquid freshwater leaving the Arctic Ocean through Fram Strait in summer is derived from sea ice melt in the northern vicinity of the strait. In 1997, 2001 and 2003 meridional sections along 0° longitude were sampled; in 2003 two smaller boxes could be formed, and the recirculation of Atlantic water in the strait was estimated by geostrophic computations and continuity constraints. The recirculation is weaker close to 80° N than close to 78° N, indicating that the recirculation is mainly confined to the south of 80° N. This is supported by the observations in 1997 and 2001, when only the northern part of the meridional section, from 79° N to 80° N, could be computed with the constraints applied. The recirculation is found strongest close to 79° N.

  3. Evaluation of Data Retention Characteristics for Ferroelectric Random Access Memories (FRAMs)

    Sharma, Ashok K.; Teverovsky, Alexander

    2001-01-01

    Data retention and fatigue characteristics of 64 Kb lead zirconate titanate (PZT)-based Ferroelectric Random Access Memory (FRAM) microcircuits manufactured by Ramtron were examined over the temperature range from -85 C to +310 C for ceramic-packaged parts and from -85 C to +175 C for plastic parts, during retention periods of up to several thousand hours. Intrinsic failures, which were caused by thermal degradation of the ferroelectric cells, occurred in ceramic parts after tens or hundreds of hours of aging at temperatures above 200 C. The activation energy of the retention test failures was 1.05 eV, and the extrapolated mean-time-to-failure (MTTF) at room temperature was estimated to be more than 280 years. Multiple write-read cycling (up to 3×10^7 cycles) during the fatigue testing of plastic and ceramic parts did not result in any parametric or functional failures. However, operational currents decreased linearly with the logarithm of the number of cycles, indicating a fatigue process in the PZT films. Plastic parts, which had a more recent date code than the ceramic parts, appeared to use die with improved process technology and showed significantly smaller changes in operational currents and data access times.
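    The quoted MTTF follows from a standard Arrhenius temperature extrapolation. Taking the abstract's activation energy of 1.05 eV and a rough 100 h time-to-failure at 200 C (an illustrative stand-in for "tens or hundreds of hours"), the room-temperature estimate indeed lands far above 280 years:

```python
import math

# Arrhenius acceleration factor between a stress and a use temperature:
# AF = exp( Ea/k * (1/T_use - 1/T_stress) ), temperatures in kelvin.
K_BOLTZ_EV = 8.617e-5            # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_stress_c, t_use_c):
    ts = t_stress_c + 273.15
    tu = t_use_c + 273.15
    return math.exp(ea_ev / K_BOLTZ_EV * (1.0 / tu - 1.0 / ts))

af = acceleration_factor(1.05, 200.0, 25.0)     # ~10^6-fold acceleration
mttf_years_at_25c = 100.0 * af / (24 * 365)     # 100 h at stress -> years at 25 C
```

    With these inputs the extrapolation gives tens of thousands of years, so the abstract's ">280 years" reads as a conservative bound rather than a best estimate.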

  4. Mga Lente sa Likod ng Lente: Isang Panimulang Pag-aaral ng Ilang Litratong Kuha ni Xander Angeles

    Moreal Nagarit Camba

    2011-12-01

    Full Text Available The name Xander Angeles is known as one of the most in-demand names in the world of fashion and advertising in the Philippines. Attached to this name are assorted local and international awards, along with an advertising company, a modelling agency, and a fashion and photography school.

  5. Conceptual compression discussion on a multi-linear (FTA) and systematic (FRAM) method in an offshore operation's accident modeling.

    Toroody, Ahmad Bahoo; Abaei, Mohammad Mahdy; Gholamnia, Reza

    2016-12-01

    Risk assessment can be classified into two broad categories: traditional and modern. This paper is aimed at contrasting the functional resonance analysis method (FRAM), as a modern approach, with fault tree analysis (FTA), as a traditional method, for assessing the risks of a complex system. The methodology by which the risk assessment is carried out is presented for each approach. The FRAM network also accounts for nonlinear interactions at the human and organizational levels to assess the safety of technological systems. The methodology is applied to the lifting of structures deep offshore. The main finding of this paper is that the combined application of FTA and FRAM during risk assessment could provide complementary perspectives and may contribute to a more comprehensive understanding of an incident. Finally, it is shown that coupling a FRAM network with a suitable quantitative method will result in a plausible outcome for a predefined accident scenario.
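    The FTA half of the comparison reduces, for independent basic events, to AND/OR gate algebra over failure probabilities. A toy tree with invented numbers loosely themed on a lifting operation (not the paper's actual fault tree):

```python
# Gate algebra for a fault tree with independent basic events.
def p_and(*ps):
    """AND gate: all inputs must fail."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: any input failing suffices; P = 1 - prod(1 - p_i)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical top event: (crane fault OR rigging fault) AND operator error.
p_top = p_and(p_or(0.01, 0.02), 0.1)
```

    FRAM, by contrast, has no such closed-form combination rule: it models variability propagating between coupled functions, which is why the paper treats the two methods as complementary rather than interchangeable.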

  6. Contrasting optical properties of surface waters across the Fram Strait and its potential biological implications

    Pavlov, Alexey K.; Granskog, Mats A.; Stedmon, Colin A.

    2015-01-01

    Underwater light regime is controlled by the distribution and optical properties of colored dissolved organic matter (CDOM) and particulate matter. The Fram Strait is a region where two contrasting water masses are found: Polar water in the East Greenland Current (EGC) and Atlantic water in the West Spitsbergen Current (WSC) differ with regard to temperature, salinity and optical properties. We present data on the absorption properties of CDOM and particles across the Fram Strait (along 79° N), comparing Polar and Atlantic surface waters in September 2009 and 2010. CDOM absorption of Polar water in the EGC … radiation (PAR, 400-700 nm), but does result in notable differences in ultraviolet (UV) light penetration, with higher attenuation in the EGC. Future changes in the Arctic Ocean system will likely affect the EGC through diminishing sea-ice cover and potentially increasing CDOM export due to an increase in river …

  7. Evaluation of Data Retention and Imprint Characteristics of FRAMs Under Environmental Stresses for NASA Applications

    Sharma, Asbok K.; Teverovsky, Alexander; Dowdy, Terry W.; Hamilton, Brett

    2002-01-01

    A major reliability issue for all advanced nonvolatile memory (NVM) technology devices, including FRAMs, is their data retention characteristics over extended periods of time, under environmental stresses, and under exposure to total ionizing dose (TID) radiation. For this testing, 256 Kb FRAMs in 28-pin plastic DIPs, rated for the industrial-grade temperature range of -40 C to +85 C, were procured. These are two-transistor, two-capacitor (2T-2C) design FRAMs. In addition to data retention characteristics, the parts were also evaluated for imprint failures, defined as the failure of cells to change from a "preferred" state, where they have been for a significant period of time, to the opposite state (e.g., from 1 to 0, or 0 to 1). These 256 Kb FRAMs were subjected to scanning acoustic microscopy (C-SAM); 1,000 temperature cycles from -65 C to +150 C; high-temperature aging at 150 C, 175 C, and 200 C for 1,000 hours; a highly accelerated stress test (HAST) for 500 hours; a 1,000-hour operational life test at 125 C; and total ionizing dose radiation testing. As preconditioning, 10 K read/write cycles were performed on all devices. Interim electrical measurements were performed throughout this characterization, including special imprint testing and final electrical testing. Some failures were observed during the high-temperature aging test at 200 C, during HAST testing, and during the 1,000-hour operational life test at 125 C. The parts passed 10 krad exposure, but began showing power supply current increases during the dose increment from 10 krad to 30 krad, and at 40 krad severe data retention and parametric failures were observed. Failures from the various environmental test groups are currently being analyzed.

  8. Delivery and installation of PC/FRAM at the PNC Tokai Works

    Sampson, T.E.; Kelley, T.A.; Kroncke, K.E.; Menlove, H.O.; Baca, J.; Asano, Takashi; Terakado, Shigeru; Goto, Yasushi; Kogawa, Noboru

    1997-11-01

    The authors report on the assembly, testing, delivery, installation, and initial testing of three PC/FRAM plutonium isotopic analysis systems at the Power Reactor and Nuclear Fuel Development Corporation's Tokai Works. These systems are intended to measure the plutonium isotopic composition and the 235U/plutonium ratio of mixed oxide (MOX) waste in 200-L waste drums, and provide the capability for performing measurements on lead-lined drums.

  9. Findings from the 2012 EBRI/MGA Consumer Engagement in Health Care Survey.

    Fronstin, Paul

    2012-12-01

    The 2012 EBRI/MGA Consumer Engagement in Health Care Survey finds continued slow growth in consumer-driven health plans: 10 percent of the population was enrolled in a CDHP, up from 7 percent in 2011. Enrollment in HDHPs remained at 16 percent. Overall, 18.6 million adults ages 21-64 with private insurance, representing 15.4 percent of that market, were either in a CDHP or were in an HDHP that was eligible for an HSA. When their children were counted, about 25 million individuals with private insurance, representing about 14.6 percent of the market, were either in a CDHP or an HSA-eligible plan. This study finds evidence that adults in a CDHP and those in an HDHP were more likely than those in a traditional plan to exhibit a number of cost-conscious behaviors. While CDHP enrollees, HDHP enrollees, and traditional-plan enrollees were about equally likely to report that they made use of quality information provided by their health plan, CDHP enrollees were more likely to use cost information and to try to find information about their doctors' costs and quality from sources other than the health plan. CDHP enrollees were more likely than traditional-plan enrollees to take advantage of various wellness programs, such as health-risk assessments, health-promotion programs, and biometric screenings. In addition, financial incentives mattered more to CDHP enrollees than to traditional-plan enrollees. It is clear that the underlying characteristics of the populations enrolled in these plans are different: Adults in a CDHP were significantly more likely to report being in excellent or very good health. Adults in a CDHP and those in a HDHP were significantly less likely to smoke than were adults in a traditional plan, and they were significantly more likely to exercise. CDHP and HDHP enrollees were also more likely than traditional-plan enrollees to be highly educated. 
As the CDHP and HDHP markets continue to expand and more enrollees are enrolled for longer periods of time …

  10. Impact of recirculation on the East Greenland Current in Fram Strait: Results from moored current meter measurements between 1997 and 2009

    de Steur, L.; Hansen, E.; Mauritzen, C.; Beszczynska-Möller, A.; Fahrbach, E.

    2014-01-01

    Transports of total volume and water masses obtained from a mooring array in the East Greenland Current (EGC) in Fram Strait are presented for the period 1997–2009. The array in the EGC was moved along isobaths from 79°N to 78°50'N in 2002 to line up with moorings in the eastern Fram Strait.

  11. FRAM-2012: Norwegians return to the High Arctic with a Hovercraft for Marine Geophysical Research

    Hall, J. K.; Kristoffersen, Y.; Brekke, H.; Hope, G.

    2012-12-01

    After four years of testing methods, craft reliability, and innovative equipment, the R/H SABVABAA has embarked on its first FRAM-201x expedition to the highest Arctic. Named after the Inupiaq word for 'flows swiftly over it', the 12 m by 6 m hovercraft has been home-based in Longyearbyen, Svalbard since June 2008. In this, its fifth summer of work on the ice pack north of 81N, the craft is supported by the Norwegian Petroleum Directorate (NPD) via the Nansen Environmental and Remote Sensing Center (NERSC) in Bergen, and the Norwegian Scientific Academy for Polar Research. FRAM-2012 represents renewed Norwegian interest in returning to the highest Arctic some 116 years after the 1893-96 drift of Fridtjof Nansen's ship FRAM, the first serious scientific investigation of the Arctic. When replenished by air or icebreaker, the hovercraft SABVABAA offers a hospitable scientific platform with a crew of two, capable of marine geophysical, geological and oceanographic observations over long periods with relative mobility on the ice pack. FRAM-2012 is the first step towards this goal, accompanying the Swedish icebreaker ODEN to the Lomonosov Ridge, north of Greenland, as part of the LOMROG III expedition. The science plan called for an initial drive from the ice edge to Gakkel Ridge at 85N, where micro-earthquakes would be monitored, and then to continue north to a geological sampling area on the Lomonosov Ridge at about 88N, 65W. The micro-earthquake monitoring is part of Gaute Hope's MSc thesis and entails five hydrophones in a WiFi-connected hydrophone array deployed over the Gakkel Rift Valley, drifting with the ice at up to 0.4 knots. On August 3 the hovercraft was refueled from icebreaker ODEN at 84-21'N and both vessels proceeded north. The progress of the hovercraft was hampered by insufficient visibility for safe driving and by time-consuming maneuvering in and around larger fields of rubble ice impassable by the hovercraft but of little concern to the icebreaker. It …

  12. Impacts of Changed Extratropical Storm Tracks on Arctic Sea Ice Export through Fram Strait

    Wei, J.; Zhang, X.; Wang, Z.

    2017-12-01

    Studies have indicated a poleward shift of extratropical storm tracks and intensification of Arctic storm activities, in particular on the North Atlantic side of the Arctic Ocean. To improve understanding of dynamic effect on changes in Arctic sea ice mass balance, we examined the impacts of the changed storm tracks and activities on Arctic sea ice export through Fram Strait through ocean-sea ice model simulations. The model employed is the high-resolution Massachusetts Institute of Technology general circulation model (MITgcm), which was forced by the Japanese 25-year Reanalysis (JRA-25) dataset. The results show that storm-induced strong northerly wind stress can cause simultaneous response of daily sea ice export and, in turn, exert cumulative effects on interannual variability and long-term changes of sea ice export. Further analysis indicates that storm impact on sea ice export is spatially dependent. The storms occurring southeast of Fram Strait exhibit the largest impacts. The weakened intensity of winter storms in this region after 1994/95 could be responsible for the decrease of total winter sea ice export during the same time period.

  13. Atlantic water heat transfer through the Arctic Gateway (Fram Strait) during the Last Interglacial

    Zhuravleva, Anastasia; Bauch, Henning A.; Spielhagen, Robert F.

    2017-10-01

    The Last Interglacial in the Arctic region is often described as a time with warmer conditions and significantly less summer sea ice than today. The role of Atlantic water (AW) as the main oceanic heat flux agent into the Arctic Ocean remains, however, unclear. Using high-resolution stable isotope and faunal records from the only deep Arctic Gateway, the Fram Strait, we note for the upper water column a diminished influence of AW and generally colder-than-Holocene surface ocean conditions. After the main Saalian deglaciation had terminated, a first intensification of northward-advected AW occurred (~124 ka). However, an intermittent sea surface cooling, triggered by meltwater release at 122 ka, caused a regional delay in the further development towards peak interglacial conditions. Maximum AW heat advection occurred during late MIS 5e (118.5-116 ka) and interrupted a longer-term cooling trend at the sea surface that had started from about 120 ka. Such a late occurrence of the major AW-derived near-surface warming in the Fram Strait (in stark contrast to an early warm peak in the Holocene) compares well in time with upstream records from the Norwegian Sea, altogether implying a coherent development of south-to-north ocean heat transfer through the eastern Nordic Seas and into the high Arctic during the Last Interglacial.

  14. Impacts of extratropical storm tracks on Arctic sea ice export through Fram Strait

    Wei, Jianfen; Zhang, Xiangdong; Wang, Zhaomin

    2018-05-01

    Studies have indicated regime shifts in atmospheric circulation, and associated changes in extratropical storm tracks and Arctic storm activity, in particular on the North Atlantic side of the Arctic Ocean. To improve understanding of changes in Arctic sea ice mass balance, we examined the impacts of the changed storm tracks and cyclone activity on Arctic sea ice export through Fram Strait by using a high resolution global ocean-sea ice model, MITgcm-ECCO2. The model was forced by the Japanese 25-year Reanalysis (JRA-25) dataset. The results show that storm-induced strong northerly wind stress can cause simultaneous response of daily sea ice export and, in turn, exert cumulative effects on interannual variability and long-term changes of sea ice export. Further analysis indicates that storm impact on sea ice export is spatially dependent. The storms occurring southeast of Fram Strait exhibit the largest impacts. The weakened intensity of winter (in this study winter is defined as October-March and summer as April-September) storms in this region after 1994/95 could be responsible for the decrease of total winter sea ice export during the same time period.

  15. Findings from the 2009 EBRI/MGA Consumer Engagement in Health Care Survey.

    Fronstin, Paul

    2009-12-01

    FIFTH ANNUAL SURVEY: This Issue Brief presents findings from the 2009 EBRI/MGA Consumer Engagement in Health Care Survey, which provides nationally representative data regarding the growth of consumer-driven health plans (CDHPs) and high-deductible health plans (HDHPs), and the impact of these plans and consumer engagement more generally on the behavior and attitudes of adults with private health insurance coverage. Findings from this survey are compared with four earlier annual surveys. ENROLLMENT LOW BUT GROWING: In 2009, 4 percent of the population was enrolled in a CDHP, up from 3 percent in 2008. Enrollment in HDHPs increased from 11 percent in 2008 to 13 percent in 2009. The 4 percent of the population with a CDHP represents 5 million adults ages 21-64 with private insurance, while the 13 percent with a HDHP represents 16.2 million people. Among the 16.2 million individuals with an HDHP, 38 percent (or 6.2 million) reported that they were eligible for a health savings account (HSA) but did not have such an account. Overall, 11.2 million adults ages 21-64 with private insurance, representing 8.9 percent of that market, were either in a CDHP or were in an HDHP that was eligible for an HSA, but had not opened the account. MORE COST-CONSCIOUS BEHAVIOR: Individuals in CDHPs were more likely than those with traditional coverage to exhibit a number of cost-conscious behaviors. They were more likely to say that they had checked whether the plan would cover care; asked for a generic drug instead of a brand name; talked to their doctor about prescription drug options, other treatments, and costs; asked their doctor to recommend a less costly prescription drug; developed a budget to manage health care expenses; checked prices before getting care; and used an online cost-tracking tool. CDHP ENROLLEES MORE ENGAGED IN WELLNESS PROGRAMS: CDHP enrollees were more likely than traditional plan enrollees to report that they had the opportunity to fill out a health risk assessment …

  16. Findings from the 2011 EBRI/MGA Consumer Engagement in Health Care Survey.

    Fronstin, Paul

    2011-12-01

    SEVENTH ANNUAL SURVEY: This Issue Brief presents findings from the 2011 EBRI/MGA Consumer Engagement in Health Care Survey. This study is based on an online survey of 4,703 privately insured adults ages 21-64 to provide nationally representative data regarding the growth of consumer-driven health plans (CDHPs) and high-deductible health plans (HDHPs), and the impact of these plans and consumer engagement more generally on the behavior and attitudes of adults with private health insurance coverage. Findings from this survey are compared with EBRI's findings from earlier surveys. ENROLLMENT CONTINUES TO GROW: The survey finds continued growth in consumer-driven health plans: In 2011, 7 percent of the population was enrolled in a CDHP, up from 5 percent in 2010. Enrollment in HDHPs increased from 14 percent in 2010 to 16 percent in 2011. The 7 percent of the population with a CDHP represents 8.4 million adults ages 21-64 with private insurance, while the 16 percent with a HDHP represents 19.3 million people. Among the 19.3 million individuals with an HDHP, 38 percent (or 7.3 million) reported that they were eligible for a health savings account (HSA) but did not have such an account. Overall, 15.8 million adults ages 21-64 with private insurance, representing 13.1 percent of that market, were either in a CDHP or were in an HDHP that was eligible for an HSA but had not opened the account. When their children are counted, about 21 million individuals with private insurance, representing about 12 percent of the market, were either in a CDHP or an HSA-eligible plan. MORE COST-CONSCIOUS BEHAVIOR: Individuals in CDHPs were more likely than those with traditional coverage to exhibit a number of cost-conscious behaviors. They were more likely to say that they had checked whether their plan would cover care; asked for a generic drug instead of a brand name; talked to their doctor about treatment options and costs; talked to their doctor about prescription drug options and costs …

  17. Splitting of Atlantic water transport towards the Arctic Ocean into the Fram Strait and Barents Sea Branches - mechanisms and consequences

    Beszczynska-Möller, Agnieszka; Skagseth, Øystein; von Appen, Wilken-Jon; Walczowski, Waldemar; Lien, Vidar

    2016-04-01

    The heat content in the Arctic Ocean is to a large extent determined by oceanic advection from the south. During the last two decades an extraordinarily warm Atlantic water (AW) inflow has been reported to progress through the Nordic Seas into the Arctic Ocean. Warm anomalies can result from higher air temperatures (smaller heat loss) in the Nordic Seas and/or from increased oceanic advection, but the ultimate fate of warm anomalies of Atlantic origin depends strongly on their two possible pathways towards the Arctic Ocean. The AW temperature changes from 7-10°C at the entrance to the Nordic Seas, to 6-6.5°C in the Barents Sea opening, and to 3-3.5°C as the AW leaving Fram Strait enters the Arctic Ocean. When AW passes through the shallow Barents Sea, nearly all its heat is lost to atmospheric cooling and the AW loses its signature. In the deep Fram Strait the upper part of the Atlantic water is transformed into a less saline and colder surface layer, and thus the AW preserves its warm core. Significant warming and high variability of AW volume transport were observed in the two recent decades in the West Spitsbergen Current, representing the Fram Strait Branch of the Atlantic inflow. The AW inflow through Fram Strait carries between 26 and 50 TW of heat into the Arctic Ocean. While the oceanic heat influx to the Barents Sea is of a similar order, the heat leaving it through the northern exit into the Arctic Ocean is negligible. The relative strength of the two Atlantic water branches through Fram Strait and the Barents Sea governs the oceanic heat transport into the Arctic Ocean. According to a recently proposed mechanism, the Atlantic water flow in the Barents Sea Branch is controlled by the strength of the atmospheric low over the northern Barents Sea, acting through a wind-induced Ekman divergence, which intensifies the eastward AW flow. The Atlantic water transport in the Fram Strait Branch is mainly forced by the large-scale low-pressure system over the eastern Norwegian and …

  18. An Application of the Functional Resonance Analysis Method (FRAM) to Risk Assessment of Organisational Change

    Hollnagel, Erik [MINES ParisTech Crisis and Risk Research Centre (CRC), Sophia Antipolis Cedex (France)

    2012-11-15

    The objective of this study was to demonstrate an alternative approach to risk assessment of organisational changes, based on the principles of resilience engineering. The approach in question was the Functional Resonance Analysis Method (FRAM). Whereas established approaches focus on risks coming from failure or malfunctioning of components, alone or in combination, resilience engineering focuses on the common functions and processes that provide the basis for both successes and failures. Resilience engineering more precisely proposes that failures represent the flip side of the adaptations necessary to cope with real-world complexity, rather than a failure of normal system functions, and that a safety assessment should therefore focus on how functions are carried out rather than on how they may fail. The objective of this study was not to evaluate the current approach to risk assessment used by the organisation in question; the current approach has nevertheless been used as a frame of reference, in a non-evaluative manner. The author has demonstrated through the selected case that FRAM can be used as an alternative approach to risk assessment of organisational changes. The report provides the reader with details to consider when deciding which analysis approach to use. That choice must reflect the priorities and concerns of the organisation, and the author makes no statement about which approach is better. The choice of an analysis approach is not simple to make, and there are many things to take into account, such as the larger working environment, organisational culture, regulatory requirements, etc.

  19. An Application of the Functional Resonance Analysis Method (FRAM) to Risk Assessment of Organisational Change

    Hollnagel, Erik

    2012-11-01

    The objective of this study was to demonstrate an alternative approach to risk assessment of organisational changes, based on the principles of resilience engineering. The approach in question was the Functional Resonance Analysis Method (FRAM). Whereas established approaches focus on risks coming from failure or malfunctioning of components, alone or in combination, resilience engineering focuses on the common functions and processes that provide the basis for both successes and failures. Resilience engineering more precisely proposes that failures represent the flip side of the adaptations necessary to cope with real-world complexity, rather than a failure of normal system functions, and that a safety assessment should therefore focus on how functions are carried out rather than on how they may fail. The objective of this study was not to evaluate the current approach to risk assessment used by the organisation in question; the current approach has nevertheless been used as a frame of reference, in a non-evaluative manner. The author has demonstrated through the selected case that FRAM can be used as an alternative approach to risk assessment of organisational changes. The report provides the reader with details to consider when deciding which analysis approach to use. That choice must reflect the priorities and concerns of the organisation, and the author makes no statement about which approach is better. The choice of an analysis approach is not simple to make, and there are many things to take into account, such as the larger working environment, organisational culture, regulatory requirements, etc.

  20. Results from On-Orbit Testing of the Fram Memory Test Experiment on the Fastsat Micro-Satellite

    MacLeod, Todd C.; Sims, W. Herb; Varnavas, Kosta A.; Ho, Fat D.

    2011-01-01

    NASA is planning on going beyond Low Earth orbit with manned exploration missions. The radiation environment for most Low Earth orbit missions is harsher than at the Earth's surface but much less harsh than deep space. Development of new electronics is needed to meet the requirements of high performance, radiation tolerance, and reliability. The need for both Volatile and Non-volatile memory has been identified. Emerging Non-volatile memory technologies (FRAM, C-RAM,M-RAM, R-RAM, Radiation Tolerant FLASH, SONOS, etc.) need to be investigated for use in Space missions. An opportunity arose to fly a small memory experiment on a high inclination satellite (FASTSAT). An off-the-shelf 512K Ramtron FRAM was chosen to be tested in the experiment.

  1. Introduction to the use of FRAM on the effectiveness assessment of a radiopharmaceutical dispatches process

    Pereira, Ana G.A.A., E-mail: agaap@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    This article aims to provide an introduction to the use of the Functional Resonance Analysis Method (FRAM) for the effectiveness assessment of a specific radiopharmaceutical dispatching process. The main purpose was to provide a didactic view of the method's application for further in-depth analysis. The investigation also provided a relevant body of knowledge about radiopharmaceutical dispatching processes. This work uses the term 'effectiveness assessment' instead of 'risk assessment' due to the broader meaning the former provides. The radiopharmaceutical dispatching process is the final task of a dynamic system designed to serve several medical facilities. It comprises functions involving mostly human activities, such as checking and packaging the product and measuring the radiopharmaceutical's nuclear activity. Although the dispatching process has well-known steps for its completion, the human factor is the fundamental mechanism of work and control, and is susceptible to irregular and unstable performance. As for any socio-technical system, the risk assessment provided by FRAM may be important for safety and quality improvements, all the more so considering the nuclear nature of the product, which makes risk assessment critical and mandatory. A system is safe if it is resistant and resilient to perturbations. Identification and assessment of possible risks is, therefore, an essential prerequisite for system safety. Although this seems obvious, most risk assessments are conducted in relative ignorance of the full behavior of the system. This condition has led to an approach for assessing the risks of intractable systems (i.e., systems that are incompletely described or underspecified), namely Resilience Engineering. Within this area, the Functional Resonance Analysis Method has been developed to provide concepts, terminology, and a set of methods capable of dealing with such systems. The study was conducted following the Functional Resonance Analysis Method.

  2. Introduction to the use of FRAM on the effectiveness assessment of a radiopharmaceutical dispatches process

    Pereira, Ana G.A.A.

    2013-01-01

    This article aims to provide an introduction to the use of the Functional Resonance Analysis Method (FRAM) for the effectiveness assessment of a specific radiopharmaceutical dispatching process. The main purpose was to provide a didactic view of the method's application for further in-depth analysis. The investigation also provided a relevant body of knowledge about radiopharmaceutical dispatching processes. This work uses the term 'effectiveness assessment' instead of 'risk assessment' due to the broader meaning the former provides. The radiopharmaceutical dispatching process is the final task of a dynamic system designed to serve several medical facilities. It comprises functions involving mostly human activities, such as checking and packaging the product and measuring the radiopharmaceutical's nuclear activity. Although the dispatching process has well-known steps for its completion, the human factor is the fundamental mechanism of work and control, and is susceptible to irregular and unstable performance. As for any socio-technical system, the risk assessment provided by FRAM may be important for safety and quality improvements, all the more so considering the nuclear nature of the product, which makes risk assessment critical and mandatory. A system is safe if it is resistant and resilient to perturbations. Identification and assessment of possible risks is, therefore, an essential prerequisite for system safety. Although this seems obvious, most risk assessments are conducted in relative ignorance of the full behavior of the system. This condition has led to an approach for assessing the risks of intractable systems (i.e., systems that are incompletely described or underspecified), namely Resilience Engineering. Within this area, the Functional Resonance Analysis Method has been developed to provide concepts, terminology, and a set of methods capable of dealing with such systems. The study was conducted following the Functional Resonance Analysis Method. At first, the …

  3. Water mass distribution in Fram Strait and over the Yermak Plateau in summer 1997

    B. Rudels

    Full Text Available The water mass distribution in northern Fram Strait and over the Yermak Plateau in summer 1997 is described using CTD data from two cruises in the area. The West Spitsbergen Current was found to split, one part recirculated towards the west, while the other part, on entering the Arctic Ocean, separated into two branches. The main inflow of Atlantic Water followed the Svalbard continental slope eastward, while a second, narrower, branch stayed west and north of the Yermak Plateau. The water column above the southeastern flank of the Yermak Plateau was distinctly colder and less saline than the two inflow branches. Immediately west of the outer inflow branch comparatively high temperatures in the Atlantic Layer suggested that a part of the extraordinarily warm Atlantic Water, observed in the boundary current in the Eurasian Basin in the early 1990s, was now returning, within the Eurasian Basin, toward Fram Strait. The upper layer west of the Yermak Plateau was cold, deep and comparably saline, similar to what has recently been observed in the interior Eurasian Basin. Closer to the Greenland continental slope the salinity of the upper layer became much lower, and the temperature maximum of the Atlantic Layer was occasionally below 0.5 °C, indicating water masses mainly derived from the Canadian Basin. This implies that the warm pulse of Atlantic Water had not yet made a complete circuit around the Arctic Ocean. The Atlantic Water of the West Spitsbergen Current recirculating within the strait did not extend as far towards Greenland as in the 1980s, leaving a broader passage for waters from the Atlantic and intermediate layers, exiting the Arctic Ocean. A possible interpretation is that the circulation pattern alternates between a strong recirculation of the West Spitsbergen Current in the strait, and a larger exchange of Atlantic Water between the Nordic Seas and the inner parts of the Arctic Ocean.

    Key words: Oceanography: general

  4. Biogeographic patterns of bacterial microdiversity in Arctic deep-sea sediments (HAUSGARTEN, Fram Strait).

    Buttigieg, Pier Luigi; Ramette, Alban

    2014-01-01

    Marine bacteria colonizing deep-sea sediments beneath the Arctic Ocean, a rapidly changing ecosystem, have been shown to exhibit significant biogeographic patterns along transects spanning tens of kilometers and across water depths of several thousand meters (Jacob et al., 2013). Jacob et al. (2013) adopted what has become a classical view of microbial diversity - based on operational taxonomic units clustered at the 97% sequence identity level of the 16S rRNA gene - and observed a very large microbial community replacement at the HAUSGARTEN Long Term Ecological Research station (Eastern Fram Strait). Here, we revisited these data using the oligotyping approach and aimed to obtain new insight into ecological and biogeographic patterns associated with bacterial microdiversity in marine sediments. We also assessed the level of concordance of these insights with previously obtained results. Variation in oligotype dispersal range, relative abundance, co-occurrence, and taxonomic identity was related to environmental parameters such as water depth, biomass, and sedimentary pigment concentration. This study assesses ecological implications of the new microdiversity-based technique using a well-characterized dataset of high relevance for global change biology.

  5. Biogeographic patterns of bacterial microdiversity in Arctic deep-sea sediments (Hausgarten, Fram Strait

    Buttigieg, Pier Luigi

    2015-01-01

    Full Text Available Marine bacteria colonising deep-sea sediments beneath the Arctic ocean, a rapidly changing ecosystem, have been shown to exhibit significant biogeographic patterns along transects spanning tens of kilometres and across water depths reaching several thousands of metres (Jacob et al., 2013). Jacob et al. adopted what has become a classical view of microbial diversity based on operational taxonomic units clustered at the 97% sequence identity level of the 16S rRNA gene and observed a very large microbial community replacement at the Hausgarten Long-Term Ecological Research station (Eastern Fram Strait). Here, we revisited these data using the oligotyping approach with the aims of obtaining new insights into ecological and biogeographic patterns associated with bacterial microdiversity in marine sediments and of assessing the level of concordance of these insights with previously obtained results. Variation in oligotype dispersal range, relative abundance, co-occurrence, and taxonomic identity were related to environmental parameters such as water depth, biomass, and sedimentary pigment concentration. This study assesses ecological implications of the new microdiversity-based technique using a well-characterised dataset of high relevance for global change biology.

  6. Submarine Mass Wasting on Hovgaard Ridge, Fram Strait, European Arctic

    Forwick, M.; Laberg, J. S.; Husum, K.; Gales, J. A.

    2015-12-01

    Hovgaard Ridge is an 1800 m high bathymetric high in the Fram Strait, the only deep-water gateway between the Arctic Ocean and the world's other oceans. The slopes of the ridge provide evidence of various types of sediment reworking, including 1) up to 12 km wide single and merged slide scars with maximum ~30 m high headwalls and some secondary escarpments; 2) maximum 3 km wide and 130 m deep slide scars with irregular internal morphology, partly narrowing towards the foot of the slope; 3) up to 130 m deep, 1.5 km wide and maximum 8 km long channels/gullies originating from areas of increasing slope angle at the margins of a plateau on top of the ridge. Most slide scars presumably result from retrogressive failure related to weak layers in contourites or ash. The most likely trigger mechanism is seismicity related to tectonic activity within the nearby mid-ocean fracture zone. Gully/channel formation is suggested to result from cascading water masses and/or from sediment gravity flows originating from failure at the slope break after winnowing on the plateau of the ridge.

  7. Biogeography of Deep-sea benthic bacteria at regional scale (LTER HAUSGARTEN, Fram Strait, Arctic).

    Marianne Jacob

    Full Text Available Knowledge on spatial scales of the distribution of deep-sea life is still sparse, but highly relevant to the understanding of dispersal, habitat ranges and ecological processes. We examined regional spatial distribution patterns of the benthic bacterial community and covarying environmental parameters such as water depth, biomass and energy availability at the Arctic Long-Term Ecological Research (LTER) site HAUSGARTEN (Eastern Fram Strait). Samples from 13 stations were retrieved from a bathymetric transect (1,284-3,535 m water depth; 54 km in length) and a latitudinal transect (∼ 2,500 m water depth; 123 km in length). 454 massively parallel tag sequencing (MPTS) and automated ribosomal intergenic spacer analysis (ARISA) were combined to describe both abundant and rare types shaping the bacterial community. This spatial sampling scheme allowed detection of up to 99% of the estimated richness on phylum and class levels. At the resolution of operational taxonomic units (97% sequence identity; OTU3%) only 36% of the Chao1 estimated richness was recovered, indicating a high diversity, mostly due to rare types (62% of all OTU3%). Accordingly, a high turnover of the bacterial community was also observed between any two sampling stations (average replacement of 79% of OTU3%), yet no direct correlation with spatial distance was observed within the region. Bacterial community composition and structure differed significantly with increasing water depth along the bathymetric transect. The relative sequence abundance of Verrucomicrobia and Planctomycetes decreased significantly with water depth, and that of Deferribacteres increased. Energy availability, estimated from phytodetrital pigment concentrations in the sediments, partly explained the variation in community structure. Overall, this study indicates a high proportion of unique bacterial types on relatively small spatial scales (tens of kilometers), and supports the sampling design of the LTER site HAUSGARTEN to

  8. Innovations in the Assay of Un-Segregated Multi-Isotopic Grade TRU Waste Boxes with SuperHENC and FRAM Technology

    Simpson, A. P.; Barber, S.; Abdurrahman, N. M.

    2006-01-01

    The Super High Efficiency Neutron Coincidence Counter (SuperHENC) was originally developed by BIL Solutions Inc., Los Alamos National Laboratory (LANL) and Rocky Flats Environmental Technology Site (RFETS) for assay of transuranic (TRU) waste in Standard Waste Boxes (SWB) at Rocky Flats. This mobile system was a key component in the shipment of over 4,000 SWBs to the Waste Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico. The system was WIPP certified in 2001 and operated at the site for four years. The success of this system, a passive neutron coincidence counter combined with high resolution gamma spectroscopy, led to the order of two new units, delivered to Hanford in 2004. Several new challenges were faced at Hanford: for example, the original RFETS system was calibrated for segregated waste streams such that metals, plastics, wet combustibles and dry combustibles were separated by 'Item Description Codes' prior to assay. Furthermore, the RFETS mission of handling only weapons-grade plutonium enabled the original SuperHENC to benefit from the use of known Pu isotopics. Operations at Hanford, as with most other DOE sites, generate un-segregated waste streams, with a wide diversity of Pu isotopics. Consequently, the new SuperHENCs are required to deal with new technical challenges. The neutron system's software and calibration methodology have been modified to encompass these new requirements. In addition, PC-FRAM software has been added to the gamma system, providing a robust isotopic measurement capability. Finally a new software package has been developed that integrates the neutron and gamma data to provide a final assay result and analysis report. The new system's performance has been rigorously tested and validated against WIPP quality requirements. These modifications, together with the mobile platform, make the new SuperHENC far more versatile in handling diverse waste streams and allow for rapid redeployment around the DOE complex. (authors)
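
    Combining measured Pu isotopics with neutron coincidence data, as described above, conventionally passes through a "240Pu-effective" quantity. The sketch below uses the commonly quoted weighting coefficients (approximately 2.52 for 238Pu and 1.68 for 242Pu, e.g. as given in standard nondestructive-assay references); they are stated here from memory, not taken from this paper, and the isotopic fractions are illustrative.

```python
# Hedged sketch: "240Pu-effective" weights the spontaneously fissioning even
# isotopes so that neutron coincidence rates can be converted to Pu mass.
# Coefficients 2.52 and 1.68 are the commonly quoted values (assumption);
# verify against the actual system's documentation before use.

def pu240_effective(f238, f240, f242):
    """Return the 240Pu-effective fraction from isotopic mass fractions."""
    return 2.52 * f238 + f240 + 1.68 * f242

# Illustrative weapons-grade-like isotopic fractions (not data from the paper)
f238, f240, f242 = 0.0001, 0.0590, 0.0003
f240_eff = pu240_effective(f238, f240, f242)
```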

  9. Characteristics of Milk Fermented by Streptococcus thermophilus MGA45-4 and the Profiles of Associated Volatile Compounds during Fermentation and Storage

    Tong Dan

    2018-04-01

    Full Text Available The lactic acid bacterium Streptococcus thermophilus is a major starter culture for the production of dairy products. In this study, the physiochemical characteristics of milk fermented by the MGA45-4 isolate of S. thermophilus were analyzed. Our data indicate that milk fermented using S. thermophilus MGA45-4 maintained a high viable cell count (8.86 log10 colony-forming units/mL), and a relatively high pH (4.4), viscosity (834.33 mPa·s), and water holding capacity (40.85%) during 14 days of storage. By analyzing the volatile compound profile using solid-phase microextraction and gas chromatography/mass spectrometry, we identified 73 volatile compounds in the fermented milk product, including five carboxylic acids, 21 aldehydes, 13 ketones, 16 alcohols, five esters, and 13 aromatic carbohydrates. According to the odor activity values, 11 of these volatile compounds were found to play a key role in producing the characteristic flavor of fermented milk, particularly octanal, nonanal, hexanal, 2,3-butanedione, and 1-octen-3-ol, which had the highest odor activity values among all compounds analyzed. These findings thus provide more insights into the chemical/molecular characteristics of milk fermented using S. thermophilus, which may provide a basis for improving dairy product flavor/odor during the process of fermentation and storage.

  10. Characteristics of Milk Fermented by Streptococcus thermophilus MGA45-4 and the Profiles of Associated Volatile Compounds during Fermentation and Storage.

    Dan, Tong; Jin, Rulin; Ren, Weiyi; Li, Ting; Chen, Haiyan; Sun, Tiansong

    2018-04-11

    The lactic acid bacterium Streptococcus thermophilus is a major starter culture for the production of dairy products. In this study, the physiochemical characteristics of milk fermented by the MGA45-4 isolate of S. thermophilus were analyzed. Our data indicate that milk fermented using S. thermophilus MGA45-4 maintained a high viable cell count (8.86 log10 colony-forming units/mL), and a relatively high pH (4.4), viscosity (834.33 mPa·s), and water holding capacity (40.85%) during 14 days of storage. By analyzing the volatile compound profile using solid-phase microextraction and gas chromatography/mass spectrometry, we identified 73 volatile compounds in the fermented milk product, including five carboxylic acids, 21 aldehydes, 13 ketones, 16 alcohols, five esters, and 13 aromatic carbohydrates. According to the odor activity values, 11 of these volatile compounds were found to play a key role in producing the characteristic flavor of fermented milk, particularly octanal, nonanal, hexanal, 2,3-butanedione, and 1-octen-3-ol, which had the highest odor activity values among all compounds analyzed. These findings thus provide more insights into the chemical/molecular characteristics of milk fermented using S. thermophilus, which may provide a basis for improving dairy product flavor/odor during the process of fermentation and storage.
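
    The odor activity value used in the abstracts above is conventionally defined as the ratio of a compound's concentration to its odor detection threshold, with OAV > 1 marking a compound as flavor-active. A minimal sketch of that screening criterion follows; all concentrations and thresholds are illustrative placeholders, not values measured in the study.

```python
# Sketch only: OAV = concentration / odor detection threshold (same units).
# Compounds with OAV > 1 are usually taken to contribute to perceived flavor.
# The numbers below are hypothetical, not data from the paper.

def odor_activity_value(concentration, threshold):
    """Both arguments in the same unit, e.g. ug/kg."""
    return concentration / threshold

compounds = {
    # name: (concentration, odor threshold) in ug/kg (hypothetical)
    "octanal": (40.0, 0.7),
    "nonanal": (25.0, 1.0),
    "hexanal": (18.0, 4.5),
}

key_odorants = {name: odor_activity_value(c, t)
                for name, (c, t) in compounds.items()
                if odor_activity_value(c, t) > 1.0}
```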

  11. Characteristics of colored dissolved organic matter (CDOM) in the Arctic outflow in the Fram Strait: Assessing the changes and fate of terrigenous CDOM in the Arctic Ocean

    Granskog, M.A.; Stedmon, C.A.; Dodd, P.A.; Amon, R.M.W.; Pavlov, A.K.; de Steur, L.; Hansen, E.

    2012-01-01

    Absorption coefficients of colored dissolved organic matter (CDOM) were measured together with salinity, delta O-18, and inorganic nutrients across the Fram Strait. A pronounced CDOM absorption maximum between 30 and 120 m depth was associated with river and sea ice brine enriched water,

  12. FRAM - the robotic telescope for the monitoring of the wavelength dependence of the extinction: description of hardware, data analysis, and results

    Prouza, Michael; Jelínek, M.; Kubánek, P.; Ebr, Jan; Trávníček, Petr; Šmída, Radomír

    2010-01-01

    Roč. 2010, - (2010), 849382/1-849382/5 ISSN 1687-7969 R&D Projects: GA MŠk LC527; GA MŠk(CZ) LA08016 Institutional research plan: CEZ:AV0Z10100502; CEZ:AV0Z10100523 Keywords : FRAM * wavelength dependence * light extinction * cosmic ray showers Subject RIV: BF - Elementary Particles and High Energy Physics

  13. The solar and interplanetary causes of the recent minimum in geomagnetic activity (MGA23): a combination of midlatitude small coronal holes, low IMF BZ variances, low solar wind speeds and low solar magnetic fields

    B. T. Tsurutani

    2011-05-01

    Full Text Available Minima in geomagnetic activity (MGA) at Earth at the ends of SC23 and SC22 have been identified. The two MGAs (called MGA23 and MGA22, respectively) were present in 2009 and 1997, delayed from the sunspot number minima in 2008 and 1996 by ~1/2–1 years. Part of the solar and interplanetary causes of the MGAs were exceptionally low solar (and thus low interplanetary) magnetic fields. Another important factor in MGA23 was the disappearance of equatorial and low latitude coronal holes and the appearance of midlatitude coronal holes. The location of the holes relative to the ecliptic plane led to low solar wind speeds and low IMF Bz variances (σBz²) and normalized variances (σBz²/B0²) at Earth, with concomitant reduced solar wind-magnetospheric energy coupling. One result was the lowest ap indices in the history of ap recording. The results presented here are used to comment on the possible solar and interplanetary causes of the low geomagnetic activity that occurred during the Maunder Minimum.
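
    The normalized variance quoted in the abstract, σBz²/B0², is simply the variance of the IMF Bz component scaled by the squared background field magnitude. A minimal sketch on synthetic samples (not spacecraft data; B0 here is an assumed mean field magnitude):

```python
import statistics

# Synthetic IMF Bz samples in nT (illustrative, not observed data)
bz_samples = [-1.2, 0.8, -0.5, 1.1, -0.9, 0.4]
b0 = 4.0  # assumed mean field magnitude, nT

sigma_bz2 = statistics.pvariance(bz_samples)   # variance of Bz, sigma_Bz^2
normalized_variance = sigma_bz2 / b0 ** 2      # sigma_Bz^2 / B0^2
```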

  14. The coccolithophores Emiliania huxleyi and Coccolithus pelagicus: Extant populations from the Norwegian-Iceland Seas and Fram Strait

    Dylmer, C. V.; Giraudeau, J.; Hanquiez, V.; Husum, K.

    2015-04-01

    The distributions of the coccolithophore species Emiliania huxleyi and Coccolithus pelagicus (heterococcolith-bearing phase) in the northern North Atlantic were investigated along two zonal transects crossing Fram Strait and the Norwegian-Iceland Sea, respectively, each conducted during both July 2011 and September-October 2007. Remote-sensing images as well as CTD and ARGO profiles were used to constrain the physico-chemical state of the surface water and surface mixed layer at the time of sampling. Strong seasonal differences in bulk coccolithophore standing stocks characterized the northern and southern transects, where the maximum values of 53×10³ cells/l (fall) and 70×10³ cells/l (summer), respectively, were essentially explained by E. huxleyi. This pattern confirms previous findings of a summer to fall northwestward shift in peak coccolithophore cell densities within the Nordic Seas. While depicting an overall zonal shift in high cell densities between the summer (Norwegian Sea) and fall (northern Iceland Sea) conditions, the southern transects were additionally characterized by local peak coccolithophore concentrations associated with a geographically and temporally restricted convective process (Lofoten Gyre, summer), as well as an island mass effect (in the vicinity of Jan Mayen Island, fall). Maximum coccolithophore abundances within Fram Strait were found during both seasons close to the western frontal zone (Polar and Arctic Fronts), an area of strong density gradients where physical and chemical properties of the surface mixed layer are prone to enhance phytoplankton biomass and productivity. Here, changes in species dominance from E. huxleyi in summer, to C. pelagicus in fall, were related to the strengthened influence of surface AW during summer, as well as to high July solar irradiance, within an area usually characterized by C. pelagicus-dominated low density populations.

  15. The spectral optical properties and relative radiant heating contribution of dissolved and particulate matter in the surface waters across the Fram Strait

    Pavlov, A.K.; Granskog, M.A.; Stedmon, Colin

    In the autumns of 2009 and 2010 comprehensive observations were performed on transects along 79°N across the Fram Strait. Samples for chromophoric dissolved organic matter (CDOM) and particulate absorption were collected and analyzed together with distribution of temperature and salinity in surface waters (0-100 m). Large spatial variations in the distribution of CDOM and particulate matter as well as in their relative contributions to total absorption were apparent, with high contrast between waters of Arctic and Atlantic origin. In addition, estimates of underwater light profiles and radiant heating rates (RHR) of the upper layer were obtained using a simplistic exponential RHR model. This is one of the first detailed overviews of sea water optical properties across the northern Fram Strait, and might have potential implications for biological, biogeochemical and physical processes in the region.

  16. The use of Functional Resonance Analysis Method (FRAM) in a mid-air collision to understand some characteristics of the air traffic management system resilience

    Rodrigues de Carvalho, Paulo Victor

    2011-01-01

    The Functional Resonance Analysis Model (FRAM) defines a systemic framework to model complex systems for accident analysis purposes. We use FRAM in the mid-air collision between flight GLO1907, a commercial aircraft Boeing 737-800, and flight N600XL, an executive jet EMBRAER E-145, to investigate key resilience characteristics of the Air Traffic Management System (ATM). This ATM-system-related accident occurred at 16:56 Brazilian time on September 29, 2006 in the Amazonian sky. FRAM analysis of flight monitoring functions showed system constraints (equipment, training, time, and supervision) that produce variability in system behavior, creating demand-resource mismatches in an attempt to perceive and control the developing situation. This variability also included control and coordination breakdowns and automation surprises (TCAS functioning). The analysis showed that under normal variability conditions (without catastrophic failures) the ATM system (pilots, controllers, supervisors, and equipment) was not able to close the control loops of the flight monitoring functions using feedback or feedforward strategies to achieve adequate control of an aircraft flying in the controlled air space. Our findings shed some light on the resilience of Brazilian ATM system operation and indicated a need for a deeper understanding of how the system actually functions. - Highlights: → The Functional Resonance Analysis Model (FRAM) was used in a mid-air collision over Amazon. → The aim was to understand key resilience characteristics of the Air Traffic Management System (ATM). → The analysis showed how, under normal conditions, the system was not able to control flight functions. → The findings shed some light on the resilience of Brazilian ATM system operation.

  17. The use of Functional Resonance Analysis Method (FRAM) in a mid-air collision to understand some characteristics of the air traffic management system resilience

    Rodrigues de Carvalho, Paulo Victor, E-mail: paulov@ien.gov.br [National Nuclear Energy Commission/Nuclear Engineering Institute, Cidade Universitaria-Ilha do Fundao, Rio de Janeiro, RJ 21945-970 (Brazil)

    2011-11-15

    The Functional Resonance Analysis Model (FRAM) defines a systemic framework to model complex systems for accident analysis purposes. We use FRAM in the mid-air collision between flight GLO1907, a commercial aircraft Boeing 737-800, and flight N600XL, an executive jet EMBRAER E-145, to investigate key resilience characteristics of the Air Traffic Management System (ATM). This ATM-system-related accident occurred at 16:56 Brazilian time on September 29, 2006 in the Amazonian sky. FRAM analysis of flight monitoring functions showed system constraints (equipment, training, time, and supervision) that produce variability in system behavior, creating demand-resource mismatches in an attempt to perceive and control the developing situation. This variability also included control and coordination breakdowns and automation surprises (TCAS functioning). The analysis showed that under normal variability conditions (without catastrophic failures) the ATM system (pilots, controllers, supervisors, and equipment) was not able to close the control loops of the flight monitoring functions using feedback or feedforward strategies to achieve adequate control of an aircraft flying in the controlled air space. Our findings shed some light on the resilience of Brazilian ATM system operation and indicated a need for a deeper understanding of how the system actually functions. - Highlights: > The Functional Resonance Analysis Model (FRAM) was used in a mid-air collision over Amazon. > The aim was to understand key resilience characteristics of the Air Traffic Management System (ATM). > The analysis showed how, under normal conditions, the system was not able to control flight functions. > The findings shed some light on the resilience of Brazilian ATM system operation.

  18. Characteristics of colored dissolved organic matter (CDOM) in the Arctic outflow in the Fram Strait: Assessing the changes and fate of terrigenous CDOM in the Arctic Ocean

    Granskog, M.A.; Stedmon, C.A.; Dodd, P.A.; Amon, R.M.W.; Pavlov, A.K.; de Steur, L.; Hansen, E.

    2012-01-01

    Absorption coefficients of colored dissolved organic matter (CDOM) were measured together with salinity, delta O-18, and inorganic nutrients across the Fram Strait. A pronounced CDOM absorption maximum between 30 and 120 m depth was associated with river and sea ice brine enriched water, characteristic of the Arctic mixed layer and upper halocline waters in the East Greenland Current (EGC). The lowest CDOM concentrations were found in the Atlantic inflow. We show that the salinity-CDOM relati...

  19. Cooperation on Improved Isotopic Identification and Analysis Software for Portable, Electrically Cooled High-Resolution Gamma Spectrometry Systems Final Report

    Dreyer, Jonathan G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wang, Tzu-Fang [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Vo, Duc T. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Funk, Pierre F. [Inst. for Radiological Protection and Nuclear Safety (IRSN), Fontenay-aux-Roses (France); Weber, Anne-Laure [Inst. for Radiological Protection and Nuclear Safety (IRSN), Fontenay-aux-Roses (France)

    2017-07-20

    Under a 2006 agreement between the Department of Energy (DOE) of the United States of America and the Institut de Radioprotection et de Sûreté Nucléaire (IRSN) of France, the National Nuclear Security Administration (NNSA) within DOE and IRSN initiated a collaboration to improve isotopic identification and analysis of nuclear material [i.e., plutonium (Pu) and uranium (U)]. The specific aim of the collaborative project was to develop new versions of two types of isotopic identification and analysis software: (1) the fixed-energy response-function analysis for multiple energies (FRAM) codes and (2) multi-group analysis (MGA) codes. The project is entitled Action Sheet 4 – Cooperation on Improved Isotopic Identification and Analysis Software for Portable, Electrically Cooled, High-Resolution Gamma Spectrometry Systems (Action Sheet 4). FRAM and MGA/U235HI are software codes used to analyze isotopic ratios of U and Pu. FRAM is an application that uses parameter sets for the analysis of U or Pu. MGA and U235HI are two separate applications that analyze Pu or U, respectively. They have traditionally been used by safeguards practitioners to analyze gamma spectra acquired with high-resolution gamma spectrometry (HRGS) systems that are cooled by liquid nitrogen. However, it was discovered that these analysis programs were not as accurate when used on spectra acquired with a newer generation of more portable, electrically cooled HRGS (ECHRGS) systems. In response to this need, DOE/NNSA and IRSN collaborated to update the FRAM and U235HI codes to improve their performance with newer ECHRGS systems. Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL) performed this work for DOE/NNSA.

  20. Order-disorder transition and electrical conductivity of the brownmillerite solid-solutions system Ba2(In, M)2O5 (M=Ga, Al)

    Yamamura, H; Kakinuma, K; Mori, T; Haneda, H

    1999-01-01

    The brownmillerite solid-solution systems Ba 2 (In 1-x M x ) 2 O 5 (M=Ga, Al) were investigated by means of high-temperature X-ray diffraction (XRD), dilatometry, and electrical-conductivity measurements. XRD showed that the Ba 2 (In 1-x Ga x ) 2 O 5 system had orthorhombic symmetry in the composition range 0.0≤x≤0.2 and cubic symmetry in the range 0.3≤x. The Al system also changed to cubic symmetry from orthorhombic symmetry in the range 0.2≤x. While the orthorhombic phase showed an order-disorder transition in the electrical conductivity measurements, the transition temperature decreased with increasing M content. The order-disorder transition temperature and the crystal-structure transition temperature were very different. Such a transition was not observed in the cubic phases, and their electrical conductivities were fairly low compared to those of the disordered cubic phase after the transition due to the heating process. These p...

  1. Order-disorder transition and electrical conductivity of the brownmillerite solid-solutions system Ba2(In, M)2O5 (M=Ga, Al)

    Yamamura, Hiroshi; Hamazaki, Hirohumi; Kakinuma, Katsuyoshi; Mori, Toshiyuki; Haneda, Hajime

    1999-01-01

    The brownmillerite solid-solution systems Ba 2 (In 1-x M x ) 2 O 5 (M=Ga, Al) were investigated by means of high-temperature X-ray diffraction (XRD), dilatometry, and electrical-conductivity measurements. XRD showed that the Ba 2 (In 1-x Ga x ) 2 O 5 system had orthorhombic symmetry in the composition range 0.0≤x≤0.2 and cubic symmetry in the range 0.3≤x. The Al system also changed to cubic symmetry from orthorhombic symmetry in the range 0.2≤x. While the orthorhombic phase showed an order-disorder transition in the electrical conductivity measurements, the transition temperature decreased with increasing M content. The order-disorder transition temperature and the crystal-structure transition temperature were very different. Such a transition was not observed in the cubic phases, and their electrical conductivities were fairly low compared to those of the disordered cubic phase after the transition due to the heating process. These phenomena are discussed in terms of disordering of the tetrahedral site in the brownmillerite structure, which is occupied by the smaller Ga 3+ or Al 3+ rather than by In 3+.

  2. Restrictions in Mg/Ca-Paleotemperature Estimations in High-Latitude Bottom Waters: Evidence from the Fram Strait and the Nordic Seas

    Werner, K.; Marchitto, T. M., Jr.; Not, C.; Spielhagen, R. F.; Husum, K.

    2014-12-01

    Mg to Ca ratios of the benthic foraminifer species Cibicidoides wuellerstorfi provide great potential for reconstructing bottom water temperatures, especially at the lower end of the temperature range between 0 and 6°C (Tisserand et al., 2013). A set of core top samples from the Fram Strait and the Norwegian margin has been studied for Mg/Ca ratios in C. wuellerstorfi in order to establish a calibration relationship to the environmental conditions. In this part of the northern North Atlantic the bottom water temperatures range between -0.5 and -1°C. For the calibration to modern water mass conditions, modern oceanographic data from both existing conductivity-temperature-depth (CTD) casts and the World Ocean Data Base 2013 (Boyer et al., 2013) have been used. Benthic Mg/Ca ratios are relatively high, suggesting a preference of C. wuellerstorfi to incorporate Mg below 0°C. Although no correlation has been found to existing temperature calibrations, the data are in line with earlier Mg/Ca data from C. wuellerstorfi in the area (Martin et al., 2002; Elderfield et al., 2006). The carbonate ion effect is most likely a main cause of the relatively high Mg/Ca ratios found in core top samples from the Fram Strait and the Nordic Seas; however, other factors may influence the values as well. Holocene records of benthic trace metal/Ca ratios from the eastern Fram Strait display trends similar to those found in other proxy indicators, despite the difficulty of constraining a temperature calibration for this low temperature range. In particular, the benthic B/Ca and Li/Ca records resemble trends in Holocene planktic foraminifer assemblages, suggesting that they are influenced by environmental factors, such as the carbonate ion effect, consistent throughout the water column.
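
    Mg/Ca-temperature calibrations of the kind discussed above are commonly written in an exponential form, Mg/Ca = B·exp(A·T). Since the abstract reports that no existing calibration fit these cold-water data, the following inversion is only a generic sketch with assumed, illustrative coefficients, not the study's model:

```python
import math

# Assumed, illustrative calibration constants (not from this study):
A = 0.11   # temperature sensitivity, per degC
B = 1.0    # pre-exponential constant, mmol/mol

def mgca_from_temperature(t_degc):
    """Forward model: Mg/Ca = B * exp(A * T)."""
    return B * math.exp(A * t_degc)

def temperature_from_mgca(mgca):
    """Invert the exponential calibration for temperature T (degC)."""
    return math.log(mgca / B) / A
```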

  3. Spatial and temporal scales of sea ice protists and phytoplankton distribution from the gateway Fram Strait into the Central Arctic Ocean

    Peeken, I.; Hardge, K.; Krumpen, T.; Metfies, K.; Nöthig, E. M.; Rabe, B.; von Appen, W. J.; Vernet, M.

    2016-02-01

    The Arctic Ocean is currently one of the key regions where the effect of climate change is most pronounced. Sea ice is an important interface in this region, representing a unique habitat for many organisms. The massive reduction of sea ice thickness and extent recorded over the last twenty years is anticipated to cause large cascading changes in the entire Arctic ecosystem. Most sea ice is formed on the Eurasian shelves and transported via the Transpolar Drift to the western Fram Strait and out of the Arctic Ocean with the cold East Greenland Current (EGC). Warm Atlantic water enters the Arctic Ocean with the West Spitsbergen Current (WSC) via the eastern Fram Strait. Here, we focus on the spatial spreading of protists from the Atlantic water masses, their occurrences over the deep basins of the Central Arctic, and the relationships among them in water and sea ice. Communities were analyzed using pigments, flow cytometry and ARISA fingerprints during several cruises with the RV Polarstern to the Fram Strait, the Greenland Sea and the Central Arctic Ocean. By comparing these data sets we are able to demonstrate that the origin of the studied sea ice floes is more important for the biodiversity found in the sea ice communities than the respective underlying water mass. In contrast, biodiversity in the water column is mainly governed by the occurring water masses and the presence or absence of sea ice. Overall, however, the development of standing stocks in both biomes was governed by the availability of nutrients. To place these recent results in a temporal perspective, the study will be embedded in a long-term data set of phytoplankton biomass obtained during several cruises over the last twenty years.

  4. The human intrinsic factor-vitamin B12 receptor, cubilin: molecular characterization and chromosomal mapping of the gene to 10p within the autosomal recessive megaloblastic anemia (MGA1) region

    Kozyraki, R; Kristiansen, M; Silahtaroglu, A

    1998-01-01

    -5445 on the short arm of chromosome 10. This is within the autosomal recessive megaloblastic anemia (MGA1) 6-cM region harboring the unknown recessive-gene locus of juvenile megaloblastic anemia caused by intestinal malabsorption of cobalamin (Imerslund-Gräsbeck's disease). In conclusion, the present molecular and genetic information on human cubilin now provides circumstantial evidence that an impaired synthesis, processing, or ligand binding of cubilin is the molecular background of this hereditary form of megaloblastic anemia. Publication date: 1998-May-15.

  5. The compositional change of Fluorescent Dissolved Organic Matter across Fram Strait assessed with the use of a multi-channel in situ fluorometer.

    Raczkowska, A.; Kowalczuk, P.; Sagan, S.; Zabłocka, M.; Pavlov, A. K.; Granskog, M. A.; Stedmon, C. A.

    2016-02-01

    Observations of Colored Dissolved Organic Matter absorption (CDOM) and fluorescence (FDOM) from water samples and an in situ fluorometer, and of Inherent Optical Properties (IOP; light absorption and scattering), were carried out along a section across the Fram Strait at 79°N. A 3-channel Wetlabs Wetstar fluorometer with channels for humic- and protein-like DOM was deployed and used to assess the distribution of different FDOM fractions. A relationship between the fluorescence intensity of the protein-like fraction of FDOM and chlorophyll a fluorescence was found, indicating the importance of phytoplankton biomass in West Spitsbergen Current waters as a significant source of protein-like FDOM. East Greenland Current waters had low concentrations of chlorophyll a and were characterized by high humic-like FDOM fluorescence. An empirical relationship between humic-like FDOM fluorescence intensity and CDOM absorption was derived, confirming the dominance of terrigenous CDOM in the composition of DOM in the East Greenland Current. These high resolution profile data offer a simple approach to fractionating the contributions of these two DOM sources across the Fram Strait and may help refine estimates of DOC fluxes in and out of the Arctic through this region.
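
    Deriving an empirical fluorescence-absorption relationship of the kind described above is, at its simplest, a linear regression problem. The following self-contained sketch fits ordinary least squares on synthetic values; the variable names and numbers are illustrative, not the cruise data:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Synthetic humic-like FDOM fluorescence (arb. units) vs CDOM absorption (1/m)
fdom = [0.5, 1.0, 1.5, 2.0]
a_cdom = [0.11, 0.21, 0.31, 0.41]

slope, intercept = ols_fit(fdom, a_cdom)

def cdom_from_fdom(f):
    """Predict CDOM absorption from an FDOM fluorescence reading."""
    return slope * f + intercept
```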

  6. Development of a code for the isotopic analysis of Uranium

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2013-05-15

    To strengthen the national nuclear nonproliferation regime through the establishment of a nuclear forensic system, techniques for nuclear material analysis and the categorization of important domestic nuclear materials are being developed. MGAU and FRAM are commercial software packages for the isotopic analysis of uranium by γ-spectroscopy, but the diversity of detection geometries and effects such as self-attenuation and coincidence summing call for an analysis tool that can be continually improved and modified. Hence, development of another code for HPGe γ- and x-ray spectrum analysis was started in this study. The analysis of the 87-101 keV region of the uranium spectrum is attempted based on isotopic response functions similar to those developed in MGAU, and the code begins with a fit of this region.
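The 87-101 keV analysis described in this record rests on extracting peak information from an HPGe spectrum. As a hedged illustration of the simplest ingredient of any such analysis, the sketch below computes a net peak area with a straight-line background on a synthetic spectrum; this is a textbook estimate, not the response-function fit actually used by MGAU or FRAM:

```python
import math

def net_peak_area(counts, peak_lo, peak_hi, bg_width=3):
    """Estimate the net area of a gamma-ray peak in a spectrum.

    A constant background level is estimated from `bg_width` channels on
    each side of the peak window [peak_lo, peak_hi] and subtracted from
    the gross counts in the window.
    """
    left = counts[peak_lo - bg_width:peak_lo]
    right = counts[peak_hi + 1:peak_hi + 1 + bg_width]
    bg_per_channel = (sum(left) + sum(right)) / (len(left) + len(right))
    n_channels = peak_hi - peak_lo + 1
    gross = sum(counts[peak_lo:peak_hi + 1])
    return gross - bg_per_channel * n_channels

# Synthetic spectrum: flat background of 10 counts per channel plus a
# Gaussian peak of area 500 centred on channel 50 (sigma = 2 channels).
amplitude = 500 / ((2 * math.pi) ** 0.5 * 2)
spectrum = [10 + amplitude * math.exp(-((ch - 50) ** 2) / (2 * 2 ** 2))
            for ch in range(100)]

area = net_peak_area(spectrum, peak_lo=44, peak_hi=56)
```

Real isotopic codes go much further, fitting overlapping peaks with detector response functions and relative-efficiency curves, but the net-area idea underlies them all.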

  7. Discovery of a new ELL variable star in the constellation Centaurus and the possibility of detecting new exoplanets with the FRAM telescope

    Pintr, Pavel; Vápenka, David; Mašek, M.

    2015-01-01

    Vol. 60, No. 2 (2015), p. 65-68 ISSN 0447-6441 R&D Projects: GA MŠk(CZ) LO1206; GA ČR GA13-10365S Institutional support: RVO:61389021 Keywords: variable star * light curve * FRAM * period analysis * exoplanet transit Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics http://jmo.fzu.cz/

  8. Code Cactus; Code Cactus

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors at either high or low pressure, with boiling permitted, assuming flat-plate fuel elements: flow distribution among parallel channels, coupled or not by conduction across the plates, for imposed pressure-drop or flowrate conditions that may vary in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  9. Characteristics of colored dissolved organic matter (CDOM) in the Arctic outflow in Fram Strait: assessing the changes and fate of terrigenous CDOM in the Arctic Ocean

    Granskog, M.A.; Stedmon, Colin; Dodd, P.A.

    2012-01-01

    Absorption coefficients of colored dissolved organic matter (CDOM) were measured together with salinity, δ18O, and inorganic nutrients across the Fram Strait. A pronounced CDOM absorption maximum between 30 and 120 m depth was associated with river and sea ice brine enriched water, characteristic...... of the Arctic mixed layer and upper halocline waters in the East Greenland Current (EGC). The lowest CDOM concentrations were found in the Atlantic inflow. We show that the salinity-CDOM relationship is not suitable for evaluating conservative mixing of CDOM. The strong correlation between meteoric water...... and CDOM is indicative of the riverine/terrigenous origin of CDOM in the EGC. Based on CDOM absorption in Polar Water and comparison with an Arctic river discharge weighted mean, we estimate that a 49–59% integrated loss of CDOM absorption across 250–600 nm has occurred. A preferential removal...

  10. Integration of radiation protection in occupational health and safety management systems - legal requirements and practical realization using the example of the Fraunhofer occupational health and safety management system FRAM

    Lambotte, S.; Severitt, S.; Weber, U.

    2002-01-01

    The protection of employees, the public and the environment from the effects of radiation is regulated by numerous laws and rules set by the government and the occupational accident insurances. These rules apply primarily to the responsible persons, normally the employer, as well as to the safety officers. Occupational safety management systems can support these people in carrying out their tasks and responsibilities effectively. A systematic organisation also ensures that the numerous documentation duties and the deadlines for checking proof-lists are respected. Furthermore, legal certainty for the responsible persons and safety officers is increased, and occupational, environmental, radiation and health protection is promoted. The example of the Fraunhofer occupational safety management system (FrAM) demonstrates how radiation protection (ionizing radiation) can be integrated into a progressive intranet-supported management system. (orig.)

  11. An Analysis of the Values Embedded in the Proverbs of the Tiruray in South Upi, Maguindanao, Philippines

    Maria Luz D. Calibayan

    2015-12-01

    This study sought to collect, record, translate, and analyze the Tiruray proverbs. Specifically, the study aimed to answer the following: (1) What are the existing proverbs of the Tiruray in South Upi, Maguindanao? (2) What values are expressed in the proverbs of the Tiruray that have been successfully preserved from their ancestors? The scope of this study was confined to the collected proverbs of the Tiruray in South Upi, Maguindanao. The gathered proverbs were transcribed from Tiruray into the Filipino language, and the analysis of values in the translated proverbs followed the ideas of Andres (1985) and Timbreza (2003). The research design adopted in the study is descriptive content analysis, because the main objective of the study was to analyse the values found in Tiruray proverbs. Findings revealed that the Tiruray are rich in oral literary pieces such as proverbs, which are transmitted by word of mouth from generation to generation. The Tiruray proverbs contain different human values that teach or remind people how to live godly lives. Further, the Tiruray proverbs convey messages about how they value peace and harmony in the community, for they are a peace-loving people. The study of Tiruray proverbs could increase the body of knowledge about the cultural traits of the Tiruray, who are a unique people with a rich cultural heritage.

  12. How can technology create new ways of teaching and learning in foreign language education toward 2030?

    Eli-Marie Danbolt Drange

    2014-09-01

    In this article I discuss how technology can create new ways of teaching and learning in foreign language education toward 2030. I start by sketching a possible future scenario in the form of a blog post written by a 15-year-old in the year 2030. I then take this future scenario as a starting point, compare it with the present situation, and discuss what it would take for the scenario to be fulfilled. The teacher plays a key role in the development of new ways of teaching and learning, and it is only when the teacher integrates technology into his or her teaching that new practices emerge. I give some concrete examples of using technology in new ways, together with reflections on future developments.

  13. Operation of a Hovercraft Scientific Platform Over Sea Ice in the Arctic Ocean Transpolar Drift (81 - 85N): The FRAM-2012 Experience

    Hall, J. K.; Kristoffersen, Y.

    2013-12-01

    We have tested the feasibility of hovercraft travel through predominantly first year ice of the Transpolar Drift between 81°N - 85°N north of Svalbard. With 2-9 ridges per kilometer, our hovercraft (Griffon TD2000 Mark II), with an effective hover height of about 0.5 m, had to travel a distance 1.3 times the great circle distance between the point of origin and the final destination. Instantaneous speeds were mostly 5-7 knots. Two weeks later icebreaker Oden completed the same transit under conditions with no significant pressure in the ice at a speed mostly 1 knot higher than the hovercraft and travelled 1.2 times the great circle distance. The hovercraft spent 25 days monitoring micro-earthquake activity of the Arctic Mid-Ocean Ridge at a section of the spreading center where no seismicity has been recorded by the global seismograph network. More than ten small earthquake events per day were recorded. Visibility appears to be the most critical factor to hovercraft travel in polar pack ice. Improved control of hovercraft motion would substantially increase the potential usefulness of hovercraft in the sea ice environment. [Figure caption: University of Bergen graduate student Gaute Hope emplacing one of the hydrophones in the triangular array used to locate small earthquakes over the Gakkel Ridge rift valley around 85°N during FRAM-2012. The research hovercraft R/H SABVABAA is in the background.]

  14. Coding Partitions

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than unique decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
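The UD property this record builds on is decidable for finite codes by the classical Sardinas-Patterson test, which repeatedly computes "dangling suffixes" until one of them is itself a codeword. A compact sketch of that test (not the paper's coding-partition algorithm):

```python
def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: True iff every concatenation of
    codewords from the finite code has a unique factorisation."""
    code = set(code)

    def dangling(a, b):
        # Suffixes y[len(x):] where some x in a is a proper prefix of y in b.
        return {y[len(x):] for x in a for y in b
                if len(y) > len(x) and y.startswith(x)}

    current = dangling(code, code)  # the first set of dangling suffixes
    seen = set()
    while current:
        if current & code:          # a dangling suffix is a codeword: ambiguous
            return False
        seen |= current
        # Next round of dangling suffixes, skipping already-seen ones.
        current = (dangling(code, current) | dangling(current, code)) - seen
    return True
```

For example, {"0", "01", "10"} is not UD because "010" factors two ways, while the prefix code {"0", "10", "11"} trivially is.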

  15. Bathymetric patterns in standing stock and diversity of deep-sea nematodes at the long-term ecological research observatory HAUSGARTEN (Fram Strait)

    Grzelak, Katarzyna; Kotwicki, Lech; Hasemann, Christiane; Soltwedel, Thomas

    2017-08-01

    Bathymetric patterns in standing stocks and diversity are a major topic of investigation in deep-sea biology. From the literature, the responses of metazoan meiofauna and nematodes to bathymetric gradients are well studied, with a general decrease in biomass and abundance with increasing water depth, while bathymetric diversity gradients often, although it is not a rule, show a unimodal pattern. Spatial distribution patterns of nematode communities along bathymetric gradients are coupled with surface-water processes and interacting physical and biological factors within the benthic system. We studied the nematode communities at the Long-Term Ecological Research (LTER) observatory HAUSGARTEN, located in the Fram Strait at the Marginal Ice Zone, with respect to their standing stocks as well as their structural and functional diversity. We evaluated whether nematode density, biomass and diversity indices, such as H0, Hinf, EG(50) and Θ-1, are linked with environmental conditions along a bathymetric transect spanning from 1200 m to 5500 m water depth. Nematode abundance, biomass and diversity, as well as food availability from phytodetritus sedimentation (indicated by chloroplastic pigments in the sediments), were higher at the stations located at upper bathyal depths (1200-2000 m) and tended to decrease with increasing water depth. A faunal shift was found below 3500 m water depth, where genus composition and trophic structure changed significantly and structural diversity indices markedly decreased. A strong dominance of very few genera and their high turnover, particularly at the abyssal stations (4000-5500 m), suggest that environmental conditions were rather unfavorable for most genera. Despite the high concentrations of sediment-bound chloroplastic pigments and elevated standing stocks found at the deepest station (5500 m), nematode genus diversity remained the lowest compared to all other stations. This study provides further insight into the knowledge of deep-sea nematodes

  16. Speaking Code

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  17. Coding Labour

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life: through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour, with attention to the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like, they identify a violence of forms through which workplace affect, in its constant flux of crisis and 'prodromal' modes, is regulated and governed.

  18. Speech coding

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of a digital link is essentially independent of the length and operating frequency bands of the link, so from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech also became extremely important from a service-provision point of view. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term, often used interchangeably with speech coding, is voice coding. This term is more generic in the sense that the
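A concrete, elementary instance of the waveform coding mentioned in this record is logarithmic companding followed by uniform quantization, as used in mu-law PCM telephony. The sketch below uses the simplified continuous mu-law characteristic, not the piecewise-linear encoding of the actual G.711 standard:

```python
import math

MU = 255  # mu-law parameter used in North American/Japanese PCM telephony

def mu_law_compress(x):
    """Compress a sample x in [-1, 1] with the mu-law characteristic."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Invert the mu-law characteristic."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

def quantize(y, bits=8):
    """Uniformly quantize a compressed sample in [-1, 1] to 2**bits - 1 levels."""
    levels = 2 ** bits - 1
    return round((y + 1) / 2 * levels) / levels * 2 - 1
```

Companding before quantization spends more of the 8-bit budget on small amplitudes, where speech energy concentrates, which is why the reconstruction error stays small relative to the signal.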

  19. Optimal codes as Tanner codes with cyclic component codes

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  20. Aztheca Code

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  1. Vocable Code

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  2. NSURE code

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual describing simulation procedures, input data preparation, output and example test cases

  3. The Aster code; Code Aster

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (material behaviour, large deformations, specific loads, unloading and loss-of-load-proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  4. Coding Class

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in cooperation with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report is written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institute for School and Learning, Metropolitan University College; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy, from Forskningslab: It og Læringsdesign (ILD-LAB), Department of Communication and Psychology, Aalborg University in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017...

  5. Uplink Coding

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  6. ANIMAL code

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm. The functions of the algorithm's FORTRAN subroutines and variables are outlined

  7. Network Coding

    K V Rashmi, Nihar B Shah, P Vijay Kumar. General Article. Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  8. MCNP code

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States that is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
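The core idea behind Monte Carlo transport codes of this kind can be illustrated in miniature: sample exponentially distributed distances to first collision and tally the uncollided fraction through a slab. The toy sketch below is in no way MCNP itself (no scattering physics, geometry, tallies, or variance reduction):

```python
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, seed=1):
    """Crude Monte Carlo estimate of uncollided photon transmission
    through a slab: the fraction of photons whose first-collision
    distance, sampled from an exponential distribution with total
    attenuation coefficient `mu` (1/cm), exceeds `thickness` (cm).
    The analytic answer is exp(-mu * thickness)."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n_photons)
                 if rng.expovariate(mu) > thickness)
    return passed / n_photons
```

For mu = 0.2 /cm and a 5 cm slab the estimate converges to exp(-1) ≈ 0.368, with statistical uncertainty shrinking as 1/sqrt(n_photons), which is why production codes invest so heavily in variance reduction.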

  9. Expander Codes

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article. Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  10. Technology development for nuclear material measurement and accountability

    Hong, Jong Sook; Lee, Byung Doo; Cha, Hong Ryul; Lee, Yong Duk; Choi, Hyung Nae; Nah, Won Woo; Park, Hoh Joon; Lee, Yung Kil [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1994-12-01

    The measurement techniques for Pu samples and spent fuel assemblies were developed in support of the implementation of national inspection responsibility under the Atomic Energy Act promulgated in 1994, and a computer program was also developed to assess the total nuclear material balance from facility declared records. The results of plutonium isotopic determination by gamma-ray spectrometry, using a high-resolution germanium detector with the peak analysis codes FRAM and MGA, came within 1% - 2% of the values from chemical analysis by mass spectrometry. A gamma-ray measurement system for underwater spent nuclear fuels was developed and tested successfully. The falsification of facility and state records can be traced with the help of the developed computer code checked against declared reports submitted by the concerned state. This activity eventually resulted in finding discrepancies in accountability records. 18 figs, 20 tabs, 27 refs. (Author).

  12. Late Pliocene/Pleistocene changes in Arctic sea-ice cover: Biomarker and dinoflagellate records from Fram Strait/Yermak Plateau (ODP Sites 911 and 912)

    Stein, Ruediger; Fahl, Kirsten; Matthiessen, Jens

    2014-05-01

    Sea ice is a critical component in the (global) climate system that contributes to changes in the Earth's albedo (heat reduction) and biological processes (primary productivity), as well as deep-water formation, a driving mechanism for global thermohaline circulation. Thus, understanding the processes controlling Arctic sea ice variability is of overall interest and significance. Recently, a novel and promising biomarker proxy for reconstruction of Arctic sea-ice conditions was developed and is based on the determination of a highly-branched isoprenoid with 25 carbons (IP25; Belt et al., 2007; PIP25 when combined with open-water phytoplankton biomarkers; Müller et al., 2011). Here, we present biomarker data from Ocean Drilling Program (ODP) Sites 911 and 912, recovered from the southern Yermak Plateau and representing information of sea-ice variability, changes in primary productivity and terrigenous input during the last about 3.5 Ma. As Sites 911 and 912 are close to the modern sea-ice edge, their sedimentary records seem to be optimal for studying past variability in sea-ice coverage and testing the applicability of IP25 and PIP25 in older sedimentary sequences. In general, our biomarker records correlate quite well with other climate and sea-ice proxies (e.g., dinoflagellates, IRD, etc.). The main results can be summarized as follows: (1) The novel IP25/PIP25 biomarker approach has potential for semi-quantitative paleo-sea ice studies covering at least the last 3.5 Ma, i.e., the time interval including the onset (intensification) of major Northern Hemisphere Glaciation (NHG). (2) These data indicate that sea ice of variable extent was present in the Fram Strait/southern Yermak Plateau area during most of the time period under investigation. (3) Elevated IP25/PIP25 values indicative for an extended spring sea-ice cover, already occurred between 3.6 and 2.9 Ma, i.e., prior to the onset of major NHG. 
This may suggest that sea-ice and related albedo effects might

  13. Panda code

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  14. CANAL code

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  15. From concatenated codes to graph codes

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  16. Automatic coding method of the ACR Code

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code is obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which are simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file is chosen automatically. The proper pathology code is obtained in a fashion similar to the organ code selection. An example of an ACR code obtained this way is '131.3661'. This procedure is reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code is achieved by the same program, and incorporation of this program into another data-processing program is possible. This program has the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology
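The two-step lookup described in this record can be sketched as follows. The dictionary entries below are hypothetical stand-ins (only the example code '131.3661' comes from the text), and this is of course not the original FoxBASE program:

```python
# Hypothetical fragments of the ACR dictionary files: one organ-code
# table, plus one pathology file per leading digit of the organ code.
ORGAN_CODES = {"131": "lung"}                 # invented entry
PATHOLOGY_FILES = {
    "1": {"3661": "tuberculosis"},            # invented entry
}

def acr_code(organ, pathology):
    """Combine a validated organ code and pathology code into an ACR code.

    The pathology dictionary is selected by the first digit of the organ
    code, mirroring the two-step selection the program performs.
    """
    if organ not in ORGAN_CODES:
        raise KeyError(f"unknown organ code {organ!r}")
    path_file = PATHOLOGY_FILES[organ[0]]
    if pathology not in path_file:
        raise KeyError(f"unknown pathology code {pathology!r}")
    return f"{organ}.{pathology}"
```

With the entries above, `acr_code("131", "3661")` yields the record's example code "131.3661"; decoding is just the reverse lookup in the same tables.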

  17. Error-correction coding

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  18. Dynamic Shannon Coding

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
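
    As background for the record above: in classic (static) Shannon coding, a symbol with probability p gets a codeword of length ceil(log2(1/p)), taken from the binary expansion of the cumulative probability of the more probable symbols before it. A minimal sketch, using dyadic probabilities so the result is clean:

```python
import math

def shannon_code(probs):
    """Shannon coding: each symbol gets the first ceil(log2(1/p)) bits of the
    binary expansion of the cumulative probability preceding it, with symbols
    sorted by decreasing probability."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    code, cum = {}, 0.0
    for sym, p in items:
        length = math.ceil(-math.log2(p))
        bits, frac = [], cum
        for _ in range(length):          # binary expansion of cum, truncated
            frac *= 2
            bit = int(frac)
            bits.append(str(bit))
            frac -= bit
        code[sym] = "".join(bits)
        cum += p
    return code

code = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)  # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

    The resulting code is prefix-free; the dynamic variant in the record adapts the probabilities as symbols arrive, which this static sketch does not attempt.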

  19. Fundamentals of convolutional coding

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  20. Codes Over Hyperfields

    Atamewoue Surdive

    2017-12-01

    In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity-check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for coding theory than codes over classical finite fields.
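
    For readers unfamiliar with the matrix characterization this record refers to: over a classical finite field (GF(2) here, not a Krasner hyperfield), a linear code is the row space of a generator matrix G, and a parity-check matrix H annihilates exactly the codewords. A small illustrative sketch with an invented [5,3] code:

```python
import itertools

# Classical GF(2) analogue of characterizing a linear code by its
# generator matrix G and parity-check matrix H: codewords are the row
# space of G, and H maps every codeword to the zero syndrome.
G = [[1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1]]            # [5,3] code in systematic form [I | P]
H = [[1, 1, 0, 1, 0],
     [1, 0, 1, 0, 1]]            # H = [P^T | I], so H G^T = 0 (mod 2)

def encode(msg):                 # message (len 3) -> codeword (len 5)
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def syndrome(word):              # all-zero syndrome <=> word is a codeword
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codewords = [encode(list(m)) for m in itertools.product([0, 1], repeat=3)]
assert all(syndrome(c) == [0, 0] for c in codewords)
```

    The hyperfield setting of the paper generalizes exactly this picture; the matrices above are only a toy classical example.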

  1. Vector Network Coding Algorithms

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  2. Homological stabilizer codes

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  3. Diagnostic Coding for Epilepsy.

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  4. Coding of Neuroinfectious Diseases.

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  5. Sa Pusod ng Lungsod: Mga Alamat, Mga Kababalaghan Bilang Mitolohiyang Urban

    Eugene Y. Evasco

    2000-06-01

    Local urban legends such as narratives about supernatural occurrences and mythological characters - the manananggal of Tondo; the white lady at Loakan Road, Baguio, and Balete Drive, Quezon City; the kambal-ahas of Robinson's Galleria and a mall in Davao City; the pugot-ulo of Tagaytay City; the engkantada of the market; the tiyanak; the mandaragit (Haring Kuto); the salvation army of Calbayog City; the lost souls in the buildings which have existed since the Philippine Revolution and World War II; and the anitos in mango and aratiles trees within the city - are intimately connected with the folk beliefs, philosophy, and mythology of the country. Interview results from various urbanized locales in the Philippines provide the context of the metaphor, motif, and images in the stories via a yearlong documentation and transcription. They also corroborate what sociologists and anthropologists have previously concluded: that the tensions due to urbanization, alienation, politics, technology, militarization, ecological degradation, and industrialization have been instrumental in the creation of modern folklore. The development of oral literature in the Philippines and the use of urban legends have effectively contributed to the production of popular culture and literature of the Philippines.

  6. Vector Network Coding

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  7. Entropy Coding in HEVC

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  8. Generalized concatenated quantum codes

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematic way of constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  9. Rateless feedback codes

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
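
    For context, the LT codes being redesigned above generate each output symbol by XORing a random subset of source symbols, with the subset size drawn from a degree distribution. A toy sketch of the encoder only; the distribution below is an invented stand-in for the robust soliton distribution, and no feedback is modeled:

```python
import random

def lt_encode(source, degree_dist, rng):
    """One LT-coded symbol: XOR of a random subset of source symbols,
    with the subset size drawn from the degree distribution."""
    degrees = list(degree_dist)
    weights = [degree_dist[d] for d in degrees]
    degree = rng.choices(degrees, weights=weights)[0]
    idx = rng.sample(range(len(source)), degree)
    value = 0
    for i in idx:
        value ^= source[i]
    return idx, value

rng = random.Random(1)                      # fixed seed for reproducibility
source = [0b1010, 0b0111, 0b1100, 0b0001]   # four source symbols
toy_dist = {1: 0.3, 2: 0.5, 3: 0.2}         # invented degree distribution
packets = [lt_encode(source, toy_dist, rng) for _ in range(8)]
```

    A decoder would peel degree-1 packets and substitute back; the paper's contribution is reshaping the degree distribution when feedback tells the encoder which symbols are already recovered.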

  10. Advanced video coding systems

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  11. Coding for dummies

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  12. Discussion on LDPC Codes and Uplink Coding

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress made by the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error-correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  13. Locally orderless registration code

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64 bit on Mac, Linux and Windows.

  14. Decoding Codes on Graphs

    … Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term low density arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding.

  15. Manually operated coded switch

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  16. Coding in Muscle Disease.

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  17. QR Codes 101

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  18. Codes and curves

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
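
    The Reed-Solomon re-interpretation mentioned in this record views a codeword as the evaluations of a low-degree polynomial at distinct field points; algebraic-geometric codes then evaluate functions at points of a curve instead. A minimal evaluation-encoder sketch over GF(7), with field and parameters chosen only for readability:

```python
# Reed-Solomon as an evaluation code: a message of k coefficients defines a
# polynomial of degree < k, and the codeword is its values at n distinct
# field points. Distinct messages then agree in at most k-1 positions,
# giving minimum distance n - k + 1.
P = 7                                  # prime field GF(7)

def rs_encode(message, points):
    """Evaluate the polynomial with coefficients `message` at each point."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(message)) % P
            for x in points]

# [6,3] RS code over GF(7): length-3 messages, evaluated at 6 points.
cw = rs_encode([2, 5, 1], points=[1, 2, 3, 4, 5, 6])
print(cw)  # -> [1, 2, 5, 3, 3, 5]
```

    The Tsfasman-Vladut-Zink codes mentioned above follow the same evaluation pattern, but the evaluation points come from modular curves with many rational points.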

  19. The materiality of Code

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  20. Coding for optical channels

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  1. SEVERO code - user's manual

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)

  2. Synthesizing Certified Code

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  3. FERRET data analysis code

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples

  4. Stylize Aesthetic QR Code

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, the Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visually pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  5. Enhancing QR Code Security

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response codes open the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction of the background and stating two problems regarding QR code security, which is followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  6. Opening up codings?

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  7. Gauge color codes

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  8. Refactoring test code

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  9. Software Certification - Coding, Code, and Coders

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  10. The network code

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  11. Coding for Electronic Mail

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. The coding scheme paves the way for true electronic mail in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously - between buildings or between continents. The scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.

  12. NAGRADATA. Code key. Geology

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation into plain language of stored coded information is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. with the codes listed alphabetically, and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economies of storage memory requirements; and the standardisation of terminology. The thesaurus-like nature of this 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  13. XSOR codes users manual

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  14. Reactor lattice codes

    Kulikowska, T.

    1999-01-01

    The present lecture has as its main goal to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, which belongs among the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  15. DLLExternalCode

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.

  16. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric quantum codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  17. An Optimal Linear Coding for Index Coding Problem

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  18. The Aesthetics of Coding

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  19. Majorana fermion codes

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  20. Theory of epigenetic coding.

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  1. DISP1 code

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  2. Phonological coding during reading.

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  3. The aeroelastic code FLEXLAST

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  4. MORSE Monte Carlo code

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  5. QR codes for dummies

    Waters, Joe

    2012-01-01

Find out how to effectively create, use, and track QR codes. QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  6. Tokamak Systems Code

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  7. Sea Ice, Climate and Fram Strait

    Hunkins, K.

    1984-01-01

When sea ice is formed the albedo of the ocean surface increases from its open water value of about 0.1 to a value as high as 0.8. This albedo change affects the radiation balance and thus has the potential to alter climate. Sea ice also partially seals off the ocean from the atmosphere, reducing the exchange of gases such as carbon dioxide. This is another possible mechanism by which climate might be affected. The Marginal Ice Zone Experiment (MIZEX 83 to 84) is an international, multidisciplinary study of processes controlling the edge of the ice pack in the Fram Strait area, including the interactions between sea, air and ice.

  8. Efficient Coding of Information: Huffman Coding -RE ...

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. The for- ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x written as ...

  9. NR-code: Nonlinear reconstruction code

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  10. Synthesizing Certified Code

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  11. Code of Ethics

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  12. Interleaved Product LDPC Codes

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low weight codewords, which gives a further advantage in the error floor region.
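The high minimum distance of product codes comes from encoding the message rows and columns with component codes. A minimal sketch with two single-parity-check component codes (an illustration of the product construction only, not an LDPC design):

```python
def product_encode(bits, k1=2, k2=2):
    """Encode a k1*k2 message with a product of two single-parity-check codes.
    Each row and each column gets an even-parity bit; the corner bit checks both.
    The product code's minimum distance is the product of the component
    distances (here 2 * 2 = 4)."""
    assert len(bits) == k1 * k2
    rows = [bits[i * k2:(i + 1) * k2] for i in range(k1)]
    rows = [r + [sum(r) % 2] for r in rows]                 # row parity bits
    rows.append([sum(col) % 2 for col in zip(*rows)])       # column parity row
    return rows

cw = product_encode([1, 0, 1, 1])   # 2x2 message -> 3x3 codeword
```

Every row and every column of the resulting array satisfies even parity, which is what an iterative (LDPC-style) decoder exploits.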

  13. Insurance billing and coding.

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, Current Dental Terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  14. Error Correcting Codes

Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  15. Scrum Code Camps

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  16. RFQ simulation code

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  17. Error Correcting Codes

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  18. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  19. Validation of thermalhydraulic codes

    Wilkie, D.

    1992-01-01

Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)
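One possible quantitative figure of merit (an illustration, not the author's specific method) is an RMS relative deviation between prediction and experiment, which yields a code ranking directly:

```python
import math

def rms_relative_error(pred, meas):
    """Root-mean-square relative deviation between code predictions and
    measured values (all data below are hypothetical)."""
    return math.sqrt(sum(((p - m) / m) ** 2 for p, m in zip(pred, meas)) / len(meas))

measured = [10.0, 20.0, 40.0]
predictions = {"codeA": [10.5, 19.0, 41.0],
               "codeB": [12.0, 25.0, 35.0]}
# rank competing codes by increasing deviation from experiment
ranking = sorted(predictions, key=lambda c: rms_relative_error(predictions[c], measured))
```

A single scalar like this replaces qualitative statements such as "agreement is within 10%" with a number that can be compared across codes.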

  20. Fracture flow code

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex, and cannot be fully certified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  1. Huffman coding in advanced audio coding standard

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and working implementation. Much attention has been paid to optimise the demand of hardware resources especially memory size. The aim of design was to get as short binary stream as possible in this standard. The Huffman encoder with whole audio-video system has been implemented in FPGA devices.
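For reference, the Huffman construction that such an encoder realises in hardware can be sketched in software (a generic construction; the AAC standard works from fixed, pre-defined codebooks rather than codes built from the signal):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code table {symbol: bitstring} for the given text."""
    freq = Counter(text)
    # Heap entries: (frequency, tiebreaker, {symbol: code-so-far}).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol source
        return {s: "0" for s in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

table = huffman_code("abracadabra")
encoded = "".join(table[s] for s in "abracadabra")
```

Frequent symbols get short codewords, so the encoded stream (23 bits here) is shorter than a fixed-length encoding of the same string; the demand on memory noted in the abstract comes from storing such tables.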

  2. Report number codes

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  4. Cryptography cracking codes

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  5. Coded Splitting Tree Protocols

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  6. Transport theory and codes

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on

  7. Gravity inversion code

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables

  8. Fulcrum Network Codes

    2015-01-01

Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
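Fulcrum's mapping between high and low field sizes is beyond a short sketch, but the underlying mechanism, random linear network coding, can be illustrated in the low field size GF(2) alone (all function names and packet values here are illustrative):

```python
import random

def rlnc_encode(packets, n_coded, rng):
    """Random linear network coding over GF(2): each coded packet is a random
    XOR of the source packets, carried together with its coefficient vector."""
    coded = []
    for _ in range(n_coded):
        coeffs = [rng.randint(0, 1) for _ in packets]
        if not any(coeffs):                # avoid the useless all-zero combination
            coeffs[rng.randrange(len(coeffs))] = 1
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def rlnc_decode(coded, k):
    """Gaussian elimination over GF(2); returns the k source packets,
    or None if the received combinations do not yet have full rank."""
    pivots = {}                            # leading column -> (coeffs, payload)
    for coeffs, payload in coded:
        coeffs = list(coeffs)
        while True:
            lead = next((i for i, c in enumerate(coeffs) if c), None)
            if lead is None or lead not in pivots:
                break
            pc, pp = pivots[lead]          # eliminate with the existing pivot
            coeffs = [a ^ b for a, b in zip(coeffs, pc)]
            payload ^= pp
        if lead is not None:
            pivots[lead] = (coeffs, payload)
    if len(pivots) < k:
        return None
    out = [None] * k                       # back-substitution
    for col in sorted(pivots, reverse=True):
        coeffs, payload = pivots[col]
        for j in range(col + 1, k):
            if coeffs[j]:
                payload ^= out[j]
        out[col] = payload
    return out

src = [0b1010, 0b0111, 0b1100]
# three linearly independent combinations suffice to decode three source packets
coded = [([1, 1, 0], src[0] ^ src[1]),
         ([0, 1, 1], src[1] ^ src[2]),
         ([1, 1, 1], src[0] ^ src[1] ^ src[2])]
recovered = rlnc_decode(coded, 3)
```

Any receiver that collects enough linearly independent combinations can decode, regardless of which particular coded packets it received.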

  9. Supervised Convolutional Sparse Coding

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  10. SASSYS LMFBR systems code

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  11. OCA Code Enforcement

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  12. The fast code

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  13. Code Disentanglement: Initial Plan

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.

  14. Induction technology optimization code

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  15. VT ZIP Code Areas

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  16. Bandwidth efficient coding

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  17. Reactor lattice codes

    Kulikowska, T.

    2001-01-01

The description of reactor lattice codes is carried out using the example of the WIMSD-5B code. The WIMS code in its various versions is the most recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally the specific algorithm applied in fuel depletion calculations is outlined. (author)

  18. Critical Care Coding for Neurologists.

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  19. Lattice Index Coding

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
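The special case described here, where every receiver demands all K messages while already holding a subset, has a simple noiseless analogue: when each receiver misses exactly one message, a single XOR of all K messages lets every receiver complete its set. A sketch (values illustrative, ignoring the Gaussian channel):

```python
from functools import reduce

def broadcast(messages):
    """Sender transmits one XOR combination instead of K separate messages."""
    return reduce(lambda a, b: a ^ b, messages)

def recover_missing(coded, side_info):
    """A receiver holding K-1 of the messages recovers the one it lacks
    by XORing its side information back out of the coded transmission."""
    return reduce(lambda a, b: a ^ b, side_info, coded)

msgs = [0b1011, 0b0110, 0b1100]
x = broadcast(msgs)
# a receiver that already knows msgs[0] and msgs[2] completes its set:
missing = recover_missing(x, [msgs[0], msgs[2]])
```

The lattice codes of the paper play the analogous role for the noisy Gaussian broadcast channel, where plain XOR is not available.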

  20. Towards advanced code simulators

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  1. Cracking the Gender Codes

    Rennison, Betina Wolfgang

    2016-01-01

extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve.

  2. PEAR code review

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  3. KENO-V code

    Cramer, S.N.

    1984-01-01

The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16 group Hansen-Roach cross sections and P 1 scattering, was one of the first multigroup Monte Carlo codes and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218 group set is distributed with the code) and has a general P/sub N/ scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k/sub eff/, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes

  4. Code, standard and specifications

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established; radiography testing is therefore practical only when based on documented regulations. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may carry out radiography work based on instructions given by a level-two or level-three radiographer. These instructions are produced according to the guidelines given in the documents: a level-two radiographer must follow the specifications in the standard when writing the instructions. This makes clear that radiography is work in which everything must follow the rules. For codes, radiography follows those of the American Society of Mechanical Engineers (ASME); the only code currently in force in Malaysia is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography work must follow the regulated rules and standards.

  5. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance; however, it also brings extremely high computational complexity. This paper presents innovative work on improving the coding tree to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for fast HEVC coding unit (CU) encoding. Firstly, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  6. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on double-weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. The EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes; both theoretical analysis and simulation confirm this improvement.
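The in-phase cross-correlation property mentioned above is easy to state in code. A minimal sketch, using hypothetical weight-2 codewords in the style of a DW family (not the actual EDW construction from the paper):

```python
def cross_correlation(x, y):
    """In-phase cross-correlation of two spectral-amplitude codewords:
    the number of chip positions where both codes carry a 1."""
    return sum(a & b for a, b in zip(x, y))

# Hypothetical weight-2 codewords (illustrative, not the paper's EDW codes):
c1 = [1, 1, 0, 0, 0, 0]
c2 = [0, 1, 1, 0, 0, 0]
c3 = [0, 0, 0, 1, 1, 0]

print(cross_correlation(c1, c2))  # → 1 (adjacent codes overlap in one chip)
print(cross_correlation(c1, c3))  # → 0 (disjoint codes do not interfere)
```

Keeping the maximum cross-correlation at one chip is what limits multiple-access interference in such systems.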

  7. Nuclear code abstracts (1975 edition)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  8. Some new ternary linear codes

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
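For small parameters, the minimum distance of a linear code given by a generator matrix can be found by exhaustive enumeration. A sketch over GF(3), using a made-up [4,2] ternary example rather than any of the paper's 22 new codes:

```python
from itertools import product

def min_distance(G, q=3):
    """Minimum Hamming distance of the linear code over GF(q) generated
    by the rows of G, found by enumerating all q^k codewords
    (feasible only for small dimension k)."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if all(m == 0 for m in msg):
            continue                      # skip the zero codeword
        cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
        best = min(best, sum(c != 0 for c in cw))
    return best

# A made-up [4,2] ternary example (not one of the paper's new codes):
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]
print(min_distance(G))  # → 3
```

Here every nonzero codeword has weight 3, so the code meets the Singleton bound d ≤ n − k + 1 with equality.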

  9. ACE - Manufacturer Identification Code (MID)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  10. Algebraic and stochastic coding theory

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
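As a concrete taste of the linear codes the book introduces, the binary [7,4] Hamming code corrects any single bit error because the syndrome of the received word spells out the error position. A minimal sketch (standard textbook construction; the demo codeword is arbitrary):

```python
import numpy as np

# Parity-check matrix of the binary [7,4] Hamming code: column i (1-based)
# is the binary representation of i, so the syndrome of a single-bit error
# directly names the flipped position.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)])

def correct(word):
    """Correct up to one bit error in a received 7-bit word."""
    s = H @ word % 2
    pos = int(s[0]) * 4 + int(s[1]) * 2 + int(s[2])   # 0 means no error
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

c = np.array([1, 1, 1, 0, 0, 0, 0])   # a valid Hamming codeword
r = c.copy()
r[4] ^= 1                             # channel flips bit 5
print((correct(r) == c).all())        # → True
```

The same syndrome-table idea generalizes to the Golay codes also covered in the book, though their decoding tables are larger.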

  11. Optical coding theory with Prime

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  12. The Aster code

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  13. Adaptive distributed source coding.

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
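The syndrome-based encoding described above can be sketched in miniature, with a brute-force decoder standing in for the sum-product algorithm (the [7,4] Hamming parity-check matrix here is a small stand-in for the paper's LDPC codes, and there are no doping bits in this toy version):

```python
import numpy as np
from itertools import product

# Parity-check matrix of the binary [7,4] Hamming code, used here only
# to bin 7-bit source blocks into cosets indexed by 3-bit syndromes.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def encode(x):
    """Encoder sends only the 3 syndrome bits, not the 7 source bits."""
    return H @ x % 2

def decode(s, y):
    """Pick the sequence in the coset with syndrome s that is closest in
    Hamming distance to the side information y (brute force here; the
    paper uses sum-product decoding instead)."""
    best = None
    for cand in product([0, 1], repeat=7):
        cand = np.array(cand)
        if (H @ cand % 2 == s).all():
            d = int((cand ^ y).sum())
            if best is None or d < best[0]:
                best = (d, cand)
    return best[1]

x = np.array([1, 0, 1, 1, 0, 0, 1])   # source block
y = np.array([1, 0, 1, 0, 0, 0, 1])   # side info: x with one bit flipped
print((decode(encode(x), y) == x).all())  # → True
```

Because the underlying code has minimum distance 3, the decoder recovers x exactly whenever the side information differs from it in at most one position, while transmitting 3 bits instead of 7.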

  14. Speech coding code- excited linear prediction

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: what do we want to achieve, and especially why is this goal important? Resource Information: what information is available, and how can it be useful? Resource Platform: what kind of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties and the transmission capacity of devices used. The book goes on to address Solutions: which solutions have been proposed, and how can they be used to reach the stated goals and ...

  15. Spatially coded backscatter radiography

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography

  16. Aztheca Code; Codigo Aztheca

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  17. The Coding Question.

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Revised SRAC code system

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  19. Code query by example

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  20. The correspondence between projective codes and 2-weight codes

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points c ∈ C with multiplicity γ(w), where w is the weight of
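The two-weight property itself is easy to verify by machine for a small code. A sketch with a made-up binary [5,2] example whose nonzero codewords take exactly two weights (this is not a code from the paper):

```python
from itertools import product

def nonzero_weights(G):
    """Set of weights of the nonzero codewords of the binary code
    generated by the rows of G."""
    ws = set()
    for msg in product([0, 1], repeat=len(G)):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        w = sum(cw)
        if w:
            ws.add(w)
    return ws

# Made-up binary [5,2] two-weight example: the three nonzero codewords
# 11100, 00111 and 11011 have weights 3, 3 and 4.
G = [[1, 1, 1, 0, 0],
     [0, 0, 1, 1, 1]]
print(sorted(nonzero_weights(G)))  # → [3, 4]
```

A code is 2-weight precisely when this set has size two, which is the hypothesis of the correspondence in the abstract.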

  1. Visualizing code and coverage changes for code review

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  2. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  3. Code of Medical Ethics

    . SZD-SZZ

    2017-03-01

    The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia, and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  4. Supervised Convolutional Sparse Coding

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  5. CONCEPT computer code

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated

  6. Principles of speech coding

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. The book outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  7. Evaluation Codes from an Affine Veriety Code Perspective

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  8. Dual Coding in Children.

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  9. Physical layer network coding

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...

  10. Radioactive action code

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers for alerting emergency services personnel to the presence of radioactive materials has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted, alpha, beta or gamma. The response cards indicate appropriate measures to take, eg for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)

  11. Building Codes and Regulations.

    Fisher, John L.

    The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)

  12. Physics of codes

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  13. Reliability and code level

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  14. Ready, steady… Code!

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  15. CERN Code of Conduct

    Department, HR

    2010-01-01

    The Code is intended as a guide in helping us, as CERN contributors, to understand how to conduct ourselves, treat others and expect to be treated. It is based around the five core values of the Organization. We should all become familiar with it and try to incorporate it into our daily life at CERN.

  16. Nuclear safety code study

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for switching to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)

  17. Revised C++ coding conventions

    Callot, O

    2001-01-01

    This document replaces the note LHCb 98-049 by Pavel Binko. After a few years of practice, some simplification and clarification of the rules was needed. As many more people have now some experience in writing C++ code, their opinion was also taken into account to get a commonly agreed set of conventions

  18. Corporate governance through codes

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  19. Error Correcting Codes -34 ...

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training, ...

  20. Broadcast Coded Slotted ALOHA

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives ...
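As a point of reference for such protocols, the throughput of classical (uncoded) slotted ALOHA can be estimated by simulation; with n·p = 1 it approaches 1/e ≈ 0.368. A sketch of that baseline only (not the B-CSA protocol, which adds repetition coding and successive interference cancellation):

```python
import random

def slotted_aloha_throughput(n_users, p, n_slots=50_000, seed=7):
    """Fraction of slots containing exactly one transmission (a success)
    when each of n_users transmits independently with probability p.
    Slots with zero or multiple transmissions are wasted."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_slots):
        tx = sum(rng.random() < p for _ in range(n_users))
        successes += (tx == 1)
    return successes / n_slots

# With n_users * p = 1 the throughput is close to 1/e ≈ 0.368:
print(slotted_aloha_throughput(50, 0.02))
```

Coded variants such as B-CSA push well past this 1/e ceiling, which is the motivation for the protocol in the abstract.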

  1. Software Defined Coded Networking

    Di Paola, Carla; Roetter, Daniel Enrique Lucani; Palazzo, Sergio

    2017-01-01

    the quality of each link and even across neighbouring links and using simulations to show that an additional reduction of packet transmission in the order of 40% is possible. Second, to advocate for the use of network coding (NC) jointly with software defined networking (SDN) providing an implementation...

  2. New code of conduct

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  3. (Almost) practical tree codes

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  4. Decoding Codes on Graphs

    having a probability P_i of being equal to a 1. Let us assume ... equal to a 0/1 has no bearing on the probability of the. It is often ... bits (call this set S) whose individual bits add up to zero ... In the context of binary error-correcting codes, specifi-.

  5. The Redox Code.

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  6. Z₂-double cyclic codes

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any simultaneous cyclic shift of the coordinates of both subsets leaves the code invariant. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes, giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and its duals, and the relation...
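The defining invariance, shifting the first r and last s coordinates simultaneously, can be tested by brute force on a small example. A sketch with a made-up length-6 code, r = s = 3, whose two halves are the even-weight cyclic codes of length 3 (not a code from the paper):

```python
from itertools import product

def shift(v):
    """One cyclic shift of a tuple."""
    return (v[-1],) + v[:-1]

def span(G):
    """All binary codewords generated by the rows of G."""
    words = set()
    for msg in product([0, 1], repeat=len(G)):
        words.add(tuple(sum(m * g for m, g in zip(msg, col)) % 2
                        for col in zip(*G)))
    return words

def is_double_cyclic(code, r):
    """True if simultaneously shifting the first r and the remaining
    coordinates of every codeword stays inside the code."""
    return all(shift(c[:r]) + shift(c[r:]) in code for c in code)

# Made-up example: each half is the even-weight cyclic code of length 3.
G = [[1, 1, 0, 0, 0, 0],
     [0, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 0],
     [0, 0, 0, 0, 1, 1]]
print(is_double_cyclic(span(G), r=3))  # → True
```

A code failing the check, e.g. the one generated by the single row 100000, shows that the property is not automatic.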

  7. Coding for urologic office procedures.

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Essential idempotents and simplex codes

    Gladys Chalom

    2017-01-01

    We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.
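The simplex codes in the final claim can be built directly: the binary simplex code of length 2^k − 1 has a generator matrix whose columns are all nonzero k-bit vectors, and every nonzero codeword has the same weight 2^(k−1). A quick check (standard construction, independent of the paper's idempotent machinery):

```python
from itertools import product

def simplex_weights(k):
    """Weights of the nonzero codewords of the binary simplex code of
    length 2^k - 1, whose generator matrix has all nonzero k-bit columns."""
    cols = [tuple((i >> b) & 1 for b in range(k)) for i in range(1, 2 ** k)]
    ws = set()
    for msg in product([0, 1], repeat=k):
        if any(msg):
            ws.add(sum(sum(m * c for m, c in zip(msg, col)) % 2
                       for col in cols))
    return ws

print(simplex_weights(3))  # → {4}  (every nonzero codeword has weight 4)
```

The constant-weight property is what makes the simplex code the dual of the Hamming code and ties it to the length condition $n = 2^k - 1$ in the abstract.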

  9. Rate-adaptive BCH codes for distributed source coding

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...

  10. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  11. Ang Social Network sa Facebook ng mga Taga-Batangas at ng mga Taga-Laguna: Isang Paghahambing [The Facebook Social Networks of Batangas and Laguna Residents: A Comparison]

    Jaderick P. Pabico

    2013-12-01

    Full Text Available Online social networking (OSN) has become of great influence to Filipinos, with Facebook, Twitter, LinkedIn, Google+, and Instagram among the most popular services. Their popularity, coupled with their intuitive and interactive use, allows one's personal information such as gender, age, address, relationship status, and list of friends to become publicly available. The accessibility of information from these sites allows, with the aid of computers, for the study of a wide population's characteristics even on a provincial scale. Aside from being neighbouring locales, the respective residents of Laguna and Batangas both derive their livelihoods from two lakes, Laguna de Bay and Taal Lake. Both experience similar problems, such as, among many others, fish kills. The goal of this research is to find similarities in their respective online populations, particularly on Facebook. With the use of computational dynamic social network analysis (CDSNA), we found that the two communities are similar in the following ways, among others: both populations are dominated by single young females; homophily was observed in the choice of friends with respect to age (i.e., friendships were created more often between people whose ages differ by at most five years); and heterophily was observed in the choice of friends with respect to gender (i.e., more friendships were created between a male and a female than between two people of the same gender). This paper also presents the differences in the structure of the two social networks, such as degrees of separation and preferential attachment.

  12. Efficient convolutional sparse coding

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M³N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
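    The frequency-domain linear solve at the heart of this approach can be illustrated in miniature. The sketch below is illustrative only: a single dictionary filter with ridge regularization standing in for the full ADMM sparse-coding step. The coefficient map is recovered by one elementwise division per frequency instead of a large spatial-domain solve:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
d = rng.standard_normal(8)                 # one short dictionary filter (toy)
x_true = np.zeros(N)
x_true[[5, 30, 47]] = [1.0, -2.0, 1.5]     # sparse coefficient map
s = np.real(np.fft.ifft(np.fft.fft(d, N) * np.fft.fft(x_true)))  # s = d * x (circular)

# Ridge-regularised single-filter solve, done entirely in the frequency domain:
#   argmin_x ||d * x - s||^2 + rho ||x||^2  has the closed form
#   X(f) = conj(D(f)) S(f) / (|D(f)|^2 + rho), i.e. one division per frequency.
rho = 1e-8
D, S = np.fft.fft(d, N), np.fft.fft(s)
x_hat = np.real(np.fft.ifft(np.conj(D) * S / (np.abs(D) ** 2 + rho)))

print("max reconstruction error:", float(np.max(np.abs(x_hat - x_true))))
```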

  13. Coded Network Function Virtualization

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  14. The NIMROD Code

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  15. Computer code FIT

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width, and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gaussian or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.) [de]
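    The kind of quantities FIT deduces can be illustrated with a minimal numpy sketch; the synthetic Gaussian line and the moment-based estimates below are a generic illustration, not FIT's actual fitting algorithm:

```python
import numpy as np

# Synthetic spectrum: one Gaussian line on a flat background (toy model).
channels = np.arange(512)
pos, sigma, area, bg = 200.0, 5.0, 1000.0, 2.0
spectrum = bg + area / (sigma * np.sqrt(2 * np.pi)) * np.exp(
    -0.5 * ((channels - pos) / sigma) ** 2)

# Moment-based estimates of position, width and intensity after removing
# the background (estimated here as the median count level).
net = spectrum - np.median(spectrum)
net[net < 0] = 0.0
intensity = net.sum()
centroid = (channels * net).sum() / intensity
width = np.sqrt(((channels - centroid) ** 2 * net).sum() / intensity)

print(f"position ≈ {centroid:.1f}, sigma ≈ {width:.1f}, area ≈ {intensity:.0f}")
```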

  16. Discrete Sparse Coding.

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  17. Code of Practice

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
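    The simplified dose and shielding calculations such a code of practice demonstrates typically rest on the inverse-square law and the half-value-layer relation; the sketch below uses these standard textbook formulas with illustrative numbers, not values quoted from the Code itself:

```python
# Inverse-square law: dose rate from a point source falls off as 1/d^2.
def dose_rate(rate_at_1m_uSv_h, distance_m):
    return rate_at_1m_uSv_h / distance_m**2

# Half-value-layer shielding: each HVL of absorber halves the dose rate.
def shielded(rate_uSv_h, thickness_mm, hvl_mm):
    return rate_uSv_h * 0.5 ** (thickness_mm / hvl_mm)

rate_2m = dose_rate(40.0, 2.0)                 # 40 µSv/h at 1 m -> 10 µSv/h at 2 m
rate_behind_pb = shielded(rate_2m, 12.0, 6.0)  # two HVLs of lead -> one quarter
print(rate_2m, rate_behind_pb)                 # 10.0 2.5
```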

  18. Tokamak simulation code manual

    Chung, Moon Kyoo; Oh, Byung Hoon; Hong, Bong Keun; Lee, Kwang Won

    1995-01-01

    This report illustrates how to use TSC (Tokamak Simulation Code), developed by the Princeton Plasma Physics Laboratory. For the KT-2 tokamak, time-dependent simulation of the axisymmetric toroidal plasma and vertical stability have to be taken into account in the design phase using TSC. This report describes the physical modelling of TSC and illustrates examples of its application at JAERI and SERI, which will be useful when TSC is installed on the KAERI computer system. (Author) 15 refs., 6 figs., 3 tabs

  19. Status of MARS Code

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and a graphical user interface.

  20. Codes of Good Governance

    Beck Jørgensen, Torben; Sørensen, Ditte-Lene

    2013-01-01

    Good governance is a broad concept used by many international organizations to spell out how states or countries should be governed. Definitions vary, but there is a clear core of common public values, such as transparency, accountability, effectiveness, and the rule of law. It is quite likely ..., transparency, neutrality, impartiality, effectiveness, accountability, and legality. The normative context of public administration, as expressed in codes, seems to ignore the New Public Management and Reinventing Government reform movements.

  1. Orthopedics coding and funding.

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken account of in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  2. Code Modernization of VPIC

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  3. MELCOR computer code manuals

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  4. MELCOR computer code manuals

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package

  5. Quality Improvement of MARS Code and Establishment of Code Coupling

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with a regulatory auditing code have been accomplished for the establishment of a regulatory auditing system based on self-reliant technology. A unified auditing system code was also realized by implementing the CANDU-specific models and correlations. As a part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through the exercise of plant applications. An education-training seminar and technology transfer were carried out for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can be utilized as a base technology for GEN IV reactor applications

  6. Design of convolutional tornado code

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet loss protection with lower computational complexity than the tTN code.

  7. Random linear codes in steganography

    Kamil Kaczyński

    2016-12-01

    Full Text Available Syndrome coding using linear codes is a technique that allows improvement in the steganographic algorithm's parameters. The use of random linear codes gives great flexibility in choosing the parameters of the linear code; in parallel, it offers easy generation of the parity check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as a base for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on test images, was made. Keywords: steganography, random linear codes, RLC, LSB
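    The syndrome-coding idea can be sketched concretely. The code below performs matrix embedding with the parity-check matrix of an [8, 2] binary linear code (a systematic-form matrix chosen for illustration, not the paper's specific random code): six message bits are embedded into eight cover LSBs by flipping a minimum-weight error pattern.

```python
import numpy as np

rng = np.random.default_rng(7)

# Parity-check matrix of an [8, 2] binary code in systematic form [I | A]:
# 6 x 8 over GF(2), full rank by construction (illustrative, not the
# paper's specific random code).
H = np.concatenate([np.eye(6, dtype=int),
                    rng.integers(0, 2, size=(6, 2))], axis=1)

def embed(cover_lsbs, message):
    """Change as few of the 8 cover LSBs as possible so that the syndrome
    (H @ lsbs) mod 2 equals the 6-bit message (matrix embedding)."""
    target = (H @ cover_lsbs + message) % 2        # residual syndrome
    best = None
    for pattern in range(256):                      # brute-force coset leader
        e = np.array([(pattern >> i) & 1 for i in range(8)])
        if np.array_equal((H @ e) % 2, target):
            if best is None or e.sum() < best.sum():
                best = e
    return (cover_lsbs + best) % 2

def extract(stego_lsbs):
    return (H @ stego_lsbs) % 2

cover = rng.integers(0, 2, size=8)
msg = rng.integers(0, 2, size=6)
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
print("changed", int((stego != cover).sum()), "of 8 cover LSBs")
```

Because H has full rank, every 6-bit message corresponds to some coset, and the brute-force search returns the lightest representative, minimizing the number of modified pixels.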

  8. Containment Code Validation Matrix

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. 
Along with a test description...

  9. Metal ion displacements in noncentrosymmetric chalcogenides La₃Ga₁.₆₇S₇, La₃Ag₀.₆GaCh₇ (Ch=S, Se), and La₃MGaSe₇ (M=Zn, Cd)

    Iyer, Abishek K. [Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G2G2 (Canada); Yin, Wenlong [Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G2G2 (Canada); Institute of Chemical Materials, China Academy of Engineering Physics, Mianyang 621900 (China); Rudyk, Brent W. [Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G2G2 (Canada); Lin, Xinsong [Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G2G2 (Canada); Centre for Oil Sands Sustainability, Northern Alberta Institute of Technology, Edmonton, Alberta, Canada T6N1E5 (Canada); Nilges, Tom [Department of Chemistry, Technical University of Munich, 85748 Garching b. München (Germany); Mar, Arthur, E-mail: arthur.mar@ualberta.ca [Department of Chemistry, University of Alberta, Edmonton, Alberta, Canada T6G2G2 (Canada)

    2016-11-15

    The quaternary Ga-containing chalcogenides La₃Ag₀.₆GaS₇, La₃Ag₀.₆GaSe₇, La₃ZnGaSe₇, and La₃CdGaSe₇, as well as the related ternary chalcogenide La₃Ga₁.₆₇S₇, were prepared by reactions of the elements at 950 °C. They adopt noncentrosymmetric hexagonal structures (space group P6₃, Z=2) with cell parameters (a=10.2 Å, c=6.1 Å for the sulfides; a=10.6 Å, c=6.4 Å for the selenides) that are largely controlled by the geometrical requirements of one-dimensional stacks of Ga-centered tetrahedra separated by the La atoms. Among these compounds, which share the common formulation La₃M₁₋ₓGaCh₇ (M=Ga, Ag, Zn, Cd; Ch=S, Se), the M atoms occupy sites within a stacking of trigonal antiprisms formed by Ch atoms. The location of the M site varies between extremes with trigonal antiprismatic (CN6) and trigonal planar (CN3) geometry. Partial occupation of these sites and intermediate ones accounts for the considerable versatility of these structures and the occurrence of large metal displacement parameters. The site occupations can be understood in a simple way as being driven by the need to satisfy appropriate bond valence sums for both the M and Ch atoms. Band structure calculations rationalize the substoichiometry observed in the Ag-containing compounds (La₃Ag₀.₆GaS₇, La₃Ag₀.₆GaSe₇) as a response to overbonding. X-ray photoelectron spectroscopy supports the presence of monovalent Ag atoms in these compounds, which are not charge-balanced. - Graphical abstract: Partial occupation of metal atoms in multiple sites accounts for versatility in Ga-containing chalcogenides La₃M₁₋ₓGaCh₇ with noncentrosymmetric hexagonal structures. - Highlights: • La₃M₁₋ₓGaCh₇ (M=Ga, Ag, Zn, Cd; Ch=S, Se) adopt related hexagonal structures. • Large displacements of M atoms originate from partial occupation of multiple

  10. Decoding of concatenated codes with interleaved outer codes

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  11. TASS code topical report. V.1 TASS code technical manual

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    The TASS 1.0 code has been developed at KAERI for initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. The TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea; this can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. The TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). Malfunctions of the control systems, components, and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core, and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analyses for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  12. Construction of new quantum MDS codes derived from constacyclic codes

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path toward addressing the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored before.

  13. Optimization of Nuclear Reactor power Distribution using Genetic Algorithm

    Kim, Hyu Chan

    1996-02-01

    The main purpose of this study is to develop a computer code, named MGA-SCOUPE, which can determine an optimal fuel-loading pattern for a nuclear reactor. The developed code, MGA-SCOUPE, automatically performs searches for globally optimal solutions based upon a modified genetic algorithm (MGA). The optimization goals of MGA-SCOUPE are (1) the minimization of the deviations in the power peaking factors both at BOC and EOC, and (2) the maximization of the average burnup ratio at EOC over the total fuel assemblies. For the reactor core calculation module in MGA-SCOUPE, the SCOUPE code was partially modified and used; it was originally developed at MIT and is currently used at Kyung Hee University. The application of MGA-SCOUPE to the KORI 4-4 Cycle model shows several satisfactory results. Among them, two dominant improvements over the SCOUPE code can be summarized as follows: - MGA-SCOUPE removes the user-dependency problem of SCOUPE in the optimal loading pattern search, so the searching process in MGA-SCOUPE can be easily automated. - The final fuel loading pattern obtained by MGA-SCOUPE shows 25.8% and 18.7% reduced standard deviations of the power peaking factors at BOC and EOC, respectively, and a 45% increased average burnup ratio at EOC compared with those of SCOUPE
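    The search loop that such a code automates can be sketched generically. The toy genetic algorithm below evolves permutations of hypothetical assembly "reactivities" toward a flat power proxy; the representation, fitness function, and operators are illustrative stand-ins, not MGA-SCOUPE's actual objective of peaking-factor deviation and burnup:

```python
import random

random.seed(0)
# Toy stand-in: arrange 8 assemblies (hypothetical "reactivity" values) so
# that neighbouring pairs are as balanced as possible (flat "power" proxy).
FUEL = [1.2, 1.5, 0.8, 1.1, 0.9, 1.4, 1.0, 1.3]

def fitness(pattern):
    pairs = [pattern[i] + pattern[(i + 1) % len(pattern)]
             for i in range(len(pattern))]
    mean = sum(pairs) / len(pairs)
    return -sum((p - mean) ** 2 for p in pairs)     # flatter is fitter

def mutate(pattern):
    p = pattern[:]
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]                         # swap two assemblies
    return p

def crossover(a, b):
    cut = random.randrange(1, len(a))
    head = a[:cut]                                  # order crossover: keep a
    return head + [x for x in b if x not in head]   # prefix, fill from b

pop = [random.sample(FUEL, len(FUEL)) for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                # elitist selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(20)]
best = max(pop, key=fitness)
print("best pattern:", best, "fitness:", round(fitness(best), 4))
```

Elitism makes the best fitness non-decreasing across generations; the real code's fitness would instead combine peaking-factor deviations at BOC/EOC with average burnup.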

  14. Combinatorial neural codes from a mathematical coding theory perspective.

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.

  15. Convolutional coding techniques for data protection

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  16. High Energy Transport Code HETC

    Gabriel, T.A.

    1985-09-01

    The physics contained in the High Energy Transport Code (HETC), in particular the collision models, are discussed. An application using HETC as part of the CALOR code system is also given. 19 refs., 5 figs., 3 tabs

  17. Code stroke in Asturias.

    Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S

    2016-04-01

    Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but less than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We distributed the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists that provide IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  18. Decoding Xing-Ling codes

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  19. WWER reactor physics code applications

    Gado, J.; Kereszturi, A.; Gacs, A.; Telbisz, M.

    1994-01-01

    The coupled steady-state reactor physics and thermohydraulic code system KARATE has been developed and applied for WWER-1000 and WWER-440 operational calculations. The 3-D coupled kinetic code KIKO3D has been developed and validated for WWER-440 accident analysis applications. The coupled kinetic code SMARTA developed by VTT Helsinki has been applied for WWER-440 accident analysis. The paper gives a summary of the experience in code development and application. (authors). 10 refs., 2 tabs., 5 figs

  20. The path of code linting

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as how to avoid error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.

  1. The CORSYS neutronics code system

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross-section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tab

  2. Bar codes for nuclear safeguards

    Keswani, A.N.; Bieber, A.M. Jr.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  3. Bar codes for nuclear safeguards

    Keswani, A.N.; Bieber, A.M.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially-available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  4. Quick response codes in Orthodontics

    Moidin Shakil

    2015-01-01

    Full Text Available Quick response (QR) codes are two-dimensional barcodes that encode a large amount of information. QR codes in orthodontics are an innovative approach in which patient details, radiographic interpretation, and the treatment plan can be encoded. Implementing QR codes in orthodontics will save time, reduce paperwork, and minimize manual effort in the storage and retrieval of patient information during subsequent stages of treatment.

  5. Multiple LDPC decoding for distributed source coding and video coding

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  6. Cinder begin creative coding

    Rijnieks, Krisjanis

    2013-01-01

    Presented in an easy to follow, tutorial-style format, this book will lead you step-by-step through the multi-faceted uses of Cinder. ""Cinder: Begin Creative Coding"" is for people who already have experience in programming. It can serve as a transition from a previous background in Processing, Java in general, JavaScript, openFrameworks, C++ in general or ActionScript to the framework covered in this book, namely Cinder. If you like quick and easy-to-follow tutorials that will let you see progress in less than an hour - this book is for you. If you are searching for a book that will explain al

  7. UNSPEC: revisited (semaphore code)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided

  8. The FLIC conversion codes

    Basher, J.C.

    1965-05-01

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  9. SPRAY code user's report

    Shire, P.R.

    1977-03-01

    The SPRAY computer code has been developed to model the effects of postulated sodium spray release from LMFBR piping within containment chambers. The calculation method utilizes gas convection, heat transfer and droplet combustion theory to calculate the pressure and temperature effects within the enclosure. The applicable range is 0-21 mol percent oxygen and .02-.30 inch droplets with or without humidity. Droplet motion and large sodium surface area combine to produce rapid heat release and pressure rise within the enclosed volume

  10. The FLIC conversion codes

    Basher, J C [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1965-05-15

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  11. Code Generation with Templates

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. Over the last decade, the use of templates has gained a lot of popularity due to the increase of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving the practical use of templates. First, a framework to determine the necessary computational power for the template metalanguage is presen

  12. Order functions and evaluation codes

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy machinery of algebraic geometry.
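
    As a concrete instance of an evaluation code, the sketch below builds the first-order Reed-Muller code RM(1, 3) by evaluating the polynomials 1, x1, x2, x3 at all points of F_2^3. The construction and parameters are standard; the order-function machinery of the paper is not reproduced here.

    ```python
    from itertools import product

    m = 3
    points = list(product((0, 1), repeat=m))          # all points of F_2^3

    # Evaluate the monomials 1, x1, ..., xm at every point: the rows form a
    # generator matrix for RM(1, 3) viewed as an evaluation code.
    generator = [[1] * len(points)] + [
        [p[i] for p in points] for i in range(m)
    ]

    def codeword(message):
        # Linear combination of generator rows over F_2.
        return tuple(
            sum(mi * gi for mi, gi in zip(message, col)) % 2
            for col in zip(*generator)
        )

    messages = list(product((0, 1), repeat=m + 1))
    words = {codeword(msg) for msg in messages}
    min_weight = min(sum(w) for w in words if any(w))
    print(len(words), min_weight)
    ```

    RM(1, 3) is an [8, 4, 4] code: 16 codewords of length 8 with minimum distance 4, which the enumeration above confirms.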

  13. Direct-semidirect (DSD) codes

    Cvelbar, F.

    1999-01-01

    Recent codes for direct-semidirect (DSD) model calculations in the form of answers to a detailed questionnaire are reviewed. These codes include those embodying the classical DSD approach covering only the transitions to the bound states (RAF, HIKARI, and those of the Bologna group), as well as the code CUPIDO++ that also treats transitions to unbound states. (author)

  14. Dual Coding, Reasoning and Fallacies.

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  15. Strongly-MDS convolutional codes

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the

  16. Lattice polytopes in coding theory

    Ivan Soprunov

    2015-05-01

    Full Text Available In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.

  17. Computer codes for safety analysis

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample given, followed by a discussion of the present status and future development plans

  18. Geochemical computer codes. A review

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base consisting of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously while others solve the equations separately from each other. The coupled codes require a large computer capacity and have thus seen as yet limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  19. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.
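
    The per-subband adaptation idea can be illustrated with a simple threshold rule choosing among the five modulation schemes named in the abstract. The SNR thresholds below are assumed values for illustration only, not figures from the paper:

    ```python
    # Illustrative SNR thresholds (dB) for picking a modulation per subband;
    # the numeric thresholds are assumptions, not taken from the paper.
    SCHEMES = [        # (min SNR dB, name, bits per symbol)
        (22.0, "64QAM", 6),
        (16.0, "16QAM", 4),
        (11.0, "8AMPM", 3),
        (6.0,  "QPSK",  2),
        (2.0,  "BPSK",  1),
    ]

    def pick_scheme(snr_db):
        for threshold, name, bits in SCHEMES:
            if snr_db >= threshold:
                return name, bits
        return None, 0          # subband too poor: transmit nothing

    subband_snrs = [25.3, 14.1, 7.8, 1.2]       # example channel estimates
    plan = [pick_scheme(s) for s in subband_snrs]
    throughput = sum(bits for _, bits in plan)   # bits per OFDM symbol
    print(plan, throughput)
    ```

    The paper's contribution is what happens after this step: whether one turbo code covers all subbands or each subband gets its own.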

  20. New quantum codes constructed from quaternary BCH codes

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we firstly study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each different code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  1. Quantum Codes From Cyclic Codes Over The Ring R_2

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, with the restrictions μ^2 = 0, υ^2 = 0, w^2 = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for cyclic codes over R_2 to contain their duals. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R_2 and we give an example of quantum error-correcting codes from cyclic codes over R_2. (paper)

  2. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time and frequency selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10−2) and low (10−4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  3. Converter of a continuous code into the Grey code

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To construct the converter of a continuous code, corresponding to the input signal amplitude, into the Grey code, use is made of the regular alternation of ones and zeroes in each bit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from elements of the 155 series; the continuous-code pulse rate at the converter input is 25 MHz
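
    The regular alternation of ones and zeroes across Grey-code bit positions is what makes the binary-reflected Grey code attractive for counters: successive counts differ in exactly one bit. A minimal software sketch of the conversion (not the hardware converter of the paper):

    ```python
    def to_gray(n):
        # Binary-reflected Grey code: adjacent counts differ in one bit.
        return n ^ (n >> 1)

    def from_gray(g):
        # Inverse: prefix-XOR of the Grey-coded bits recovers the count.
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    codes = [to_gray(n) for n in range(8)]
    print([f"{c:03b}" for c in codes])

    # Successive Grey codes differ in exactly one bit position.
    assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
    assert all(from_gray(to_gray(n)) == n for n in range(4096))
    ```

    The single-bit-change property is what suppresses the transient glitches a plain binary counter would produce at the converter output.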

  4. Supervised Transfer Sparse Coding

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  5. Two-terminal video coding.

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  6. The Art of Readable Code

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  7. Sub-Transport Layer Coding

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2014-01-01

    Packet losses in wireless networks dramatically curbs the performance of TCP. This paper introduces a simple coding shim that aids IP-layer traffic in lossy environments while being transparent to transport layer protocols. The proposed coding approach enables erasure correction while being oblivious to the congestion control algorithms of the utilised transport layer protocol. Although our coding shim is indifferent towards the transport layer protocol, we focus on the performance of TCP when ran on top of our proposed coding mechanism due to its widespread use. The coding shim provides gains...
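
    A drastically simplified sketch of the transport-transparent shim idea: one XOR parity packet per generation lets the receiver repair a single loss without a TCP retransmission. The real shim uses a more capable erasure code; this is only the simplest possible instance, with packet contents chosen arbitrarily.

    ```python
    def xor_bytes(a, b):
        # Byte-wise XOR of two equal-length packets.
        return bytes(x ^ y for x, y in zip(a, b))

    def add_parity(packets):
        # Append one XOR parity packet: any single loss in the generation
        # can then be repaired without retransmission.
        parity = packets[0]
        for p in packets[1:]:
            parity = xor_bytes(parity, p)
        return packets + [parity]

    def recover(received, lost_index):
        # XOR of all surviving packets (parity included) rebuilds the lost one.
        survivors = [p for i, p in enumerate(received) if i != lost_index]
        rebuilt = survivors[0]
        for p in survivors[1:]:
            rebuilt = xor_bytes(rebuilt, p)
        return rebuilt

    data = [b"pkt1", b"pkt2", b"pkt3"]    # one "generation" of equal-size packets
    coded = add_parity(data)
    print("recovered:", recover(coded, 1))
    ```

    Because repair happens below the transport layer, TCP never sees the loss and its congestion control is not triggered, which is the transparency property the abstract emphasises.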

  8. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  9. Polynomial weights and code constructions

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes, ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
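
    The weight-retaining property is easy to check numerically over GF(2) with c = 1. The sketch below verifies it for one arbitrary combination of the polynomials (x + 1)^i; it is a spot check, not a proof:

    ```python
    def poly_mul(a, b):
        # Multiply polynomials over GF(2); coefficient lists, lowest degree first.
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] ^= ai & bj
        return out

    def poly_add(a, b):
        # Add (XOR) polynomials over GF(2), padding to equal length.
        n = max(len(a), len(b))
        a = a + [0] * (n - len(a))
        b = b + [0] * (n - len(b))
        return [x ^ y for x, y in zip(a, b)]

    def weight(p):
        return sum(p)

    # Powers of (x + 1) over GF(2), i.e. (x - c)^i with c = 1.
    powers = [[1]]
    for _ in range(6):
        powers.append(poly_mul(powers[-1], [1, 1]))

    # Combination of (x+1)^2, (x+1)^4, (x+1)^5: its weight must be at least
    # that of the minimum-degree term included, (x+1)^2.
    combo = poly_add(poly_add(powers[2], powers[4]), powers[5])
    assert weight(combo) >= weight(powers[2])   # weight-retaining property
    print(combo, weight(combo))
    ```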

  10. Introduction of SCIENCE code package

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE, and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction of China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computation code of the SCIENCE code package, including a description of the general structure of the package, the coupling relationship between the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  11. Elements of algebraic coding systems

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...

  12. Physical Layer Network Coding

    Fukui, Hironori; Yomo, Hironori; Popovski, Petar

    2013-01-01

    Physical layer network coding (PLNC) has the potential to improve throughput of multi-hop networks. However, most of the works are focused on the simple, three-node model with two-way relaying, not taking into account the fact that there can be other neighboring nodes that can cause/receive interference. The way to deal with this problem in distributed wireless networks is usage of MAC-layer mechanisms that make a spatial reservation of the shared wireless medium, similar to the well-known RTS/CTS in IEEE 802.11 wireless networks. In this paper, we investigate two-way relaying in presence of interfering nodes and usage of spatial reservation mechanisms. Specifically, we introduce a reserved area in order to protect the nodes involved in two-way relaying from the interference caused by neighboring nodes. We analytically derive the end-to-end rate achieved by PLNC considering the impact...

  13. Concatenated quantum codes

    Knill, E.; Laflamme, R.

    1996-07-01

    One main problem for the future of practical quantum computing is to stabilize the computation against unwanted interactions with the environment and imperfections in the applied operations. Existing proposals for quantum memories and quantum channels require gates with asymptotically zero error to store or transmit an input quantum state for arbitrarily long times or distances with fixed error. This report gives a method which has the property that to store or transmit a qubit with maximum error ε requires gates with errors at most cε and storage or channel elements with error at most ε, independent of how long we wish to store the state or how far we wish to transmit it. The method relies on using concatenated quantum codes and hierarchically implemented recovery operations. The overhead of the method is polynomial in the time of storage or the distance of the transmission. Rigorous and heuristic lower bounds for the constant c are given.

  14. Code des baux 2018

    Vial-Pedroletti, Béatrice; Kendérian, Fabien; Chavance, Emmanuelle; Coutan-Lapalus, Christelle

    2017-01-01

    The Code des baux 2018 offers extremely practical, reliable content, up to date as of 1 August 2017. This 16th edition notably incorporates: the decree of 27 July 2017 on the evolution of certain rents in the context of a new letting or a lease renewal, adopted pursuant to article 18 of law no. 89-462 of 6 July 1989; the law of 27 January 2017 on equality and citizenship; the law of 9 December 2016 on transparency, the fight against corruption and the modernisation of economic life; and the law of 18 November 2016 on the modernisation of justice in the 21st century

  15. GOC: General Orbit Code

    Maddox, L.B.; McNeilly, G.S.

    1979-08-01

    GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run

  16. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
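
    A minimal sketch of systematic LDGM encoding as described above: the generator is G = [I | P] with a sparse parity part, so each parity bit is the XOR of a few message bits. The block sizes, sparsity, and message below are arbitrary toy values:

    ```python
    import random

    def make_ldgm(k, m, ones_per_column=3, seed=1):
        # Sparse parity part P of a systematic generator G = [I | P]:
        # each parity column lists the (few) message bits it XORs together.
        rng = random.Random(seed)
        return [rng.sample(range(k), ones_per_column) for _ in range(m)]

    def encode(message, columns):
        parity = [0] * len(columns)
        for j, rows in enumerate(columns):
            for i in rows:
                parity[j] ^= message[i]
        return message + parity       # systematic: message bits sent in the clear

    k, m = 8, 4                       # toy sizes: 8 message bits, 4 parity bits
    cols = make_ldgm(k, m)
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    cw = encode(msg, cols)
    print(cw)
    ```

    The low density of P is what keeps encoding cheap; the concatenation and decoding refinements of the paper address the error floor this simple structure would otherwise exhibit.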

  17. Surface acoustic wave coding for orthogonal frequency coded devices

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices, each producing a different OFC signal having the same number of chips and including a chip offset time delay; an algorithm for assigning OFCs to each device; and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  18. Entanglement-assisted quantum MDS codes from negacyclic codes

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism and can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q + 1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  19. An algebraic approach to graph codes

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter contains a short introduction to coding theory, in which we explain the coding-theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding theory as evaluation codes. Chapter three consists of the introduction to graph-based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph-based codes with a result combining graph-based codes and subfield subcodes. Moreover, some codes in chapter four...

  20. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length that support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase-induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed in simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans with high data rates.

  1. Self-complementary circular codes in coding theory.

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
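
    The frame-retrieval property described above can be illustrated with a toy sketch: the trinucleotide set below is a made-up example, not the circular code X identified in genes, and the function simply tests which of the three frames yields only codons from the set.

```python
# Hypothetical example set standing in for a circular code; NOT the code X
# from the paper.
TOY_CODE = {"AAC", "GTT", "ACC", "GGT", "ATC", "GAT"}

def reading_frame(seq, code=TOY_CODE):
    """Return the frame offset (0, 1, or 2) in which every complete codon
    of `seq` belongs to `code`, or None if no unique such frame exists."""
    hits = []
    for offset in range(3):
        codons = [seq[i:i + 3] for i in range(offset, len(seq) - 2, 3)]
        if codons and all(c in code for c in codons):
            hits.append(offset)
    return hits[0] if len(hits) == 1 else None
```

    For a genuine circular code the paper shows the frame is determined after at most 15 nucleotides; the toy set here makes no such guarantee and is for illustration only.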

  2. Error correcting coding for OTN

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard-decision decoding using product-type codes that cover a single OTN frame or a small number of such frames. In particular, we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  3. Numerical Tokamak Project code comparison

    Waltz, R.E.; Cohen, B.I.; Beer, M.A.

    1994-01-01

    The Numerical Tokamak Project undertook a code comparison using a set of TFTR tokamak parameters. Local radial annulus codes of both gyrokinetic and gyrofluid types were compared for both slab and toroidal case limits assuming ion temperature gradient mode turbulence in a pure plasma with adiabatic electrons. The heat diffusivities were found to be in good internal agreement within ± 50% of the group average over five codes

  4. Ethical codes in business practice

    Kobrlová, Marie

    2013-01-01

    The diploma thesis discusses the issues of ethics and codes of ethics in business. The theoretical part defines basic concepts of ethics, presents its historical development and the methods and tools of business ethics. It also focuses on ethical codes and the area of law and ethics. The practical part consists of a quantitative survey, which provides views of selected business entities of business ethics and the use of codes of ethics in practice.

  5. QR code for medical information uses.

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  6. High Order Modulation Protograph Codes

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
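
    The second lifting stage described above replaces each base-matrix entry with a circulant permutation block. A minimal sketch of that step, assuming a toy base matrix and made-up shift values (real designs choose the lift size and shifts to maximize girth):

```python
import numpy as np

def circulant_lift(base, shifts, Z):
    """Lift a protograph base matrix to a full parity-check matrix.

    Each 1 in `base` becomes a Z x Z circulant permutation matrix (the
    identity cyclically shifted by the matching entry of `shifts`); each 0
    becomes a Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                block = np.roll(np.eye(Z, dtype=int), shifts[i, j], axis=1)
                H[i * Z:(i + 1) * Z, j * Z:(j + 1) * Z] = block
    return H

base = np.array([[1, 1, 0],
                 [0, 1, 1]])      # toy protograph
shifts = np.array([[0, 1, 0],
                   [0, 2, 0]])    # made-up shift values
H = circulant_lift(base, shifts, Z=3)   # 6 x 9 parity-check matrix
```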

  7. Semi-supervised sparse coding

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
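
    The sparse-coding step at the core of such methods can be sketched with plain ISTA (iterative shrinkage-thresholding). This is an illustrative sketch only: the paper's semi-supervised label and manifold terms are omitted, and the function name is ours.

```python
import numpy as np

def ista_sparse_code(x, D, lam=0.1, iters=200):
    """Solve min_s 0.5*||x - D s||^2 + lam*||s||_1 by ISTA:
    a gradient step on the quadratic term followed by soft thresholding."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ s - x)
        s = s - grad / L
        s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)  # soft threshold
    return s
```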

  8. Programming Entity Framework Code First

    Lerman, Julia

    2011-01-01

    Take advantage of the Code First data modeling approach in ADO.NET Entity Framework, and learn how to build and configure a model based on existing classes in your business domain. With this concise book, you'll work hands-on with examples to learn how Code First can create an in-memory model and database by default, and how you can exert more control over the model through further configuration. Code First provides an alternative to the database first and model first approaches to the Entity Data Model. Learn the benefits of defining your model with code, whether you're working with an exis

  9. User manual of UNF code

    Zhang Jingshang

    2001-01-01

    The UNF code (2001 version), written in FORTRAN-90, is developed for calculating fast-neutron reaction data of structural materials with incident energies from about 1 keV up to 20 MeV. The code consists of the spherical optical model, the unified Hauser-Feshbach model, and the exciton model. A manual of the UNF code is available for users. The format of the input parameter files and the output files, as well as the functions of the flags used in UNF code, are introduced in detail, and examples of the format of the input parameter files are given

  10. Semi-supervised sparse coding

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  11. Consensus Convolutional Sparse Coding

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  12. Coded aperture tomography revisited

    Bizais, Y.; Rowe, R.W.; Zubal, I.G.; Bennett, G.W.; Brill, A.B.

    1983-01-01

    Coded aperture (CA) tomography never achieved widespread use in Nuclear Medicine, except for the degenerate case of seven-pinhole tomography (7PHT). However, it enjoys several attractive features (high sensitivity and tomographic ability with a static detector). On the other hand, resolution is usually poor, especially along the depth axis, and the reconstructed volume is rather limited. Arguments are presented justifying the position that CA tomography can be useful for imaging time-varying 3D structures, if its major drawbacks (poor longitudinal resolution and difficulty in quantification) are overcome. Poor results obtained with 7PHT can be explained by both a very limited angular range sampled and a crude modelling of the image formation process. Therefore, improvements can be expected from the use of a dual-detector system, along with a better understanding of its sampling properties and the use of more powerful reconstruction algorithms. Non-overlapping multipinhole plates, because they do not involve a decoding procedure, should be considered first for practical applications. Use of real CA should be considered for cases in which non-overlapping multipinhole plates do not lead to satisfactory solutions. We have been and currently are carrying out theoretical and experimental work, in order to define the factors which limit CA imaging and to propose satisfactory solutions for Dynamic Emission Tomography

  13. Mobile code security

    Ramalingam, Srikumar

    2001-11-01

    A highly secure mobile agent system is very important for a mobile computing environment. The security issues in mobile agent systems comprise protecting mobile hosts from malicious agents, protecting agents from other malicious agents, protecting hosts from other malicious hosts, and protecting agents from malicious hosts. Using traditional security mechanisms, the first three security problems can be solved. Apart from using trusted hardware, very few approaches exist to protect mobile code from malicious hosts. Some of the approaches to solve this problem are the use of trusted computing, computing with encrypted functions, steganography, cryptographic traces, the Seal Calculus, etc. This paper focuses on the simulation of some of these existing techniques in the designed mobile language. Some new approaches to solve the malicious network problem and the agent tampering problem are developed using a public key encryption system and steganographic concepts. The approaches are based on encrypting and hiding the partial solutions of the mobile agents. The partial results are stored and the address of the storage is destroyed as the agent moves from one host to another host. This allows only the originator to make use of the partial results. Through these approaches some of the existing problems are solved.

  14. Consensus Convolutional Sparse Coding

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  15. Coding, cryptography and combinatorics

    Niederreiter, Harald; Xing, Chaoping

    2004-01-01

    It has long been recognized that there are fascinating connections between coding theory, cryptology, and combinatorics. Therefore it seemed desirable to us to organize a conference that brings together experts from these three areas for a fruitful exchange of ideas. We decided on a venue in the Huang Shan (Yellow Mountain) region, one of the most scenic areas of China, so as to provide the additional inducement of an attractive location. The conference was planned for June 2003 with the official title Workshop on Coding, Cryptography and Combinatorics (CCC 2003). Those who are familiar with events in East Asia in the first half of 2003 can guess what happened in the end, namely the conference had to be cancelled in the interest of the health of the participants. The SARS epidemic posed too serious a threat. At the time of the cancellation, the organization of the conference was at an advanced stage: all invited speakers had been selected and all abstracts of contributed talks had been screened by the p...

  16. Consensus Convolutional Sparse Coding

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  17. Computer code abstract: NESTLE

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue-initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., upscatter exists) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed

  18. Joint source-channel coding using variable length codes

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m k ,t k ), where m k is a message generated by the source and t k is a time instant

  19. Interrelations of codes in human semiotic systems.

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., semiosis is actualized. The combinations of these relations produce new relations as new codes are building over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  20. Further Generalisations of Twisted Gabidulin Codes

    Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John

    2017-01-01

    We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.

  1. MARS Code in Linux Environment

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  2. Allele coding in genomic evaluation

    Christensen Ole F

    2011-06-01

    Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype of the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
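
    A minimal numeric sketch of the two allele codings discussed above, the 0/1/2 coding and the centered coding, followed by a VanRaden-style genomic relationship matrix; the genotype matrix is made up, and scaling conventions vary between implementations.

```python
import numpy as np

# Genotype matrix: rows = individuals, columns = markers, entries 0/1/2 =
# copies of the second allele (the common 0/1/2 allele coding).
G = np.array([[0, 1, 2],
              [1, 1, 0],
              [2, 0, 1]], dtype=float)

# Centered allele coding: subtract the column (marker) mean so that the
# regression coefficients average to zero within each marker.
Z = G - G.mean(axis=0)

# A genomic relationship matrix built from the centered coding (one of
# several scalings in use).
GRM = Z @ Z.T / G.shape[1]
```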

  3. MARS Code in Linux Environment

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  4. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2 s, 6 s and 18 s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, multiple-coding and single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5 s, the geometric mean of 2 s and 6 s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  5. Error-correction coding for digital communications

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
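
    As a minimal, self-contained illustration of the single-error-correcting block codes such a text covers (this specific construction is our own example, not taken from the book), here is a systematic Hamming(7,4) encoder with syndrome decoding:

```python
import numpy as np

# Systematic Hamming(7,4): G = [I4 | P], H = [P^T | I3], so G @ H.T = 0 mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    """Encode a 4-bit message into a 7-bit codeword."""
    return (msg @ G) % 2

def decode(word):
    """Correct at most one bit error via syndrome decoding."""
    syndrome = (H @ word) % 2
    if syndrome.any():
        # A single-bit error at position p yields syndrome = column p of H.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                word = word.copy()
                word[pos] ^= 1
                break
    return word[:4]  # systematic code: the message is the first four bits
```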

  6. Distributed space-time coding

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.
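    For intuition, the point-to-point Alamouti scheme, the classic two-antenna space-time block code that distributed space-time coding generalizes to relay nodes, can be sketched as follows; this is a simplified noiseless illustration, not the distributed protocol itself:

```python
import numpy as np

# Alamouti 2x1 space-time block code: two symbols are sent over two time
# slots from two antennas, and simple linear combining at the receiver
# recovers them, scaled by the channel energy |h1|^2 + |h2|^2.
def alamouti_encode(s1, s2):
    # rows = time slots, columns = antennas
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(y1, y2, h1, h2):
    # with noiseless reception this recovers the symbols exactly
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * y1 + h2 * np.conj(y2)) / g
    s2_hat = (np.conj(h2) * y1 - h1 * np.conj(y2)) / g
    return s1_hat, s2_hat
```

In DSTC the two "antennas" are replaced by relay nodes that each transmit one column of such a code matrix, which is what makes the scheme distributed.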

  7. NETWORK CODING BY BEAM FORMING

    2013-01-01

    Network coding by beam forming, for example in single-frequency networks, can help increase spectral efficiency. When network coding by beam forming is combined with user cooperation, further spectral efficiency gains may be achieved. According to certain embodiments, a method...... cooperating with the plurality of user equipment to decode the received data....

  8. Building codes : obstacle or opportunity?

    Alberto Goetzl; David B. McKeever

    1999-01-01

    Building codes are critically important in the use of wood products for construction. The codes contain regulations that are prescriptive or performance related for various kinds of buildings and construction types. A prescriptive standard might dictate that a particular type of material be used in a given application. A performance standard requires that a particular...

  9. Accelerator Physics Code Web Repository

    Zimmermann, Frank; Bellodi, G; Benedetto, E; Dorda, U; Giovannozzi, Massimo; Papaphilippou, Y; Pieloni, T; Ruggiero, F; Rumolo, G; Schmidt, F; Todesco, E; Zotter, Bruno W; Payet, J; Bartolini, R; Farvacque, L; Sen, T; Chin, Y H; Ohmi, K; Oide, K; Furman, M; Qiang, J; Sabbi, G L; Seidl, P A; Vay, J L; Friedman, A; Grote, D P; Cousineau, S M; Danilov, V; Holmes, J A; Shishlo, A; Kim, E S; Cai, Y; Pivi, M; Kaltchev, D I; Abell, D T; Katsouleas, Thomas C; Boine-Frankenheim, O; Franchetti, G; Hofmann, I; Machida, S; Wei, J

    2006-01-01

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  10. LFSC - Linac Feedback Simulation Code

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR (the 'Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  11. Interleaver Design for Turbo Coding

    Andersen, Jakob Dahl; Zyablov, Viktor

    1997-01-01

    By a combination of construction and random search based on a careful analysis of the low weight words and the distance properties of the component codes, it is possible to find interleavers for turbo coding with a high minimum distance. We have designed a block interleaver with permutations...

  12. Code breaking in the pacific

    Donovan, Peter

    2014-01-01

    Covers the historical context and the evolution of the technically complex Allied Signals Intelligence (Sigint) activity against Japan from 1920 to 1945 Describes, explains and analyzes the code breaking techniques developed during the war in the Pacific Exposes the blunders (in code construction and use) made by the Japanese Navy that led to significant US Naval victories

  13. Development status of TUF code

    Liu, W.S.; Tahir, A.; Zaltsgendler

    1996-01-01

    An overview of the important development of the TUF code in 1995 is presented. The development in the following areas is presented: control of round-off error propagation, gas resolution and release models, and condensation induced water hammer. This development is mainly generated from station requests for operational support and code improvement. (author)

  14. Accident consequence assessment code development

    Homma, T.; Togawa, O.

    1991-01-01

    This paper describes the new computer code system, OSCAAR developed for off-site consequence assessment of a potential nuclear accident. OSCAAR consists of several modules which have modeling capabilities in atmospheric transport, foodchain transport, dosimetry, emergency response and radiological health effects. The major modules of the consequence assessment code are described, highlighting the validation and verification of the models. (author)

  15. The nuclear codes and guidelines

    Sonter, M.

    1984-01-01

    This paper considers problems faced by the mining industry when implementing the nuclear codes of practice. Errors of interpretation are likely. A major criticism is that the guidelines to the codes must be seen as recommendations only. They are not regulations. Specific clauses in the guidelines are criticised

  16. Survey of coded aperture imaging

    Barrett, H.H.

    1975-01-01

    The basic principle and limitations of coded aperture imaging for x-ray and gamma cameras are discussed. Current trends include (1) use of time varying apertures, (2) use of ''dilute'' apertures with transmission much less than 50%, and (3) attempts to derive transverse tomographic sections, unblurred by other planes, from coded images
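    The basic principle can be illustrated in one dimension: a point source casts a shifted shadow of the aperture pattern on the detector, and cross-correlating the recorded image with the pattern recovers the source position. The pattern and sizes below are illustrative assumptions:

```python
import numpy as np

# Idealized 1-D coded-aperture sketch.  The pattern is a length-7
# m-sequence, whose cyclic autocorrelation has a single sharp peak, so the
# correlation decode is unambiguous for a single point source.
aperture = np.array([1, 1, 1, 0, 1, 0, 0])

def record(source_pos):
    # shadow cast by a point source: the aperture, cyclically shifted
    return np.roll(aperture, source_pos)

def reconstruct(image):
    # correlate against every shift of the pattern; the peak marks the source
    scores = [np.dot(image, np.roll(aperture, s)) for s in range(len(aperture))]
    return int(np.argmax(scores))
```

Real cameras decode extended objects the same way, which is why aperture patterns with delta-like autocorrelation (e.g. uniformly redundant arrays) matter.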

  17. ACCELERATION PHYSICS CODE WEB REPOSITORY.

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  18. Grassmann codes and Schubert unions

    Hansen, Johan Peder; Johnsen, Trygve; Ranestad, Kristian

    2009-01-01

    We study subsets of Grassmann varieties over a field , such that these subsets are unions of Schubert cycles, with respect to a fixed flag. We study such sets in detail, and give applications to coding theory, in particular for Grassmann codes. For much is known about such Schubert unions with a ...

  19. On Network Coded Filesystem Shim

    Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2017-01-01

    Although network coding has shown the potential to revolutionize networking and storage, its deployment has faced a number of challenges. Usual proposals involve two approaches. First, deploying a new protocol (e.g., Multipath Coded TCP), or retrofitting another one (e.g., TCP/NC) to deliver bene...

  20. Running codes through the web

    Clark, R.E.H.

    2001-01-01

    Dr. Clark presented a report and demonstration of running atomic physics codes through the WWW. The atomic physics data is generated from Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and inversed processes through detailed balance. Samples of Web interfaces, input and output are given in the report

  1. Dynamic benchmarking of simulation codes

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  2. Distributed source coding of video

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side offering shifting processing...... steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from...... the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...
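    The Slepian-Wolf idea of coding with decoder side information can be illustrated with a toy syndrome-binning sketch using a Hamming code; this illustrates the principle, not a DVC codec:

```python
import numpy as np

# Slepian-Wolf style syndrome coding: instead of sending the 7-bit source
# word x, the encoder sends only its 3-bit Hamming syndrome.  The decoder,
# holding a side-information word y that differs from x in at most one bit,
# recovers x exactly from those 3 bits.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def sw_encode(x):
    return H @ x % 2                      # 3 bits transmitted instead of 7

def sw_decode(syndrome, y):
    err = (H @ y + syndrome) % 2          # syndrome of the difference x ^ y
    x_hat = y.copy()
    for i in range(7):
        if np.array_equal(H[:, i], err):  # locate the single differing bit
            x_hat[i] ^= 1
            break
    return x_hat
```

In DVC the side information y is the decoder's motion-compensated prediction of the current frame, and the correlation between x and y replaces the one-bit-difference assumption made here.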

  3. Non-Protein Coding RNAs

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  4. Reliability-Based Code Calibration

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....
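    The link between reliability analysis and failure probability can be sketched for the simplest linear limit state; the parameter values are illustrative, and FORM reduces to this closed form only in the linear Gaussian case:

```python
from math import erf, sqrt

# For the limit state g = R - S with independent normal resistance R and
# load S, the reliability index is beta = (muR - muS) / sqrt(sR^2 + sS^2)
# and the failure probability is Pf = Phi(-beta).
def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def reliability_index(mu_r, s_r, mu_s, s_s):
    return (mu_r - mu_s) / sqrt(s_r ** 2 + s_s ** 2)

beta = reliability_index(mu_r=10.0, s_r=1.0, mu_s=6.0, s_s=1.0)
p_f = phi(-beta)
```

Code calibration then runs this logic in reverse: partial safety factors are tuned until designs satisfying the code achieve a target beta.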

  5. What Froze the Genetic Code?

    Lluís Ribas de Pouplana

    2017-04-01

    Full Text Available The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  6. Verification of reactor safety codes

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  7. What Froze the Genetic Code?

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation to the reason why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  8. Tristan code and its application

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http://www.physics.rutger.edu/~kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation model (GGCM) with predictive capability (for the Space Weather Program) is discussed.

  9. High efficiency video coding coding tools and specification

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  10. Detecting non-coding selective pressure in coding regions

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they have generally escaped detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.
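    The entropy score mentioned above can be illustrated in its simplest form, the Shannon entropy of a codon column across orthologous sequences; unusually low entropy at otherwise synonymous positions is the kind of signal the method looks for (the paper's mixture-of-substitution-models machinery is not reproduced here):

```python
from collections import Counter
from math import log2

# Shannon entropy (in bits) of one codon column across orthologs: 0 when
# every species has the same codon, maximal when all codons are distinct.
def codon_entropy(column):
    counts = Counter(column)
    n = len(column)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

For example, four identical GGA codons score 0 bits, while the four synonymous glycine codons GGA/GGC/GGT/GGG score 2 bits; strong conservation of the third position despite synonymy lowers the entropy below the neutral expectation.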

  11. Coding for effective denial management.

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    Nearly everyone will agree that accurate and consistent coding of diagnoses and procedures is the cornerstone for operating a compliant practice. The CPT or HCPCS procedure code tells the payor what service was performed and also (in most cases) determines the amount of payment. The ICD-9-CM diagnosis code, on the other hand, tells the payor why the service was performed. If the diagnosis code does not meet the payor's criteria for medical necessity, all payment for the service will be denied. Implementation of an effective denial management program can help "stop the bleeding." Denial management is a comprehensive process that works in two ways. First, it evaluates the cause of denials and takes steps to prevent them. Second, denial management creates specific procedures for refiling or appealing claims that are initially denied. Accurate, consistent and compliant coding is key to both of these functions. The process of proactively managing claim denials also reveals a practice's administrative strengths and weaknesses, enabling radiology business managers to streamline processes, eliminate duplicated efforts and shift a larger proportion of the staff's focus from paperwork to servicing patients--all of which are sure to enhance operations and improve practice management and office morale. Accurate coding requires a program of ongoing training and education in both CPT and ICD-9-CM coding. Radiology business managers must make education a top priority for their coding staff. Front office staff, technologists and radiologists should also be familiar with the types of information needed for accurate coding. A good staff training program will also cover the proper use of Advance Beneficiary Notices (ABNs). Registration and coding staff should understand how to determine whether the patient's clinical history meets criteria for Medicare coverage, and how to administer an ABN if the exam is likely to be denied. Staff should also understand the restrictions on use of

  12. ESCADRE and ICARE code systems

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program follows two parallel approaches: the first deals with ''integral codes'', which are designed to give immediate engineering answers; the second follows a more mechanistic path, in order to provide the capability for detailed analysis of experiments, a better understanding of the scaling problem, and greater confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermalhydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The ''integral'' ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  13. The ZPIC educational code suite

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to be run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
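    The particle update at the heart of any PIC code can be sketched in a few lines; this is shown in Python for illustration (ZPIC itself is written in C), with the field solve and charge deposition omitted and the field prescribed:

```python
import numpy as np

# Leapfrog particle push in a prescribed electric field, normalized units.
# In a full PIC cycle this step sits between interpolating the grid fields
# to the particles and depositing the particle charge back onto the grid.
def leapfrog_push(x, v, e_field, dt, q_over_m=-1.0):
    """Advance velocities (staggered half a step behind positions in a
    proper leapfrog scheme) and then stream the positions."""
    v = v + q_over_m * e_field(x) * dt   # accelerate in the local field
    x = x + v * dt                       # then stream with the new velocity
    return x, v
```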

  14. Stability analysis by ERATO code

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi

    1979-12-01

    Problems in MHD stability calculations with the ERATO code are described, concerning the convergence properties of results, equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity on a convergence curve is not due to a fault of the ERATO code itself but to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU alone, showing the effectiveness of a vector processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)

  15. ETR/ITER systems code

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  16. ETR/ITER systems code

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs

  17. LFSC - Linac Feedback Simulation Code

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR (the 'Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  18. Coded communications with nonideal interleaving

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channels - feature increasing capacity but decreasing cutoff rate as the memory rate increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) was introduced, which expresses a bound on the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel was considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of an asymmetrical background noise and a reduced collision error rate were studied, as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes was examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft decision diversity combining technique is introduced.

  19. Coding and decoding for code division multiple user communication systems

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
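    The separation-by-correlation idea behind code division multiple access can be sketched with direct-sequence spreading and orthogonal Walsh codes; the paper's algorithm targets frequency-hopped signals, so this is a simplified illustration of the same principle:

```python
import numpy as np

# Two users spread their +1/-1 bits with orthogonal Walsh codes; the
# receiver recovers each user's bit from the composite signal by
# correlating with that user's code.
walsh = {1: np.array([1, 1, 1, 1]),
         2: np.array([1, -1, 1, -1])}

def transmit(bits_by_user):
    """Each user sends bit * code; the channel adds the signals."""
    return sum(b * walsh[u] for u, b in bits_by_user.items())

def despread(composite, user):
    """Correlate with the user's code; the sign gives the bit."""
    return 1 if composite @ walsh[user] > 0 else -1
```

Because the codes are orthogonal, each user's contribution correlates to zero against the other user's code, so the composite signal separates cleanly.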

  20. GAMERA - The New Magnetospheric Code

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high-order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure: multi-threaded OpenMP with an overarching MPI layer for large-scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single-processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter, and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.

  1. SCALE Code System

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  2. Code-Mixing and Code Switchingin The Process of Learning

    Diyah Atiek Mustikawati

    2016-09-01

    This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing those forms of code switching and code mixing. This research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School Ponorogo. Based on the analysis and discussion stated in the previous chapter, the forms of code mixing and code switching in learning activities in Al Mawaddah Boarding School alternate among the Javanese, Arabic, English, and Indonesian languages, in the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. Deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. Deciding factors for code switching in the learning process include: the speaker (O1), the speech partner (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah Boarding School, with regard to the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  3. Neural Decoder for Topological Codes

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
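
    The Boltzmann-machine decoder itself is beyond a short sketch, but the syndrome data such a decoder consumes is easy to illustrate. Below is a minimal sketch, under our own conventions for edge indexing and stabilizer layout (not the paper's), of extracting the star (vertex) syndrome produced by phase-flip (Z) errors on an L x L toric code with qubits on lattice edges.

```python
L = 4  # linear lattice size; the torus carries 2*L*L edge qubits

def star_syndrome(z_h, z_v):
    """Star (vertex) X-stabilizer outcomes for phase-flip errors.

    z_h[i][j] = 1 if the horizontal edge east of vertex (i, j) has a Z error;
    z_v[i][j] = 1 if the vertical edge south of vertex (i, j) has a Z error.
    Each star records the parity of Z errors on its four incident edges.
    """
    return [[z_h[i][j] ^ z_h[i][(j - 1) % L]
             ^ z_v[i][j] ^ z_v[(i - 1) % L][j]
             for j in range(L)] for i in range(L)]

# a single phase-flip error lights exactly the two stars at its endpoints
z_h = [[0] * L for _ in range(L)]
z_v = [[0] * L for _ in range(L)]
z_h[0][0] = 1
syn = star_syndrome(z_h, z_v)
```

    A decoder, neural or otherwise, maps such syndrome patterns back to a likely equivalence class of errors.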

  4. Coding chaotic billiards. Pt. 3

    Ullmo, D.; Giannoni, M.J.

    1993-01-01

    A non-tiling compact billiard defined on the pseudosphere is studied 'à la Morse coding'. As for most bounded systems, the coding is not exact. However, two sets of approximate grammar rules can be obtained, one specifying forbidden codes and the other allowed ones. In between, some sequences remain in the 'unknown' zone, but their relative amount can be reduced to zero if one lets the length of the approximate grammar rules go to infinity. The relationship between these approximate grammar rules and the 'pruning front' introduced by Cvitanovic et al. is discussed. (authors). 13 refs., 10 figs., 1 tab

  5. Iterative nonlinear unfolding code: TWOGO

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation in terms of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements which were performed inside the containments of pressurized water reactors
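
    TWOGO's two alternating schemes are not spelled out in this summary. As a stand-in, the sketch below shows the general shape of one multiplicative iterative unfolding step (an ML-EM-style update, our choice for illustration): the group fluxes are rescaled by the ratio of measured to calculated sphere responses until the responses agree. The response matrix and spectrum here are made-up toy numbers.

```python
# toy problem: 3 Bonner spheres, 4 energy groups (all numbers invented)
R = [[1.0, 0.5, 0.2, 0.1],   # R[i][j]: response of sphere i to group j
     [0.3, 1.0, 0.6, 0.2],
     [0.1, 0.4, 1.0, 0.8]]
true_phi = [2.0, 1.0, 3.0, 0.5]
M = [sum(R[i][j] * true_phi[j] for j in range(4)) for i in range(3)]

phi = [1.0] * 4  # flat starting spectrum
for _ in range(2000):
    # calculated sphere responses for the current spectrum estimate
    C = [sum(R[i][j] * phi[j] for j in range(4)) for i in range(3)]
    # multiplicative correction: measured-to-calculated response ratios
    phi = [phi[j] * sum(R[i][j] * M[i] / C[i] for i in range(3))
                  / sum(R[i][j] for i in range(3))
           for j in range(4)]

C = [sum(R[i][j] * phi[j] for j in range(4)) for i in range(3)]
residual = max(abs(C[i] - M[i]) / M[i] for i in range(3))
```

    With 3 spheres and 4 groups the problem is underdetermined, so the iteration reproduces the measured responses rather than the true spectrum; this is exactly why real unfolding codes lean on a priori spectra and termination criteria like the ones described above.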

  6. Atlas C++ Coding Standard Specification

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, that should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group and feedback from the collaboration was taken into account in the "current" version.

  7. Writing the Live Coding Book

    Blackwell, Alan; Cox, Geoff; Lee, Sang Wong

    2016-01-01

    This paper is a speculation on the relationship between coding and writing, and the ways in which technical innovations and capabilities enable us to rethink each in terms of the other. As a case study, we draw on recent experiences of preparing a book on live coding, which integrates a wide range of personal, historical, technical and critical perspectives. This book project has been both experimental and reflective, in a manner that allows us to draw on critical understanding of both code and writing, and point to the potential for new practices in the future.

  8. LiveCode mobile development

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step-by-step through 10 individual projects. Every project is divided into sub-tasks to make learning more organized and easy to follow, with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  9. Network Coding Fundamentals and Applications

    Medard, Muriel

    2011-01-01

    Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications.

  10. Linear network error correction coding

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an...

  11. Tree Coding of Bilevel Images

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional ... is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding...

  12. Studies on DANESS Code Modeling

    Jeong, Chang Joon

    2009-09-01

    The DANESS code modeling study has been performed. DANESS code is widely used in a dynamic fuel cycle analysis. Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for the Korean national nuclear fuel cycle scenario analysis. In this report, the important models such as Energy-demand scenario model, New Reactor Capacity Decision Model, Reactor and Fuel Cycle Facility History Model, and Fuel Cycle Model are investigated. And, some models in the interface module are refined and inserted for Korean nuclear fuel cycle model. Some application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios

  13. Beam-dynamics codes used at DARHT

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production – Tricomp Trak orbit tracking code, LSP Particle in cell (PIC) code, for beam transport and acceleration – XTR static envelope and centroid code, LAMDA time-resolved envelope and centroid code, LSP-Slice PIC code, for coasting-beam transport to target – LAMDA time-resolved envelope code, LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.

  14. Introduction to coding and information theory

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
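
    As a concrete taste of the specific codes the closing chapters cover, here is a short sketch of the classic (7,4) Hamming code with parity bits in positions 1, 2 and 4; the syndrome directly names the position of a single-bit error. This is our illustration of the standard construction, not material from the book.

```python
def hamming74_encode(d):
    """Encode 4 data bits; parity bits sit at positions 1, 2 and 4."""
    d3, d5, d6, d7 = d
    p1 = d3 ^ d5 ^ d7   # checks positions 1, 3, 5, 7
    p2 = d3 ^ d6 ^ d7   # checks positions 2, 3, 6, 7
    p4 = d5 ^ d6 ^ d7   # checks positions 4, 5, 6, 7
    return [p1, p2, d3, p4, d5, d6, d7]

def hamming74_decode(c):
    """Correct up to one bit error; the syndrome is the error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4  # 0 means no detected error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

cw = hamming74_encode([1, 0, 1, 1])
cw[5] ^= 1  # flip one bit in the channel; the decoder repairs it
```

    Because the parity checks partition the positions by the binary digits of their index, the three syndrome bits spell out the error location in binary.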

  15. Allegheny County Zip Code Boundaries

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the zip code boundaries that lie within Allegheny County. If viewing this description on the Western Pennsylvania Regional Data Center’s open...

  16. Ultrasound imaging using coded signals

    Misaridis, Athanasios

    Modulated (or coded) excitation signals can potentially improve the quality and increase the frame rate in medical ultrasound scanners. The aim of this dissertation is to investigate systematically the applicability of modulated signals in medical ultrasound imaging and to suggest appropriate methods for coded imaging, with the goal of making better anatomic and flow images and three-dimensional images. On the first stage, it investigates techniques for doing high-resolution coded imaging with improved signal-to-noise ratio compared to conventional imaging. Subsequently it investigates how coded excitation can be used for increasing the frame rate. The work includes both simulated results using Field II, and experimental results based on measurements on phantoms as well as clinical images. Initially a mathematical foundation of signal modulation is given. Pulse compression based...

  17. National Tribal Building Codes Summit

    The National Tribal Building Codes Summit statement was developed to support tribes interested in adopting green and culturally-appropriate building systems to ensure safe, sustainable, affordable, and culturally-appropriate buildings on tribal lands.

  18. Tracking Code for Microwave Instability

    Heifets, S.; SLAC

    2006-01-01

    To study the microwave instability, a tracking code was developed. For benchmarking, results are compared with the Oide-Yokoya results [1] for a broad-band Q = 1 impedance. The results hint at two possible mechanisms determining the threshold of instability

  19. CRUCIB: an axisymmetric convection code

    Bertram, L.A.

    1975-03-01

    The CRUCIB code was written in support of an experimental program aimed at measurement of thermal diffusivities of refractory liquids. Precise values of diffusivity are necessary to realistic analysis of reactor safety problems, nuclear waste disposal procedures, and fundamental metal forming processes. The code calculates the axisymmetric transient convective motions produced in a right circular cylindrical crucible, which is surface heated by an annular heat pulse. Emphasis of this report is placed on the input-output options of the CRUCIB code, which are tailored to assess the importance of the convective heat transfer in determining the surface temperature distribution. Use is limited to Prandtl numbers less than unity; larger values can be accommodated by replacement of a single block of the code, if desired. (U.S.)

  20. QUIL: a chemical equilibrium code

    Lunsford, J.L.

    1977-02-01

    A chemical equilibrium code QUIL is described, along with two support codes FENG and SURF. QUIL is designed to allow calculations on a wide range of chemical environments, which may include surface phases. QUIL was written specifically to calculate distributions associated with complex equilibria involving fission products in the primary coolant loop of the high-temperature gas-cooled reactor. QUIL depends upon an energy-data library called ELIB. This library is maintained by FENG and SURF. FENG enters into the library all reactions having standard free energies of reaction that are independent of concentration. SURF enters all surface reactions into ELIB. All three codes are interactive codes written to be used from a remote terminal, with paging control provided. Plotted output is also available

  1. Electronic Code of Federal Regulations

    National Archives and Records Administration — The Electronic Code of Federal Regulations (e-CFR) is the codification of the general and permanent rules published in the Federal Register by the executive...

  2. Zip Codes - MDC_WCSZipcode

    NSGIC Local Govt | GIS Inventory — The WCSZipcode polygon feature class was created by Miami-Dade Enterprise Technology Department to be used in the WCS batch jobs to assign the actual zip code of...

  3. The intercomparison of aerosol codes

    Dunbar, I.H.; Fermandjian, J.; Gauvain, J.

    1988-01-01

    The behavior of aerosols in a reactor containment vessel following a severe accident could be an important determinant of the accident source term to the environment. Various processes result in the deposition of the aerosol onto surfaces within the containment, from where they are much less likely to be released. Some of these processes are very sensitive to particle size, so it is important to model the aerosol growth processes: agglomeration and condensation. A number of computer codes have been written to model growth and deposition processes. They have been tested against each other in a series of code comparison exercises. These exercises have investigated sensitivities to physical and numerical assumptions and have also proved a useful means of quality control for the codes. Various exercises in which code predictions are compared with experimental results are now under way

  4. Automatic code generation in practice

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  5. Squares of Random Linear Codes

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

    Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer, for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise...
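
    The componentwise (Schur) product underlying this question is easy to experiment with over GF(2), where the product of two vectors is a bitwise AND. The sketch below is our own illustration (the paper studies random codes; we use the [7,4] Hamming code for concreteness): it spans the square of a code by products of basis rows and measures the dimension by Gaussian elimination.

```python
def rank_gf2(rows):
    """Rank over GF(2) of binary vectors encoded as Python ints."""
    pivots = {}
    rank = 0
    for r in rows:
        while r:
            msb = r.bit_length() - 1
            if msb in pivots:
                r ^= pivots[msb]   # reduce by the existing pivot row
            else:
                pivots[msb] = r
                rank += 1
                break
    return rank

def square_dimension(gens):
    """dim of span{c * c'}: * is the componentwise (AND) product.

    By bilinearity the square is spanned by products of basis rows,
    including self-products (which give back C itself over GF(2)).
    """
    prods = [g & h for a, g in enumerate(gens) for h in gens[a:]]
    return rank_gf2(prods)

# generator rows of the [7,4] Hamming code (length 7, dimension 4)
hamming = [0b1110000, 0b1001100, 0b0101010, 0b1101001]
```

    Even this tiny code illustrates the phenomenon in the abstract: its square already fills all of $\mathbb{F}_2^7$.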

  6. Cost reducing code implementation strategies

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

    Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; and assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciates the opportunity provided by the Korea Atomic Industrial Forum and Korean Nuclear Society to share our ideas and enhance global cooperation through the exchange of information and views on relevant topics

  7. Adaptive decoding of convolutional codes

    K. Hueske

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
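
    The Viterbi baseline the syndrome decoder is compared against can be sketched compactly. Below is a minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code with generators (7, 5) in octal; this is our illustration, not the authors' implementation, and it stores full paths rather than using a traceback buffer for clarity.

```python
G = (0b111, 0b101)  # generator polynomials (7, 5) octal, constraint length 3

def parity(x):
    return bin(x).count("1") & 1

def conv_encode(bits):
    """Rate-1/2 encoder; the state holds the two most recent input bits."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out.append((parity(reg & G[0]), parity(reg & G[1])))
        state = (reg >> 1) & 0b11
    return out

def viterbi_decode(symbols, nbits):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]        # start in the all-zero state
    path = [[] for _ in range(4)]
    for sym in symbols:
        new_metric = [INF] * 4
        new_path = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                # Hamming distance between expected and received symbol
                cost = sum(parity(reg & g) != r for g, r in zip(G, sym))
                ns = (reg >> 1) & 0b11
                m = metric[s] + cost
                if m < new_metric[ns]:   # keep the survivor path
                    new_metric[ns] = m
                    new_path[ns] = path[s] + [b]
        metric, path = new_metric, new_path
    best = min(range(4), key=lambda s: metric[s])
    return path[best][:nbits]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = conv_encode(msg + [0, 0])         # two tail bits flush the encoder
rx = [list(s) for s in tx]
rx[3][0] ^= 1                          # inject one channel bit error
decoded = viterbi_decode(rx, len(msg))
```

    Note that the per-symbol work is fixed by the trellis size regardless of how many errors occurred, which is exactly the complexity property the syndrome-based alternative above seeks to relax.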

  8. Covariance data processing code. ERRORJ

    Kosako, Kazuaki

    2001-01-01

    The covariance data processing code, ERRORJ, was developed to process the covariance data of JENDL-3.2. ERRORJ has the processing functions of covariance data for cross sections including resonance parameters, angular distribution and energy distribution. (author)

  9. Multimedia signal coding and transmission

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  10. NESTLE: A nodal kinetics code

    Al-Chalabi, R.M.; Turinsky, P.J.; Faure, F.-X.; Sarsour, H.N.; Engrand, P.R.

    1993-01-01

    The NESTLE nodal kinetics code has been developed for utilization as a stand-alone code for steady-state and transient reactor neutronic analysis and for incorporation into system transient codes, such as TRAC and RELAP. The latter is desirable to increase the simulation fidelity over that obtained from currently employed zero- and one-dimensional neutronic models and now feasible due to advances in computer performance and efficiency of nodal methods. As a stand-alone code, requirements are that it operate on a range of computing platforms from memory-limited personal computers (PCs) to supercomputers with vector processors. This paper summarizes the features of NESTLE that reflect the utilization and requirements just noted

  11. Description of the COMRADEX code

    Spangler, G.W.; Boling, M.; Rhoades, W.A.; Willis, C.A.

    1967-01-01

    The COMRADEX Code is discussed briefly and instructions are provided for the use of the code. The subject code was developed for calculating doses from hypothetical power reactor accidents. It permits the user to analyze four successive levels of containment with time-varying leak rates. Filtration, cleanup, fallout and plateout in each containment shell can also be analyzed. The doses calculated include the direct gamma dose from the containment building, the internal doses to as many as 14 organs including the thyroid, bone, lung, etc. from inhaling the contaminated air, and the external gamma doses from the cloud. While further improvements are needed, such as a provision for calculating doses from fallout, rainout and washout, the present code capabilities have a wide range of applicability for reactor accident analysis

  12. Adaptive decoding of convolutional codes

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.

  13. The Minimum Distance of Graph Codes

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.
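
    For small codes, minimum distances like those bounded here can be checked directly by brute force: enumerate every nonzero linear combination of the generator rows and take the minimum Hamming weight. The generic sketch below is our illustration, not the authors' method, and is only feasible for small dimensions.

```python
from itertools import product

def min_distance(gens):
    """Brute-force minimum distance of a binary linear code.

    gens: generator rows encoded as Python ints. The enumeration is
    exponential in the dimension, so this suits only small codes.
    """
    best = None
    for coeffs in product((0, 1), repeat=len(gens)):
        if not any(coeffs):
            continue  # skip the zero codeword
        cw = 0
        for c, g in zip(coeffs, gens):
            if c:
                cw ^= g   # add the selected generator over GF(2)
        w = bin(cw).count("1")
        best = w if best is None else min(best, w)
    return best
```

    For example, the [7,4] Hamming code comes out with minimum distance 3, as expected.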

  14. Verification of ONED90 code

    Chang, Jong Hwa; Lee, Ki Bog; Zee, Sung Kyun; Lee, Chang Ho

    1993-12-01

    ONED90, developed by KAERI, is a 1-dimensional 2-group diffusion theory code. For nuclear design and reactor simulation, the usage of ONED90 encompasses core follow calculation, load follow calculation, plant power control simulation, xenon oscillation simulation, control rod maneuvering, etc. In order to verify the validity of the ONED90 code, two well-known benchmark problems were solved; ONED90 shows results very similar to the reference solutions. (Author) 11 refs., 5 figs., 13 tabs

  15. Summary of Code of Ethics.

    Eklund, Kerri

    2016-01-01

    The Guide to the Code of Ethics for Nurses is an excellent guideline for all nurses regardless of their area of practice. I greatly enjoyed reading the revisions in place within the 2015 edition and refreshing my nursing conscience. I plan to always keep my Guide to the Code of Ethics for Nurses near in order to keep my moral compass from veering off the path of quality care.

  16. UNIX code management and distribution

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process

  17. Language Recognition via Sparse Coding

    2016-09-08

    explanation is that sparse coding can achieve a near-optimal approximation of a much more complicated nonlinear relationship through local and piecewise linear ... training examples, where x(i) ∈ R^N is the ith example in the batch. Optionally, X can be normalized and whitened before sparse coding for better results ... normalized input vectors are then ZCA-whitened [20]. Empirically, we choose ZCA-whitening over PCA-whitening, and there is no dimensionality reduction
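
    The ZCA-whitening step mentioned in these excerpts is a standard preprocessing transform and is easy to reproduce. The sketch below is our own, with an assumed eps regularizer for numerical stability; it is not the report's code.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten X (rows = examples): decorrelate the features to unit
    variance while staying as close as possible to the original axes."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    evals, evecs = np.linalg.eigh(cov)
    # symmetric whitening matrix: E diag(1/sqrt(l+eps)) E^T
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W

rng = np.random.default_rng(0)
# correlated synthetic data: isotropic noise through a mixing matrix
X = rng.normal(size=(2000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                           [0.0, 1.0, 0.3],
                                           [0.0, 0.0, 0.5]])
Xw = zca_whiten(X)
cov_w = Xw.T @ Xw / Xw.shape[0]  # approximately the identity matrix
```

    Unlike PCA-whitening, the symmetric transform keeps each whitened feature aligned with its original counterpart and performs no dimensionality reduction, which matches the choice described in the excerpt.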

  18. System Based Code: Principal Concept

    Yasuhide Asada; Masanori Tashimo; Masahiro Ueta

    2002-01-01

    This paper introduces the concept of the 'System Based Code', which was initially proposed by the authors with the intention of giving the nuclear industry a leap of progress in system reliability, performance improvement, and cost reduction. The concept of the System Based Code is to provide a theoretical procedure to optimize the reliability of the system by administrating every related engineering requirement throughout the life of the system from design to decommissioning. (authors)

  19. The RETRAN-03 computer code

    Paulsen, M.P.; McFadden, J.H.; Peterson, C.E.; McClure, J.A.; Gose, G.C.; Jensen, P.J.

    1991-01-01

    The RETRAN-03 code development effort is designed to overcome the major theoretical and practical limitations associated with the RETRAN-02 computer code. The major objectives of the development program are to extend the range of analyses that can be performed with RETRAN, to make the code more dependable and faster running, and to have a more transportable code. The first two objectives are accomplished by developing new models and adding other models to the RETRAN-02 base code. The major model additions for RETRAN-03 are as follows: implicit solution methods for the steady-state and transient forms of the field equations; additional options for the velocity difference equation; a new steady-state initialization option for computing low-power steam generator initial conditions; models for nonequilibrium thermodynamic conditions; and several special-purpose models. The source code and the environmental library for RETRAN-03 are written in standard FORTRAN 77, which allows the last objective to be fulfilled. Some models in RETRAN-02 have been deleted in RETRAN-03. In this paper the changes between RETRAN-02 and RETRAN-03 are reviewed

  20. Fuel performance analysis code 'FAIR'

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.; Mahajan, S.C.; Kakodkar, A.

    1994-01-01

    For modelling nuclear reactor fuel rod behaviour of water cooled reactors under severe power maneuvering and high burnups, a mechanistic fuel performance analysis code FAIR has been developed. The code incorporates a finite element based thermomechanical module, a physically based fission gas release module and relevant models for fuel related phenomena, such as pellet cracking, densification and swelling, radial flux redistribution across the pellet due to the build up of plutonium near the pellet surface, and pellet clad mechanical interaction/stress corrosion cracking (PCMI/SCC) failure of the sheath. The code follows the established principles of fuel rod analysis programmes, such as coupling of thermal and mechanical solutions along with the fission gas release calculations, analysing different axial segments of the fuel rod simultaneously, and providing means for performing local analyses such as clad ridging analysis. The modular nature of the code offers flexibility in making modifications to the code easily, for modelling MOX fuels and thorium based fuels. For performing analyses of fuel rods subjected to very long power histories within a reasonable amount of time, the code has been parallelised and is commissioned on the ANUPAM parallel processing system developed at Bhabha Atomic Research Centre (BARC). (author). 37 refs

  1. Tribal Green Building Administrative Code Example

    This Tribal Green Building Administrative Code Example can be used as a template for technical code selection (i.e., building, electrical, plumbing, etc.) to be adopted as a comprehensive building code.

  2. NOAA Weather Radio - EAS Event Codes


  3. Geographic data: Zip Codes (Shape File)

    Montgomery County of Maryland — This dataset contains all zip codes in Montgomery County. Zip codes are the postal delivery areas defined by USPS. Zip codes with mailboxes only are not included. As...

  4. Serial-data correlator/code translator

    Morgan, L. E.

    1977-01-01

    A system consisting of a sampling flip-flop, a memory (either RAM or ROM), and a memory buffer correlates sampled data with predetermined acceptance code patterns, translates acceptable code patterns to nonreturn-to-zero code, and identifies data dropouts.
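
    The memory-lookup idea in this record is easy to sketch in software. The Python fragment below is an illustration, not the original hardware design: the 4-bit acceptance patterns and their NRZ meanings are made up, and the memory is modeled as a dictionary.

```python
# Hypothetical 4-bit acceptance patterns -> NRZ bit (values are illustrative).
# In the hardware version this table would simply be the RAM/ROM contents,
# addressed by the sampled word.
ACCEPT = {
    0b1100: 0,  # pattern taken to encode logical 0
    0b0011: 1,  # pattern taken to encode logical 1
}

def translate(samples):
    """Translate a stream of sampled 4-bit words to NRZ bits.

    Returns a list of 0/1 bits, with None marking a data dropout
    (a word matching no acceptance pattern)."""
    return [ACCEPT.get(word) for word in samples]

bits = translate([0b1100, 0b0011, 0b1010])  # third word matches no pattern
```

    The correlation step and the translation step collapse into a single table lookup, which is the appeal of the memory-based design.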

  5. KWIC Index of nuclear codes (1975 edition)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-01-01

    This is a KWIC index for the 254 nuclear codes in Nuclear Code Abstracts (1975 edition). The classification of nuclear codes and the form of the index are the same as those used by the Computer Programme Library at Ispra, Italy. (auth.)
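
    A KWIC (Key Word In Context) index rotates each title so that every significant word can serve as a sort key. A minimal Python sketch of the idea (the stopword list and title are illustrative, not taken from the 1975 index):

```python
# Words too common to index; a real KWIC index uses a longer stoplist.
STOPWORDS = {"of", "the", "a", "an", "for", "and", "in"}

def kwic(titles):
    """Return (keyword, rotated title) pairs, sorted by keyword."""
    entries = []
    for title in titles:
        words = title.split()
        for i, w in enumerate(words):
            if w.lower() in STOPWORDS:
                continue
            # Rotate the title so the keyword comes first.
            rotated = " ".join(words[i:] + words[:i])
            entries.append((w.lower(), rotated))
    return sorted(entries)

entries = kwic(["KWIC Index of Nuclear Codes"])
```

    Sorting the rotated titles by keyword is all that is needed to produce the familiar alphabetical keyword-in-context listing.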

  6. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present...

  7. CITOPP, CITMOD, CITWI, Processing codes for CITATION Code

    Albarhoum, M.

    2008-01-01

    Description of program or function: CITOPP processes the output file of the CITATION 3-D diffusion code. The program can plot axial, radial and circumferential flux distributions (in cylindrical geometry) in addition to the multiplication factor convergence. The flux distributions can be drawn for each group specified by the program and visualized on the screen. CITMOD processes both the output and the input files of the CITATION 3-D diffusion code. CITMOD can visualize both axial and radial-angular models of the reactor described by CITATION input/output files. CITWI processes the input file (CIT.INP) of the CITATION 3-D diffusion code. CIT.INP is processed to deduce the dimensions of the cell whose cross sections represent the corresponding reactor component in section 008 of CIT.INP

  8. Coding in Stroke and Other Cerebrovascular Diseases.

    Korb, Pearce J; Jones, William

    2017-02-01

    Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.

  9. Quasi-cyclic unit memory convolutional codes

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of, and the efficient search for, good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular......, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...
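
    The circulant structure the abstract refers to can be made concrete in a few lines of Python. Each circulant submatrix is fully determined by its first row, and a generator matrix is assembled from such blocks; the first rows below are illustrative, not taken from the paper's codes:

```python
def circulant(first_row):
    """Build a circulant matrix from its first row: each row is a
    right cyclic shift of the one above it."""
    n = len(first_row)
    return [[first_row[(j - i) % n] for j in range(n)] for i in range(n)]

def hstack(*blocks):
    """Concatenate equally tall matrices side by side."""
    return [sum((b[r] for b in blocks), []) for r in range(len(blocks[0]))]

# G = [C1 | C2]: a rate-1/2 generator matrix built from two circulants.
C1 = circulant([1, 0, 0])   # identity is the trivial circulant
C2 = circulant([1, 1, 0])
G = hstack(C1, C2)
```

    The payoff of this structure is that a code is specified by a handful of first rows rather than a full matrix, which is what makes systematic computer search over such codes feasible.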

  10. System Design Description for the TMAD Code

    Finfrock, S.H.

    1995-01-01

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System

  11. On Code Parameters and Coding Vector Representation for Practical RLNC

    Heide, Janus; Pedersen, Morten Videbæk; Fitzek, Frank

    2011-01-01

    RLNC provides a theoretically efficient method for coding. The drawbacks associated with it are the complexity of the decoding and the overhead resulting from the encoding vector. Increasing the field size and generation size presents a fundamental trade-off between packet-based throughput...... to higher energy consumption. Therefore, the optimal trade-off is system and topology dependent, as it depends on the cost in energy of performing coding operations versus transmitting data. We show that moderate field sizes are the correct choice when trade-offs are considered. The results show that sparse...

  12. International assessment of PCA codes

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE

  13. The WIMS family of codes

    Askew, J.

    1981-01-01

    WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, capable of dealing with both regular pin lattices and with cluster geometries typical of pressure tube and gas cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restricting code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules

  14. Translation of ARAC computer codes

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC computer codes and their auxiliary codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. In this report, i) the computational methods of the MATHEW/ADPIC and their auxiliary codes, ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and iii) translation procedures from the CDC version to the FACOM M-200, are described. With the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)

  15. Comparison of sodium aerosol codes

    Dunbar, I.H.; Fermandjian, J.; Bunz, H.; L'homme, A.; Lhiaubet, G.; Himeno, Y.; Kirby, C.R.; Mitsutsuka, N.

    1984-01-01

    Although hypothetical fast reactor accidents leading to severe core damage are very low probability events, their consequences are to be assessed. During such accidents, one can envisage the ejection of sodium, mixed with fuel and fission products, from the primary circuit into the secondary containment. Aerosols can be formed either by mechanical dispersion of the molten material or as a result of combustion of the sodium in the mixture. Therefore considerable effort has been devoted to study the different sodium aerosol phenomena. To ensure that the problems of describing the physical behaviour of sodium aerosols were adequately understood, a comparison of the codes being developed to describe their behaviour was undertaken. The comparison consists of two parts. The first is a comparative study of the computer codes used to predict aerosol behaviour during a hypothetical accident. It is a critical review of documentation available. The second part is an exercise in which code users have run their own codes with a pre-arranged input. For the critical comparative review of the computer models, documentation has been made available on the following codes: AEROSIM (UK), MAEROS (USA), HAARM-3 (USA), AEROSOLS/A2 (France), AEROSOLS/B1 (France), and PARDISEKO-IIIb (FRG)

  16. Repetition code of 15 qubits

    Wootton, James R.; Loss, Daniel

    2018-05-01

    The repetition code is an important primitive for the techniques of quantum error correction. Here we implement repetition codes of at most 15 qubits on the 16 qubit ibmqx3 device. Each experiment is run for a single round of syndrome measurements, achieved using the standard quantum technique of using ancilla qubits and controlled operations. The size of the final syndrome is small enough to allow for lookup table decoding using experimentally obtained data. The results show strong evidence that the logical error rate decays exponentially with code distance, as is expected and required for the development of fault-tolerant quantum computers. The results also give insight into the nature of noise in the device.
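
    The classical rule behind the decoding in this experiment, majority vote over the repeated bits, and the expected exponential decay of logical error rate with code distance can be sketched as follows. This is a classical simulation under independent bit flips, not the quantum experiment itself; the parameters are illustrative:

```python
import random

def encode(bit, n):
    """Repetition code: one logical bit becomes n copies."""
    return [bit] * n

def decode(word):
    """Majority vote -- the lookup-table rule for a repetition code."""
    return int(sum(word) > len(word) / 2)

def logical_error_rate(n, p, trials=20000, seed=1):
    """Monte Carlo estimate of the logical error rate of the length-n
    repetition code under independent bit flips with probability p."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        word = [b ^ (rng.random() < p) for b in encode(0, n)]
        errors += decode(word)  # decoding to 1 is a logical error
    return errors / trials
```

    For odd n the logical error rate scales roughly as p^((n+1)/2), so it decays exponentially with distance as long as p < 1/2, which is the classical analogue of the trend the experiment observes.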

  17. Halftone Coding with JBIG2

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated to lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then for each gray-scale pixel it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method to halftones created by periodic ordered dithering, by clustered dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion...

  18. List Decoding of Algebraic Codes

    Nielsen, Johan Sebastian Rosenkilde

    We investigate three paradigms for polynomial-time decoding of Reed–Solomon codes beyond half the minimum distance: the Guruswami–Sudan algorithm, Power decoding and the Wu algorithm. The main results concern shaping the computational core of all three methods to a problem solvable by module minimisation. We show how to solve such problems using module minimisation, or using our new Demand–Driven algorithm, which is also based on module minimisation. The decoding paradigms are all derived and analysed in a self-contained manner, often in new ways or examined in greater depth than previously. Among a number of new results, we show how to decode Hermitian codes using Guruswami–Sudan or Power decoding faster than previously known, and we show how to Wu list decode binary Goppa codes...

  19. SASSYS LMFBR systems analysis code

    Dunn, F.E.; Prohammer, F.G.

    1982-01-01

    The SASSYS code provides detailed steady-state and transient thermal-hydraulic analyses of the reactor core, inlet and outlet coolant plenums, primary and intermediate heat-removal systems, steam generators, and emergency shut-down heat removal systems in liquid-metal-cooled fast-breeder reactors (LMFBRs). The main purpose of the code is to analyze the consequences of failures in the shut-down heat-removal system and to determine whether this system can perform its mission adequately even with some of its components inoperable. The code is not plant-specific. It is intended for use with any LMFBR, using either a loop or a pool design, a once-through steam generator or an evaporator-superheater combination, and either a homogeneous core or a heterogeneous core with internal-blanket assemblies

  20. ASME Code Efforts Supporting HTGRs

    D.K. Morton

    2012-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  1. Allele coding in genomic evaluation

    Strandén, Ismo; Christensen, Ole Fredslund

    2011-01-01

    Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker...... effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous...... genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call...
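
    The two allele codings compared in this record can be illustrated directly. In the "012" coding a genotype is the count of copies of one allele; the centered coding subtracts each marker's mean so that the regression coefficients average zero within each marker. The genotype matrix below is made up for illustration:

```python
def coding_012(genotypes):
    """genotypes[i][j]: copies (0, 1 or 2) of the second allele for
    individual i at marker j -- this count already is the 0/1/2 coding."""
    return [row[:] for row in genotypes]

def coding_centered(genotypes):
    """Subtract each marker's mean count, so columns sum to zero."""
    n = len(genotypes)
    m = len(genotypes[0])
    means = [sum(genotypes[i][j] for i in range(n)) / n for j in range(m)]
    return [[genotypes[i][j] - means[j] for j in range(m)] for i in range(n)]

G = [[0, 1], [2, 1], [1, 1]]   # 3 individuals, 2 markers (illustrative)
Z = coding_centered(G)         # column means of Z are now zero
```

    Both codings lead to the same estimated breeding-value contrasts in the models the record describes; centering mainly changes the intercept and the numerical properties of the equations.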

  2. Development of the DTNTES code

    Ortega Prieto, P.; Morales Dorado, M.D.; Alonso Santos, A.

    1987-01-01

    The DTNTES code has been developed in the Department of Nuclear Technology of the Polytechnical University in Madrid as a part of the Research Program on Quantitative Risk Analysis. DTNTES code calculates several time-dependent probabilistic characteristics of basic events, minimal cut sets and the top event of a fault tree. The code assumes that basic events are statistically independent, and they have failure and repair distributions. It computes the minimal cut upper bound approach for the top event unavailability, and the time-dependent unreliability of the top event by means of different methods, selected by the user. These methods are: expected number of system failures, failure rate, Barlow-Proschan bound, steady-state upper bound, and T* method. (author)

  3. The histone codes for meiosis.

    Wang, Lina; Xu, Zhiliang; Khawar, Muhammad Babar; Liu, Chao; Li, Wei

    2017-09-01

    Meiosis is a specialized process that produces haploid gametes from diploid cells by a single round of DNA replication followed by two successive cell divisions. It contains many special events, such as programmed DNA double-strand break (DSB) formation, homologous recombination, and crossover formation and resolution. These events are associated with dynamically regulated chromosomal structures; the dynamic transcriptional regulation and chromatin remodeling are mainly modulated by histone modifications, termed 'histone codes'. The purpose of this review is to summarize the histone codes that are required for meiosis during spermatogenesis and oogenesis, involving meiosis resumption, meiotic asymmetric division and other cellular processes. We not only systematically review the functional roles of histone codes in meiosis but also discuss future trends and perspectives in this field. © 2017 Society for Reproduction and Fertility.

  4. MAGNETOHYDRODYNAMIC EQUATIONS (MHD GENERATION CODE

    Francisco Frutos Alfaro

    2017-04-01

    A program to generate codes in Fortran and C of the full magnetohydrodynamic equations is shown. The program uses the free computer algebra system software REDUCE. This software has a package called EXCALC, which is an exterior calculus program. The advantage of this program is that it can be modified to include another complex metric or spacetime. The output of this program is modified by means of a LINUX script which creates a new REDUCE program to manipulate the magnetohydrodynamic equations to obtain a code that can be used as a seed for a magnetohydrodynamic code for numerical applications. As an example, we present part of the output of our programs for Cartesian coordinates and how to do the discretization.

  5. ASME Code Efforts Supporting HTGRs

    D.K. Morton

    2011-09-01

    In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  6. Distributed Video Coding: Iterative Improvements

    Luong, Huynh Van

    Nowadays, emerging applications such as wireless visual sensor networks and wireless video surveillance are requiring lightweight video encoding with high coding efficiency and error-resilience. Distributed Video Coding (DVC) is a new coding paradigm which exploits the source statistics...... and noise modeling and also learn from the previous decoded Wyner-Ziv (WZ) frames, side information and noise learning (SING) is proposed. The SING scheme introduces an optical flow technique to compensate the weaknesses of the block based SI generation and also utilizes clustering of DCT blocks to capture...... cross band correlation and increase local adaptivity in noise modeling. During decoding, the updated information is used to iteratively reestimate the motion and reconstruction in the proposed motion and reconstruction reestimation (MORE) scheme. The MORE scheme not only reestimates the motion vectors...

  7. Signal Constellations for Multilevel Coded Modulation with Sparse Graph Codes

    Cronie, H.S.

    2005-01-01

    A method to combine error-correction coding and spectral efficient modulation for transmission over channels with Gaussian noise is presented. The method of modulation leads to a signal constellation in which the constellation symbols have a nonuniform distribution. This gives a so-called shape gain

  8. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Cocco Giuseppe

    2010-01-01

    Network coding allows the spatial diversity naturally present in mobile wireless networks to be exploited and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information theoretical analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  9. Network Coding Over The 2^32 - 5 Prime Field

    Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter

    2013-01-01

    Creating efficient finite field implementations has been an active research topic for several decades. Many applications in areas such as cryptography, signal processing, erasure coding and now also network coding depend on this research to deliver satisfactory performance. In this paper we...... from a benchmark application written in C++. These results are finally compared to different binary and binary extension field implementations. The results show that the prime field implementation offers a large field size while maintaining a very good performance. We believe that using prime fields...
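
    Arithmetic in a prime field such as GF(2^32 - 5), the kind of field such implementations target, reduces to ordinary integer arithmetic modulo the prime. A minimal Python sketch of the field operations a network-coding decoder needs (this is an illustration, not the paper's optimized implementation):

```python
P = 2**32 - 5  # 4294967291, a prime; field elements are 0 .. P-1

def f_add(a, b):
    return (a + b) % P

def f_mul(a, b):
    return (a * b) % P

def f_inv(a):
    """Multiplicative inverse via Fermat's little theorem: a^(p-2) mod p.
    Every nonzero element is invertible, which is what Gaussian
    elimination on coding vectors relies on."""
    return pow(a, P - 2, P)
```

    The attraction of a prime just below 2^32 is that field elements fit machine words and products fit 64-bit registers, while the field is large enough that randomly chosen coding vectors are linearly dependent only with negligible probability.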

  10. Network Coded Software Defined Networking

    Krigslund, Jeppe; Hansen, Jonas; Roetter, Daniel Enrique Lucani

    2015-01-01

    Software Defined Networking (SDN) and Network Coding (NC) are two key concepts in networking that have garnered a large attention in recent years. On the one hand, SDN's potential to virtualize services in the Internet allows a large flexibility not only for routing data, but also for managing....... This paper advocates the use of SDN to bring about future Internet and 5G network services by incorporating network coding (NC) functionalities. The inherent flexibility of both SDN and NC provides a fertile ground to envision more efficient, robust, and secure networking designs, that may also...

  11. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...... decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units....
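
    A toy instance of the matrix-product construction in this record: with nested binary codes C1 ⊇ C2 and the matrix A = [[1, 1], [0, 1]] (a standard small non-singular-by-columns choice), the codeword [c1 c2]·A works out to (c1, c1 + c2), the classic (u, u+v) construction. A Python sketch with made-up constituent codewords:

```python
def matrix_product_codeword(c1, c2):
    """Codeword of the matrix-product code [C1 C2]*A with
    A = [[1, 1], [0, 1]] over GF(2): the length-2n word (c1, c1 + c2)."""
    assert len(c1) == len(c2)
    return c1 + [(a + b) % 2 for a, b in zip(c1, c2)]

# Illustrative constituents of length 2: c1 from C1, c2 from C2 with C1 >= C2.
w = matrix_product_codeword([1, 0], [1, 1])
```

    Decoding such a code can be reduced to decoding the constituents: the second half minus the first half recovers c2's contribution, which is the structural fact the list-decoding algorithm in the record exploits.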

  12. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating a 3-D reactor core kinetics analysis code, MASTER, into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem developed by OECD/NEA to verify the performance of coupled kinetics and system transient codes

  13. Optimal, Reliability-Based Code Calibration

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  14. 21 CFR 106.90 - Coding.

    2010-04-01

    Title 21, Food and Drugs, § 106.90 Coding (Quality Control Procedures for Infant Formulas): The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged in...

  15. A Dual Coding View of Vocabulary Learning

    Sadoski, Mark

    2005-01-01

    A theoretical perspective on acquiring sight vocabulary and developing meaningful vocabulary is presented. Dual Coding Theory assumes that cognition occurs in two independent but connected codes: a verbal code for language and a nonverbal code for mental imagery. The mixed research literature on using pictures in teaching sight vocabulary is…

  16. Channel coding techniques for wireless communications

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  17. Pump Component Model in SPACE Code

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report

  18. Development of MCNP interface code in HFETR

    Qiu Liqing; Fu Rong; Deng Caiyu

    2007-01-01

    In order to describe the HFETR core with the MCNP method, the interface code MCNPIP between HFETR and the MCNP code has been developed. This paper introduces the core DXSY, the flowchart of the MCNPIP code, the handling of the compositions of fuel elements, and the requirements on hardware and software. Finally, the MCNPIP code is validated against practical application. (authors)

  19. Entropy, Coding and Data Compression

    S Natarajan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 9, September 2001, pp. 35-45. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045
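As a hedged illustration of the article's subject rather than its content, the Shannon entropy of a symbol string (the lower bound, in bits per symbol, for lossless coding) can be computed as:

```python
import math
from collections import Counter

def shannon_entropy(data: str) -> float:
    """Shannon entropy of a string, in bits per symbol."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy("aabb"))  # two equiprobable symbols -> 1.0 bit/symbol
```

No lossless code can beat this bound on average; Huffman coding approaches it and arithmetic coding gets arbitrarily close.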

  20. Visual communication with retinex coding.

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.
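The "familiar difference-of-Gaussian bandpass filter" named in the abstract can be sketched in one dimension (a minimal illustration with assumed parameters, not the paper's model):

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def dog_filter(signal: np.ndarray, sigma_c=1.0, sigma_s=3.0, radius=9) -> np.ndarray:
    """Difference of Gaussians: narrow 'center' minus wide 'surround',
    a bandpass that responds strongly at edges and weakly in flat regions."""
    center = np.convolve(signal, gaussian_kernel(sigma_c, radius), mode="same")
    surround = np.convolve(signal, gaussian_kernel(sigma_s, radius), mode="same")
    return center - surround

edge = np.concatenate([np.zeros(50), np.ones(50)])
response = dog_filter(edge)  # |response| peaks near the step at index 50
```

Slowly varying irradiance (e.g. a soft shadow gradient) falls below the passband and is suppressed, which is the behavior the abstract attributes to retinex coding.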

  1. International building code for bamboo

    Janssen, J.J.A.; Kumar, Arun; Ramanuja Rao, I.V.; Sastry, Cherla

    2002-01-01

    One of the recommendations in the International Bamboo Congress and Workshop, held at Bali in 1995, requested the International Network for Bamboo and Rattan (INBAR), "to organize a task force to discuss and finalize a building code for bamboo". Consequently a draft was prepared under the title, "An

  2. Coding as literacy metalithikum IV

    Bühlmann, Vera; Moosavi, Vahid

    2015-01-01

    Recent developments in computer science, particularly "data-driven procedures" have opened a new level of design and engineering. This has also affected architecture. The publication collects contributions on Coding as Literacy by computer scientists, mathematicians, philosophers, cultural theorists, and architects. "Self-Organizing Maps" (SOM) will serve as the concrete reference point for all further discussions.

  3. Dual Coding and Bilingual Memory.

    Paivio, Allan; Lambert, Wallace

    1981-01-01

    Describes a study that tested a dual coding approach to bilingual memory using tasks that permit comparison of the effects of bilingual encoding with verbal-nonverbal dual encoding of items. Results provide strong support for a version of the independent or separate stores view of bilingual memory. (Author/BK)

  4. Tri-Coding of Information.

    Simpson, Timothy J.

    Paivio's Dual Coding Theory has received widespread recognition for its connection between visual and aural channels of internal information processing. The use of only two channels, however, cannot satisfactorily explain the effects witnessed every day. This paper presents a study suggesting the presence of a third, kinesthetic channel, currently…

  5. CINETHICA - Core accident analysis code

    Nakata, H.

    1989-10-01

    A computer program for nuclear accident analysis has been developed based on the point-kinetics approximation and a one-dimensional heat transfer model for the reactivity feedback calculation. Hansen's method /1/ was used for the solution of the kinetics equations, and the explicit Euler method was adopted for the thermohydraulic equations. The results compared favorably with those from the GAPOTKIN Code /2/. (author) [pt

  6. The VEGA Assembly Spectrum Code

    Milosevic, M.

    1997-01-01

    VEGA is an assembly spectrum code, developed as a design tool for producing few-group averaged cross-section data for a wide range of reactor types, including both thermal and fast reactors. It belongs to a class of codes characterized by separate stages for micro-group, spectrum, and macro-group assembly calculations. The theoretical foundation for the development of the VEGA code is integral transport theory in the first-flight collision probability formulation. Two versions of VEGA are now in use: VEGA-1, established on standard equivalence theory, and VEGA-2, based on a new subgroup method applicable to any geometry for which a flux solution is possible. This paper describes the features that are unique to the VEGA codes, concentrating on the basic principles and algorithms used in the proposed subgroup method. The presented validation of this method comprises results for a homogeneous uranium-plutonium mixture and a PWR cell containing recycled uranium-plutonium oxide. An example application to a realistic fuel dissolver benchmark problem, which was extensively analyzed in international calculations, is also included. (author)

  7. X-ray image coding

    1974-01-01

    The invention aims at decreasing the effect of stray radiation in X-ray images. This is achieved by putting a plate between source and object with parallel zones of alternating high and low absorption coefficients for X-radiation. The image is scanned with the help of electronic circuits which decode the signal spatially coded by the plate, thus removing the stray radiation

  8. Smells in software test code

    Garousi, Vahid; Küçük, Barış

    2018-01-01

    As a type of anti-pattern, test smells are defined as poorly designed tests and their presence may negatively affect the quality of test suites and production code. Test smells are the subject of active discussions among practitioners and researchers, and various guidelines to handle smells are

  9. The fuel performance code future

    Ronchi, C.; Van de Laar, J.

    1988-01-01

    The paper describes the LWR version of the fuel performance code FUTURE, which was recently developed to calculate the fuel response (swelling, cladding deformation, release) to reactor transient conditions, starting from a broad-based description of the processes of major concern. The main physical models assumed are presented together with the scheme of the computer program

  10. MAD parsing and conversion code

    Mokhov, Dmitri N.

    2000-01-01

    The authors describe design and implementation issues while developing an embeddable MAD language parser. Two working applications of the parser are also described, namely, MAD-> C++ converter and C++ factory. The report contains some relevant details about the parser and examples of converted code. It also describes some of the problems that were encountered and the solutions found for them

  11. Visual Communication with Retinex Coding

    Huck, Friedrich O.; Fales, Carl L.; Davis, Richard E.; Alter-Gartenberg, Rachel

    2000-04-01

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth.

  12. The spammed code offset method

    Skoric, B.; Vreede, de N.

    2013-01-01

    Helper data schemes are a security primitive used for privacy-preserving biometric databases and Physical Unclonable Functions. One of the oldest known helper data schemes is the Code Offset Method (COM). We propose an extension of the COM: the helper data is accompanied by many instances of fake

  13. Static Verification for Code Contracts

    Fähndrich, Manuel

    The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.

  14. CVSscan : Visualization of Code Evolution

    Voinea, Lucian; Telea, Alex; Wijk, Jarke J. van

    2005-01-01

    During the life cycle of a software system, the source code is changed many times. We study how developers can be enabled to get insight into these changes, in order to better understand the status, history, and structure, as well as, for instance, the roles played by various contributors. We present

  15. Coding and English Language Teaching

    Stevens, Vance; Verschoor, Jennifer

    2017-01-01

    According to Dudeney, Hockly, and Pegrum (2013) coding is a deeper skill subsumed under the four main digital literacies of language, connections, information, and (re)design. Coders or programmers are people who write the programmes behind everything we see and do on a computer. Most students spend several hours playing online games, but few know…

  16. The spammed code offset method

    Skoric, B.; Vreede, de N.

    2014-01-01

    Helper data schemes are a security primitive used for privacy-preserving biometric databases and physical unclonable functions. One of the oldest known helper data schemes is the code offset method (COM). We propose an extension of the COM: the helper data are accompanied by many instances of fake

  17. SSI and the Environmental Code

    Loefgren, T.

    1997-12-01

    Radiation is, to some extent, included in the environmental code being prepared by the government. As a consequence both the Radiation Protection Institute and the proposed Environmental Court may set legal conditions concerning radiation protection for the proponent. Legal and other matters related to this issue are discussed in the report

  18. Electron transport code theoretical basis

    Dubi, A.; Horowitz, Y.S.

    1978-04-01

    This report mainly describes the physical and mathematical considerations involved in the treatment of the multiple collision processes. A brief description is given of the traditional methods used in electron transport via Monte Carlo, and a somewhat more detailed description of the approach to be used in the presently developed code

  19. Computation of the Genetic Code

    Kozlov, Nicolay N.; Kozlova, Olga N.

    2018-03-01

    One of the problems in the development of a mathematical theory of the genetic code (a summary is presented in [1], the details in [2]) is the problem of computing the genetic code. No similar problems are known elsewhere, and they could be posed only in the 21st century. This work is devoted to one approach to solving this problem. For the first time a detailed description is provided of the method for computing the genetic code, the idea of which was first published earlier [3]; the choice of one of the most important sets for the computation was based on an article [4]. Such a set of amino acids corresponds to a complete set of representations of the plurality of overlapping triplet genes belonging to the same DNA strand. A separate issue was the initial point triggering the iterative search through all codes represented by the initial data. Mathematical analysis has shown that the said set contains some ambiguities, which were found thanks to our proposed compressed representation of the set. As a result, the developed method of calculation was limited to the two main stages of research, where in the first stage only part of the area was used in the calculations. The proposed approach will significantly reduce the amount of computation at each step in this complex discrete structure.

  20. QR Codes: Taking Collections Further

    Ahearn, Caitlin

    2014-01-01

    With some thought and direction, QR (quick response) codes are a great tool to use in school libraries to enhance access to information. From March through April 2013, Caitlin Ahearn interned at Sanborn Regional High School (SRHS) under the supervision of Pam Harland. As a result of Harland's un-Deweying of the nonfiction collection at SRHS,…

  1. Surgical navigation with QR codes

    Katanacho Manuel

    2016-09-01

    The presented work is an alternative to established measurement systems in surgical navigation. The system is based on camera-based tracking of QR code markers. The application uses a single video camera, integrated in a surgical lamp, that captures the QR markers attached to surgical instruments and to the patient.

  2. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that the GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaption, to adjust the error correction strength depending on the optical channel conditions.
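The abstract lists Hamming codes among the usable local codes. As a hedged illustration of what a single local code contributes (plain syndrome decoding, not the Ashikhmin-Lytsin MAP decoder of the paper), the [7,4] Hamming code corrects any single bit error:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column i is the
# 3-bit binary expansion of i+1, so the syndrome directly names
# the (1-based) position of a single flipped bit.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def hamming_decode(received: np.ndarray) -> np.ndarray:
    """Correct at most one bit error via the syndrome."""
    syndrome = H @ received % 2
    pos = int("".join(map(str, syndrome)), 2)  # 0 means "no error detected"
    corrected = received.copy()
    if pos:
        corrected[pos - 1] ^= 1
    return corrected

noisy = np.zeros(7, dtype=int)
noisy[4] ^= 1                          # flip bit 5 of the all-zero codeword
print(hamming_decode(noisy).tolist())  # -> [0, 0, 0, 0, 0, 0, 0]
```

A GLDPC code constrains overlapping subsets of the long codeword with many such local checks, which is what makes soft decoding of the local codes the computational kernel.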

  3. Fundamentals of information theory and coding design

    Togneri, Roberto

    2003-01-01

    In a clear, concise, and modular format, this book introduces the fundamental concepts and mathematics of information and coding theory. The authors emphasize how a code is designed and discuss the main properties and characteristics of different coding algorithms along with strategies for selecting the appropriate codes to meet specific requirements. They provide comprehensive coverage of source and channel coding, address arithmetic, BCH, and Reed-Solomon codes and explore some more advanced topics such as PPM compression and turbo codes. Worked examples and sets of basic and advanced exercises in each chapter reinforce the text's clear explanations of all concepts and methodologies.

  4. Software information sorting code 'PLUTO-R'

    Tsunematsu, Toshihide; Naraoka, Kenitsu; Adachi, Masao; Takeda, Tatsuoki

    1984-10-01

    A software information sorting code, PLUTO-R, has been developed as one of the supporting codes of the TRITON system for fusion plasma analysis. The objective of the PLUTO-R code is to sort the reference materials of the codes in the TRITON code system. Particular emphasis is placed on ease of information registration. As no experience or skill in data registration is required, the code can also be used to construct general small-scale information systems. This report gives an overall description and the user's manual of the PLUTO-R code. (author)

  5. Explicit MDS Codes with Complementary Duals

    Beelen, Peter; Jin, Lingfei

    2018-01-01

    In 1964, Massey introduced a class of codes with complementary duals, called Linear Complementary Dual (LCD for short) codes. He showed that LCD codes have applications in communication systems, side-channel attacks (SCA), and so on. LCD codes have been extensively studied in the literature. On the other hand, MDS codes form an optimal family of classical codes with wide applications in both theory and practice. The main purpose of this paper is to give an explicit construction of several classes of LCD MDS codes, using tools from algebraic function fields. We exemplify this construction...

  6. A class of Sudan-decodable codes

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.

  7. Development and validation of sodium fire codes

    Morii, Tadashi; Himeno, Yoshiaki; Miyake, Osamu

    1989-01-01

    Development, verification, and validation of the spray fire code, SPRAY-3M, the pool fire codes, SOFIRE-M2 and SPM, the aerosol behavior code, ABC-INTG, and the simultaneous spray and pool fires code, ASSCOPS, are presented. In addition, the state-of-the-art of development of the multi-dimensional natural convection code, SOLFAS, for the analysis of heat-mass transfer during a fire, is presented. (author)

  8. Studi Kompresi Data dengan Metode Arithmetic Coding

    Santoso, Petrus

    2001-01-01

    There are very many data compression methods in existence today. Most of these methods can be grouped into one of two large families: statistical based and dictionary based. An example of dictionary based coding is Lempel Ziv Welch, while examples of statistical based coding are Huffman Coding and Arithmetic Coding, the most recent algorithm. This paper reviews the principles of Arithmetic Coding and its advantages compared...
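The interval-narrowing principle that Arithmetic Coding rests on can be sketched with a floating-point toy coder (workable only for short messages; real implementations use integer arithmetic with renormalization):

```python
def arithmetic_encode(message, probs):
    """Narrow [low, high) by each symbol's probability slice; any number
    inside the final interval identifies the whole message."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        cum = 0.0
        for s, p in probs.items():
            if s == sym:
                high = low + (cum + p) * span
                low = low + cum * span
                break
            cum += p
    return (low + high) / 2

def arithmetic_decode(value, probs, length):
    """Invert the narrowing: locate value's slice, emit the symbol, rescale."""
    out = []
    for _ in range(length):
        cum = 0.0
        for s, p in probs.items():
            if cum <= value < cum + p:
                out.append(s)
                value = (value - cum) / p
                break
            cum += p
    return "".join(out)

probs = {"a": 0.5, "b": 0.25, "c": 0.25}
code = arithmetic_encode("abac", probs)
print(arithmetic_decode(code, probs, 4))  # -> abac
```

Unlike Huffman coding, which assigns a whole number of bits per symbol, the single fraction produced here can spend fractional bits per symbol, which is where the advantage comes from.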

  9. Computer codes for ventilation in nuclear facilities

    Mulcey, P.

    1987-01-01

    In this paper the authors present some computer codes, developed in recent years, for ventilation and radioprotection. These codes are used for safety analysis in the design, operation, and dismantling of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modeling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems [fr

  10. User's manual for a measurement simulation code

    Kern, E.A.

    1982-07-01

    The MEASIM code has been developed primarily for modeling process measurements in materials processing facilities associated with the nuclear fuel cycle. In addition, the code computes materials balances and the summation of materials balances, along with associated variances. The code has been used primarily in the performance assessment of materials accounting systems. This report provides the necessary information for a potential user to employ the code in these applications. A number of examples that demonstrate most of the capabilities of the code are provided
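A hedged sketch of the materials-balance-plus-variance computation described above (the function and figures are illustrative; MEASIM's actual input format is not shown here). Each term is a (measured value, variance) pair, and the variances of independent measurements add:

```python
def materials_balance(beginning, receipts, removals, ending):
    """MUF (material unaccounted for) and its variance.
    Each argument is a (value, variance) pair; for independent
    measurements the balance variance is the sum of the variances."""
    muf = (beginning[0] + receipts[0]) - (removals[0] + ending[0])
    variance = beginning[1] + receipts[1] + removals[1] + ending[1]
    return muf, variance

muf, var = materials_balance((100.0, 0.04), (50.0, 0.01),
                             (30.0, 0.01), (119.5, 0.04))
print(muf, var ** 0.5)  # MUF of 0.5 with a standard deviation of ~0.32
```

Comparing the MUF against its standard deviation is how an accounting system decides whether an imbalance is measurement noise or a real loss.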

  11. Light-water reactor safety analysis codes

    Jackson, J.F.; Ransom, V.H.; Ybarrondo, L.J.; Liles, D.R.

    1980-01-01

    A brief review of the evolution of light-water reactor safety analysis codes is presented. Included is a summary comparison of the technical capabilities of major system codes. Three recent codes are described in more detail to serve as examples of currently used techniques. Example comparisons between calculated results using these codes and experimental data are given. Finally, a brief evaluation of current code capability and future development trends is presented

  12. Coding In-depth Semistructured Interviews

    Campbell, John L.; Quincy, Charles; Osserman, Jordan

    2013-01-01

    Many social science studies are based on coded in-depth semistructured interview transcripts. But researchers rarely report or discuss coding reliability in this work. Nor is there much literature on the subject for this type of data. This article presents a procedure for developing coding schemes useful for situations where a single knowledgeable coder will code all the transcripts once the coding scheme has been established. This approach can also be used with other types of qualitative data and in other circumstances.

  13. Towers of generalized divisible quantum codes

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the ν-th level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν-1)), Ω(d), d]] admitting a transversal gate at the ν-th level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
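As a small classical illustration of the divisibility property the abstract defines (not the paper's CSS construction), every nonzero codeword of the [7,3] simplex code has weight 4, so the code is divisible with divisor 2^2:

```python
from itertools import product

# Generator matrix of the [7,3] simplex code (dual of the [7,4] Hamming code).
G = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]

def codewords(G):
    """All binary codewords spanned by the rows of G."""
    n = len(G[0])
    for bits in product([0, 1], repeat=len(G)):
        yield [sum(b * row[i] for b, row in zip(bits, G)) % 2 for i in range(n)]

weights = {sum(c) for c in codewords(G) if any(c)}
print(weights)  # {4}: every nonzero weight is divisible by 2**2
```

It is this kind of weight constraint on the classical X-stabilizer code that, in the CSS setting, makes a transversal gate at a given Clifford-hierarchy level possible.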

  14. Coding and transmission of subband coded images on the Internet

    Wah, Benjamin W.; Su, Xiao

    2001-09-01

    Subband-coded images can be transmitted in the Internet using either the TCP or the UDP protocol. Delivery by TCP gives superior decoding quality but with very long delays when the network is unreliable, whereas delivery by UDP has negligible delays but with degraded quality when packets are lost. Although images are delivered currently over the Internet by TCP, we study in this paper the use of UDP to deliver multi-description reconstruction-based subband-coded images. First, in order to facilitate recovery from UDP packet losses, we propose a joint sender-receiver approach for designing optimized reconstruction-based subband transform (ORB-ST) in multi-description coding (MDC). Second, we carefully evaluate the delay-quality trade-offs between the TCP delivery of SDC images and the UDP and combined TCP/UDP delivery of MDC images. Experimental results show that our proposed ORB-ST performs well in real Internet tests, and UDP and combined TCP/UDP delivery of MDC images provide a range of attractive alternatives to TCP delivery.
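A toy sketch of the multi-description idea discussed above (an odd/even sample split with interpolation-based loss concealment; the paper's optimized reconstruction-based subband transform is far more sophisticated):

```python
def split_descriptions(samples):
    """Two-description toy coder: even-index and odd-index samples
    travel as independent descriptions (e.g., separate UDP packets)."""
    return samples[0::2], samples[1::2]

def reconstruct(even, odd=None):
    """Exact reconstruction when both descriptions arrive; linear
    interpolation of the surviving description when one is lost."""
    if odd is None:
        out = []
        for i, v in enumerate(even):
            out.append(v)
            nxt = even[i + 1] if i + 1 < len(even) else v
            out.append((v + nxt) / 2)  # estimate the missing sample
        return out
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

sig = [0, 1, 2, 3, 4, 5]
even, odd = split_descriptions(sig)
print(reconstruct(even, odd))  # both descriptions arrive: exact
print(reconstruct(even))       # one description lost: graceful degradation
```

This is the trade-off the paper quantifies: UDP may drop a description, but the remaining one still yields a usable image instead of a stalled TCP transfer.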

  15. Genetic coding and gene expression - new Quadruplet genetic coding model

    Shankar Singh, Rama

    2012-07-01

    The successful completion of the human genome project has opened the door not only for developing personalized medicine and cures for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may make the 21st century a century of the Biological Sciences as well. Based on the central dogma of Biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases into the sequence of amino acids of a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons, involving three bases of RNA transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases, which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications, including relevance to the origin of life, will be presented.

  16. Amino acid codes in mitochondria as possible clues to primitive codes

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
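The CUN reassignment the abstract cites can be written down as a small lookup (a hedged illustration covering only the CUN codon family, not a full translation table):

```python
# In the universal code the CUN codons encode leucine; in yeast
# mitochondria (per the abstract) they are reassigned to threonine.
CUN = ["CUU", "CUC", "CUA", "CUG"]
universal = {codon: "Leu" for codon in CUN}
yeast_mito = {codon: "Thr" for codon in CUN}

for codon in CUN:
    print(codon, universal[codon], "->", yeast_mito[codon])
```

A single tRNA gene serves the whole CUN family, which is why one tRNA mutation can reassign all four codons at once.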

  17. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  18. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  19. CBP Phase I Code Integration

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-01-01

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown and Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown and Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface

  20. CTCN: Colloid transport code -- nuclear

    Jain, R.

    1993-01-01

    This report describes the CTCN computer code, designed to solve the equations of transient colloidal transport of radionuclides in porous and fractured media. This Fortran 77 package solves systems of coupled nonlinear differential-algebraic equations with a wide range of boundary conditions. The package uses the Method of Lines technique with a special section which forms finite-difference discretizations in up to four spatial dimensions to automatically convert the system into a set of ordinary differential equations. The CTCN code then solves these equations using a robust, efficient ODE solver. Thus CTCN can be used to solve population balance equations along with the usual transport equations to model colloid transport processes or as a general problem solver to treat up to four-dimensional differential-algebraic systems
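A minimal Method of Lines sketch in the spirit of the description above (1-D diffusion rather than CTCN's coupled colloid-transport system, and forward Euler standing in for its robust ODE solver):

```python
import numpy as np

def method_of_lines_diffusion(u0, D=1.0, dx=0.1, dt=0.001, steps=100):
    """Semi-discretize du/dt = D * d2u/dx2 with central differences in
    space, turning the PDE into a system of ODEs (one per grid point),
    then integrate in time with forward Euler; Dirichlet boundaries."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        d2u = np.zeros_like(u)
        d2u[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u += dt * D * d2u  # endpoints stay fixed
    return u

u0 = np.zeros(41)
u0[20] = 1.0                       # initial concentration pulse
u = method_of_lines_diffusion(u0)  # the pulse spreads; mass ~ conserved
```

The explicit step is stable here because D*dt/dx**2 = 0.1 <= 0.5; stiff coupled transport systems are exactly why a code like CTCN delegates the time integration to a dedicated ODE solver.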

  1. Code for Internal Dosimetry (CINDY)

    Strenge, D.L.; Peloquin, R.A.; Sula, M.J.; Johnson, J.R.

    1990-10-01

    The CINDY (Code for Internal Dosimetry) software package has been developed by Pacific Northwest Laboratory to address Department of Energy (DOE) Order 5480.11 by providing the capability to calculate organ dose equivalents and effective dose equivalents using the approach of International Commission on Radiological Protection (ICRP) 30. The code assists in the interpretation of bioassay data, evaluates committed and calendar-year doses from intake or bioassay measurement data, provides output consistent with revised DOE orders, is easy to use, and is generally applicable to DOE sites. Flexible biokinetic models are used to determine organ doses for annual, 50-year, calendar-year, or any other time-point dose necessary for chronic or acute intakes. CINDY is an interactive program that prompts the user to describe the cases to be analyzed and calculates the necessary results for the type of analysis being performed. Four types of analyses may be specified. 92 figs., 10 tabs

  2. Continuous Non-malleable Codes

    Faust, Sebastian; Mukherjee, Pratyay; Nielsen, Jesper Buus

    2014-01-01

    or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion - so-called continuous non-malleability - where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non...... is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence is not secure in the continuous setting. We construct a split-state code satisfying continuous non-malleability. Our scheme...... is based on the inner product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge and requires an untamperable common reference string. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous...

  3. Computer access security code system

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by the first groups of code. Once used, subsets are not used again, absolutely defeating unauthorized access by eavesdropping and the like.
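
The rectangle-completion idea can be shown with a toy sketch. The matrix contents, the symbol labels, and the 3x3 size are all invented for the example, not taken from the patent.

```python
import random

# Toy sketch of the rectangle-completion scheme: the system transmits the
# symbols at two opposite corners of a rectangle in a secret matrix, and
# access requires answering with the symbols at the two remaining corners.
MATRIX = [
    ["A7", "K2", "Q9"],
    ["B4", "M8", "R1"],
    ["C6", "N3", "T5"],
]

def challenge(rng):
    """Pick two cells that share neither a row nor a column."""
    r1, r2 = rng.sample(range(3), 2)
    c1, c2 = rng.sample(range(3), 2)
    return (r1, c1), (r2, c2)

def correct_response(cell1, cell2):
    """The symbols at the two corners completing the rectangle."""
    (r1, c1), (r2, c2) = cell1, cell2
    return {MATRIX[r1][c2], MATRIX[r2][c1]}

rng = random.Random(42)
a, b = challenge(rng)
print(sorted(correct_response(a, b)))
```

Because each challenge pair is used only once, an eavesdropper who records one exchange learns nothing useful for the next one.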

  4. The APOLLO assembly spectrum code

    Kavenoky, A.; Sanchez, R.

    1987-04-01

The APOLLO code was originally developed as a design tool for HTRs; later it was aimed at the calculation of PWR lattices. APOLLO is a general-purpose assembly spectrum code based on the multigroup integral transport equation; refined collision probability modules allow the computation of 1D geometries with linearly anisotropic scattering and a two-term flux expansion. In 2D geometries, modules based on the substructure method provide fast and accurate design calculations, and a module based on a direct discretization is devoted to reference calculations. The SPH homogenization technique provides corrected cross sections, performing an equivalence between coarse and refined calculations. The post-processing module of APOLLO generates either an APOLLIB library to be used by APOLLO or a NEPLIB library for reactor diffusion calculations. The cross-section library of APOLLO contains cross-section and self-shielding data for more than 400 isotopes. APOLLO is able to compute the depletion of any medium, accounting for any heavy-isotope or fission-product chain. 21 refs

  5. Do you write secure code?

    Computer Security Team

    2011-01-01

At CERN, we are excellent at producing software, such as complex analysis jobs, sophisticated control programs, extensive monitoring tools, interactive web applications, etc. This software is usually highly functional, and fulfils the needs and requirements as defined by its author. However, due to time constraints or unintentional ignorance, security aspects are often neglected. It is then all the more embarrassing for the author to find out that his code was flawed and was used to break into CERN computers, web pages or to steal data…   Thus, if you have the pleasure or task of producing software applications, take some time beforehand to familiarize yourself with good programming practices. They should not only prevent basic security flaws in your code, but also improve its readability, maintainability and efficiency. Basic rules for good programming, as well as essential books on proper software development, can be found in the section for software developers on our security we...
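
One classic example of the kind of flaw this article warns about is SQL injection. A hedged sketch using Python's built-in sqlite3 module (table and data are invented for the example) shows why parameterized queries belong among the good practices worth learning:

```python
import sqlite3

# Sketch of a common security flaw and its fix: SQL built by string
# concatenation versus a parameterized query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "alice' OR '1'='1"      # a typical injection attempt

# Unsafe: the attacker's quote breaks out of the string literal.
unsafe = "SELECT secret FROM users WHERE name = '%s'" % user_input
print(len(conn.execute(unsafe).fetchall()))     # 1 row: injection succeeds

# Safe: the driver binds the value; it cannot alter the query structure.
rows = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (user_input,)).fetchall()
print(len(rows))                                # 0 rows: no such user
```

The same binding discipline applies to every language and database driver, which is why it appears in essentially all secure-coding guidelines.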

  6. General Monte Carlo code MONK

    Moore, J.G.

    1974-01-01

The Monte Carlo code MONK is a general program written to provide a high degree of flexibility to the user. MONK is distinguished by its detailed representation of nuclear data in point form, i.e., the cross-section is tabulated at specific energies instead of the more usual group representation. The nuclear data are unadjusted in the point form, but recently the code has been modified to accept adjusted group data as used in fast and thermal reactor applications. The various geometrical handling capabilities and importance sampling techniques are described. In addition to the nuclear data aspects, the following features are also described: geometrical handling routines, tracking cycles, neutron source and output facilities. 12 references. (U.S.)
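
The point-form idea can be sketched as follows. The energy grid and cross-section values below are invented for illustration, and real evaluated libraries use more elaborate interpolation schemes (e.g., log-log), but the contrast with a single group-averaged value is the point.

```python
import bisect

# Sketch of point-form nuclear data: the cross-section is tabulated at
# specific energies and interpolated between them, instead of being
# averaged over broad energy groups.
energies = [1.0, 2.0, 5.0, 10.0]     # eV, hypothetical grid
xs       = [40.0, 20.0, 8.0, 4.0]    # barns, hypothetical values

def sigma(E):
    """Linearly interpolate the tabulated cross-section at energy E."""
    i = bisect.bisect_right(energies, E) - 1
    i = max(0, min(i, len(energies) - 2))
    t = (E - energies[i]) / (energies[i + 1] - energies[i])
    return xs[i] + t * (xs[i + 1] - xs[i])

print(sigma(1.5))   # prints 30.0, halfway between the 1 eV and 2 eV points
```

A group representation would instead return one constant value for every energy inside a group, losing the fine structure that point tracking preserves.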

  7. The development of code benchmarks

    Glass, R.E.

    1986-01-01

    Sandia National Laboratories has undertaken a code benchmarking effort to define a series of cask-like problems having both numerical solutions and experimental data. The development of the benchmarks includes: (1) model problem definition, (2) code intercomparison, and (3) experimental verification. The first two steps are complete and a series of experiments are planned. The experiments will examine the elastic/plastic behavior of cylinders for both the end and side impacts resulting from a nine meter drop. The cylinders will be made from stainless steel and aluminum to give a range of plastic deformations. This paper presents the results of analyses simulating the model's behavior using materials properties for stainless steel and aluminum

  8. The EGS5 Code System

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial renovation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary.
With the release of the EGS4 version

  9. Microgravity computing codes. User's guide

    1982-01-01

    Codes used in microgravity experiments to compute fluid parameters and to obtain data graphically are introduced. The computer programs are stored on two diskettes, compatible with the floppy disk drives of the Apple 2. Two versions of both disks are available (DOS-2 and DOS-3). The codes are written in BASIC and are structured as interactive programs. Interaction takes place through the keyboard of any Apple 2-48K standard system with single floppy disk drive. The programs are protected against wrong commands given by the operator. The programs are described step by step in the same order as the instructions displayed on the monitor. Most of these instructions are shown, with samples of computation and of graphics.

  10. Hello Ruby adventures in coding

    Liukas, Linda

    2015-01-01

"Code is the 21st century literacy and the need for people to speak the ABCs of Programming is imminent." --Linda Liukas Meet Ruby--a small girl with a huge imagination. In Ruby's world anything is possible if you put your mind to it. When her dad asks her to find five hidden gems, Ruby is determined to solve the puzzle with the help of her new friends, including the Wise Snow Leopard, the Friendly Foxes, and the Messy Robots. As Ruby stomps around her world, kids will be introduced to the basic concepts behind coding and programming through storytelling. Learn how to break big problems into small problems, repeat tasks, look for patterns, create step-by-step plans, and think outside the box. With hands-on activities included in every chapter, future coders will be thrilled to put their own imaginations to work.

  11. CBP PHASE I CODE INTEGRATION

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface was

  12. An axisymmetric gravitational collapse code

    Choptuik, Matthew W [CIAR Cosmology and Gravity Program, Department of Physics and Astronomy, University of British Columbia, Vancouver BC, V6T 1Z1 (Canada); Hirschmann, Eric W [Department of Physics and Astronomy, Brigham Young University, Provo, UT 84604 (United States); Liebling, Steven L [Southampton College, Long Island University, Southampton, NY 11968 (United States); Pretorius, Frans [Theoretical Astrophysics, California Institute of Technology, Pasadena, CA 91125 (United States)

    2003-05-07

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations.
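
The numerical dissipation the authors mention is, in codes of this kind, typically a Kreiss-Oliger-style high-order difference term. A hedged one-dimensional sketch (the grid, coefficient, and setup are illustrative, not those of the actual code) shows how a fourth-difference term damps grid-scale noise:

```python
# Kreiss-Oliger-style dissipation sketch: a fourth-difference term added
# to the update of a grid function u damps the highest-frequency modes
# while leaving smooth solutions nearly untouched.

def ko_dissipation(u, eps):
    """-eps * (u[i-2] - 4u[i-1] + 6u[i] - 4u[i+1] + u[i+2]) / 16 per point."""
    n = len(u)
    d = [0.0] * n
    for i in range(2, n - 2):        # stencil needs two neighbours each side
        d[i] = -eps * (u[i-2] - 4*u[i-1] + 6*u[i] - 4*u[i+1] + u[i+2]) / 16.0
    return d

# A grid-scale sawtooth mode is exactly what this operator targets.
u = [(-1.0) ** i for i in range(10)]
d = ko_dissipation(u, eps=0.5)
u = [ui + di for ui, di in zip(u, d)]
print(max(abs(x) for x in u[2:-2]))  # prints 0.5: the sawtooth is damped
```

In an evolution code the term is added to the right-hand side of each hyperbolic equation every time step, with eps chosen small enough not to affect the physical solution.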

  13. An axisymmetric gravitational collapse code

    Choptuik, Matthew W; Hirschmann, Eric W; Liebling, Steven L; Pretorius, Frans

    2003-01-01

    We present a new numerical code designed to solve the Einstein field equations for axisymmetric spacetimes. The long-term goal of this project is to construct a code that will be capable of studying many problems of interest in axisymmetry, including gravitational collapse, critical phenomena, investigations of cosmic censorship and head-on black-hole collisions. Our objective here is to detail the (2+1)+1 formalism we use to arrive at the corresponding system of equations and the numerical methods we use to solve them. We are able to obtain stable evolution, despite the singular nature of the coordinate system on the axis, by enforcing appropriate regularity conditions on all variables and by adding numerical dissipation to hyperbolic equations

  14. Dopamine reward prediction error coding.

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
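
The prediction-error computation described above can be sketched with a minimal temporal-difference update. This is a toy illustration; the learning rate, trial count, and reward values are invented.

```python
# Toy temporal-difference sketch of reward prediction errors:
# delta = received - predicted is the dopamine-like error signal.

def td_update(value, reward, alpha=0.1):
    """Move the prediction a fraction alpha toward the received reward."""
    delta = reward - value
    return value + alpha * delta, delta

value = 0.0
for _ in range(100):                  # a cue reliably followed by reward 1.0
    value, delta = td_update(value, reward=1.0)
print(round(value, 3), round(delta, 3))   # prediction near 1.0, error near 0

value2, delta2 = td_update(value, reward=0.0)   # reward suddenly omitted
print(round(delta2, 3))               # near -1.0: negative prediction error
```

The three printed cases mirror the abstract: positive error early in learning, near-zero error for a fully predicted reward, and a negative error when the predicted reward is omitted.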

  15. Predictive coding in Agency Detection

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly...... promising new framework for understanding perception and action, may solve pending theoretical inconsistencies in agency detection research, account for the puzzling experimental findings mentioned above, and provide hypotheses for future experimental testing. Predictive coding explains how the brain......, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high...
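
The top-down interference the article describes can be made concrete with a toy Bayesian sketch. All numbers are invented: the point is only that, under Bayes' rule, a strong prior for "agent present" turns ambiguous evidence into a confident agency percept.

```python
# Toy Bayesian sketch of predictive-coding-style agency detection: the
# same ambiguous evidence ("a rustle") yields very different posteriors
# depending on the strength of the top-down prior.

def posterior(prior_agent, p_rustle_given_agent, p_rustle_given_wind):
    """Bayes' rule for P(agent | rustle) with two competing hypotheses."""
    num = prior_agent * p_rustle_given_agent
    den = num + (1.0 - prior_agent) * p_rustle_given_wind
    return num / den

print(round(posterior(0.5, 0.6, 0.5), 2))   # weak prior   -> 0.55
print(round(posterior(0.9, 0.6, 0.5), 2))   # strong prior -> 0.92
```

On this view a false positive in agency detection is not a faulty detector firing but a well-functioning Bayesian system dominated by its prior.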

  16. Forms and Linear Network Codes

    Hansen, Johan P.

    We present a general theory to obtain linear network codes utilizing forms and obtain explicit families of equidimensional vector spaces, in which any pair of distinct vector spaces intersect in the same small dimension. The theory is inspired by the methods of the author utilizing the osculating...... spaces of Veronese varieties. Linear network coding transmits information in terms of a basis of a vector space and the information is received as a basis of a possibly altered vector space. Ralf Koetter and Frank R. Kschischang introduced a metric on the set af vector spaces and showed that a minimal...... distance decoder for this metric achieves correct decoding if the dimension of the intersection of the transmitted and received vector space is sufficiently large. The vector spaces in our construction are equidistant in the above metric and the distance between any pair of vector spaces is large making...
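
The Koetter-Kschischang subspace metric mentioned above can be computed in a few lines over GF(2). The vectors and spaces below are invented for the example; the identity used is d(U, V) = dim U + dim V - 2 dim(U ∩ V) = 2 dim(U + V) - dim U - dim V.

```python
# Sketch of the Koetter-Kschischang subspace distance over GF(2). Each
# basis vector is packed into an integer, one bit per coordinate.

def rank_gf2(rows):
    """Rank of a list of GF(2) row vectors via Gaussian elimination."""
    pivots = {}                      # highest set bit -> reduced vector
    for v in rows:
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def subspace_distance(U, V):
    """Distance between the spans of the given row lists."""
    return 2 * rank_gf2(U + V) - rank_gf2(U) - rank_gf2(V)

U = [0b100, 0b010]                   # span{e1, e2}
V = [0b100, 0b001]                   # span{e1, e3}
print(subspace_distance(U, V))       # intersection span{e1} -> distance 2
```

A code whose vector spaces are pairwise far apart in this metric tolerates more erasures and injected packets before decoding fails, which is why the equidistant constructions in the paper are of interest.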

  17. Clean Code - Why you should care

    CERN. Geneva

    2015-01-01

- Martin Fowler Writing code is communication, not solely with the computer that executes it, but also with other developers and with oneself. A developer spends a lot of his working time reading and understanding code that was written by other developers or by himself in the past. The readability of the code is an important factor in the time needed to find a bug or add new functionality, which in turn has a big impact on productivity. Code that is difficult to understand, hard to maintain and refactor, and offers many spots for bugs to hide is not considered to be "clean code". But what can be considered "clean code", and what are the advantages of a strict application of its guidelines? In this presentation we will take a look at some typical "code smells" and at proposed guidelines to improve your coding skills and write cleaner code that is less bug-prone and easier to maintain.
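
A tiny before/after in the spirit of the talk (the names and numbers are invented, not taken from the presentation): the same logic, first with cryptic names and magic numbers, then rewritten so the intent is readable.

```python
# Before: a typical "code smell" -- what do f, x, 100 and 0.2 mean?
def f(x):
    if x > 100:
        return x - x * 0.2
    return x

# After: identical behaviour, but the intent now lives in the names.
BULK_THRESHOLD = 100
BULK_DISCOUNT_RATE = 0.2

def apply_bulk_discount(order_total):
    """Discount large orders; small orders are unchanged."""
    if order_total > BULK_THRESHOLD:
        return order_total * (1 - BULK_DISCOUNT_RATE)
    return order_total

print(f(200), apply_bulk_discount(200))   # identical results: 160.0 160.0
```

Neither version is faster or more correct than the other; the clean one simply costs the next reader far less time, which is the productivity argument the talk makes.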

  18. Implementation of LT codes based on chaos

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
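
The encoding step can be sketched as follows. This is a hedged illustration of the paper's idea, not its implementation: the Kent map parameter a = 0.7, the toy degree distribution, and the four-symbol source block are all invented for the example.

```python
# Sketch of LT encoding driven by a Kent chaotic map: the chaotic state
# picks each encoding symbol's degree and neighbours, in place of a
# linear congruential generator.

def kent_map(x, a=0.7):
    """One iteration of the Kent (skew-tent) chaotic map on (0, 1)."""
    return x / a if x < a else (1.0 - x) / (1.0 - a)

def encode_symbol(source, x, degrees=(1, 2, 3)):
    """Build one encoding symbol: chaotic state picks degree, then neighbours."""
    x = kent_map(x)
    d = degrees[int(x * len(degrees))]          # degree from a toy distribution
    neighbours, value = set(), 0
    while len(neighbours) < d:
        x = kent_map(x)
        neighbours.add(int(x * len(source)))    # neighbour index in the source
    for i in neighbours:
        value ^= source[i]                      # XOR of the chosen symbols
    return value, neighbours, x

source = [0x12, 0x34, 0x56, 0x78]
value, neighbours, x = encode_symbol(source, x=0.123456)
print(sorted(neighbours), hex(value))
```

A real LT encoder would draw degrees from the robust soliton distribution; the point here is only that a deterministic chaotic trajectory can supply the required pseudo-randomness.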

  19. Gray Code for Cayley Permutations

    J.-L. Baril

    2003-10-01

A length-n Cayley permutation p of a totally ordered set S is a length-n sequence of elements from S, subject to the condition that if an element x appears in p then all elements y < x also appear in p. In this paper, we give a Gray code list for the set of length-n Cayley permutations. Two successive permutations in this list differ at most in two positions.
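
The object being listed is easy to enumerate, even though the sketch below is not the paper's Gray code construction itself: it simply generates all length-n Cayley permutations, i.e., sequences over {1..n} whose set of values is {1, ..., k} for some k.

```python
from itertools import product

# Enumerate all length-n Cayley permutations: if a value x is used, every
# smaller positive value must also be used (no value is "skipped").

def cayley_permutations(n):
    result = []
    for seq in product(range(1, n + 1), repeat=n):
        if set(seq) == set(range(1, max(seq) + 1)):
            result.append(seq)
    return result

perms = cayley_permutations(3)
print(len(perms))   # 13, the ordered Bell (Fubini) number for n = 3
```

The paper's contribution is an ordering of exactly this set in which consecutive sequences differ in at most two positions; the counts here just confirm what is being ordered.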

  20. The Accurate Particle Tracer Code

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...