WorldWideScience

Sample records for code input physics

  1. ETFOD: a point model physics code with arbitrary input

    Energy Technology Data Exchange (ETDEWEB)

    Rothe, K.E.; Attenberger, S.E.

    1980-06-01

    ETFOD is a zero-dimensional code which solves a set of physics equations by minimization. The solution technique differs from the usual approach in that the input is arbitrary: the user is supplied with a set of variables and specifies which of them are input (unchanging); the remaining variables become the output. Presently the code is being used for ETF reactor design studies. The code was written in a manner that allows easy modification of equations, variables, and physics calculations. The solution technique is presented along with hints for using the code.
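
    The input-arbitrary scheme described above can be sketched generically: freeze any subset of variables and recover the rest by driving the equation residuals to zero. The toy equations and variable names below are hypothetical, not ETFOD's actual equation set.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "physics equations" written as residuals f(v) = 0; ETFOD-style, any
# subset of the variables may be frozen and the rest are solved for.
def residuals(free_vals, free_names, fixed):
    v = dict(fixed)
    v.update(zip(free_names, free_vals))
    return [
        v["p"] - v["n"] * v["T"],                      # toy equation of state
        v["P_fus"] - 0.5 * v["n"] ** 2 * v["T"] ** 2,  # toy power-balance relation
    ]

fixed = {"n": 2.0, "T": 10.0}   # variables the user declares "input" (unchanging)
free_names = ["p", "P_fus"]     # the remaining variables become the output
guess = [1.0, 1.0]

sol = least_squares(residuals, guess, args=(free_names, fixed))
print(dict(zip(free_names, sol.x)))   # -> p = 20.0, P_fus = 200.0
```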

  2. Physical Layer Network Coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Yomo, Hironori; Popovski, Petar

    2013-01-01

    Physical layer network coding (PLNC) has the potential to improve throughput of multi-hop networks. However, most of the works are focused on the simple, three-node model with two-way relaying, not taking into account the fact that there can be other neighboring nodes that can cause/receive inter...

  3. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...

  4. Automation of RELAP5 input calibration and code validation using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Phung, Viet-Anh, E-mail: vaphung@kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden); Vorobyev, Yury, E-mail: yura3510@gmail.com [National Research Center “Kurchatov Institute”, Kurchatov square 1, Moscow 123182 (Russian Federation); Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se [Division of Nuclear Power Safety, Royal Institute of Technology, Roslagstullsbacken 21, 10691 Stockholm (Sweden)

    2016-04-15

    Highlights: • Automated input calibration and code validation using a genetic algorithm is presented. • Predictions generally overlap experiments for individual system response quantities (SRQs). • It was not possible to predict the experimental maximum flow rate and oscillation period simultaneously. • Simultaneous consideration of multiple SRQs is important for code validation. - Abstract: Validation of system thermal-hydraulic codes is an important step in the application of the codes to reactor safety analysis. The goal of the validation process is to determine how well a code can represent physical reality. This is achieved by comparing predicted and experimental system response quantities (SRQs), taking into account experimental and modelling uncertainties. Parameters which are required for the code input but not measured directly in the experiment can become an important source of uncertainty in the code validation process. Quantification of such parameters is often called input calibration. Calibration and uncertainty quantification may become challenging tasks when the number of calibrated input parameters and SRQs is large and the dependencies between them are complex. If only engineering judgment is employed in the process, the outcome can be prone to so-called “user effects”. The goal of this work is to develop an automated approach to input calibration and RELAP5 code validation against data on two-phase natural circulation flow instability. Multiple SRQs are used in both calibration and validation. In the input calibration, we used a genetic algorithm (GA), a heuristic global optimization method, to minimize the discrepancy between experimental and simulation data by identifying optimal combinations of uncertain input parameters. We demonstrate the importance of the proper selection of SRQs and the respective normalization and weighting factors in the fitness function. In the code validation, we used maximum flow rate as the
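
    A minimal sketch of the calibration loop described above, assuming a stand-in `run_simulation` in place of an actual RELAP5 run (the parameter names, SRQs, bounds, and weights below are all hypothetical): a GA evolves candidate input-parameter sets against a normalized, weighted multi-SRQ fitness function.

```python
import random

# Hypothetical stand-in for a RELAP5 run: maps calibrated input parameters
# to predicted system response quantities (SRQs).
def run_simulation(params):
    k1, k2 = params
    return {"max_flow": 3.0 * k1 + k2, "osc_period": 10.0 / (k1 + 0.1 * k2)}

EXPERIMENT = {"max_flow": 4.2, "osc_period": 7.7}   # measured SRQs (invented)
WEIGHTS    = {"max_flow": 1.0, "osc_period": 1.0}   # weighting choice matters (see text)
BOUNDS     = [(0.1, 2.0), (0.1, 2.0)]               # uncertain input ranges

def fitness(params):
    pred = run_simulation(params)
    # Normalized, weighted discrepancy over multiple SRQs (negated: higher is better).
    return -sum(WEIGHTS[q] * ((pred[q] - EXPERIMENT[q]) / EXPERIMENT[q]) ** 2
                for q in EXPERIMENT)

def evolve(pop_size=40, generations=60, mut_rate=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            if random.random() < mut_rate:                 # Gaussian mutation, clamped
                i = random.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

print("calibrated inputs:", evolve())
```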

  5. ATHENA code manual. Volume 2. Input and data requirements

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation.

  6. User input verification and test driven development in the NJOY21 nuclear data processing code

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Conlin, Jeremy Lloyd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McCartney, Austin Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-21

    Before physically-meaningful data can be used in nuclear simulation codes, the data must be interpreted and manipulated by a nuclear data processing code so as to extract the relevant quantities (e.g. cross sections and angular distributions). Perhaps the most popular and widely-trusted of these processing codes is NJOY, which has been developed and improved over the course of 10 major releases since its creation at Los Alamos National Laboratory in the mid-1970s. The current phase of NJOY development is the creation of NJOY21, which will be a vast improvement over its predecessor, NJOY2016. Designed to be fast, intuitive, accessible, and capable of handling both established and modern formats of nuclear data, NJOY21 will address many issues that NJOY users face, while remaining functional for those who prefer the existing format. Although early in its development, NJOY21 already provides validation checks on user input. By providing rapid and helpful responses to users while they write input files, NJOY21 will prove more intuitive and easier to use than any of its predecessors. Furthermore, during its development, NJOY21 is subject to regular testing, such that its test coverage must strictly increase with the addition of any production code. This thorough testing will allow developers and NJOY users to establish confidence in NJOY21 as it gains functionality. This document serves as a discussion of the current state of input checking and testing practices in NJOY21.

  7. Flexible and re-configurable optical three-input XOR logic gate of phase-modulated signals with multicast functionality for potential application in optical physical-layer network coding.

    Science.gov (United States)

    Lu, Guo-Wei; Qin, Jun; Wang, Hongxiang; Ji, XuYuefeng; Sharif, Gazi Mohammad; Yamaguchi, Shigeru

    2016-02-08

    Optical logic gates, especially the exclusive-or (XOR) gate, play an important role in accomplishing photonic computing and various network functionalities in future optical networks. On the other hand, optical multicast is another indispensable functionality for efficiently delivering information in optical networks. In this paper, for the first time, we propose and experimentally demonstrate a flexible optical three-input XOR gate scheme for multiple phase-modulated input signals with a 1-to-2 multicast functionality for each XOR operation, using the four-wave mixing (FWM) effect in a single piece of highly nonlinear fiber (HNLF). Through FWM in the HNLF, all of the possible XOR operations among the input signals can be realized simultaneously by sharing a single piece of HNLF. By selecting the obtained XOR components with a subsequent wavelength-selective component, the number of XOR gates and the light participating in the XOR operations can be flexibly configured. The re-configurability of the proposed XOR gate and the integration of the optical logic gate and multicast functions in a single device offer flexibility in network design and improve network efficiency. We experimentally demonstrate a flexible three-input XOR gate for four 10-Gbaud binary phase-shift keying signals with a multicast scale of 2. Error-free operation for the obtained XOR results is achieved. A potential application of the integrated XOR and multicast function in network coding is also discussed.
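
    The phase arithmetic that lets an FWM product compute XOR can be checked numerically: for BPSK phases in {0, π}, an idler whose phase is φ1 + φ2 − φ3 carries exactly the three-input XOR. This toy check illustrates the principle only, not the paper's experimental configuration.

```python
import cmath
import itertools

def fwm_xor(b1, b2, b3):
    """Idealized FWM idler: field ~ E1 * E2 * conj(E3), so its phase is
    phi1 + phi2 - phi3. For BPSK (phi = b*pi) that phase equals XOR(b1,b2,b3)*pi."""
    E = [cmath.exp(1j * cmath.pi * b) for b in (b1, b2, b3)]
    idler = E[0] * E[1] * E[2].conjugate()
    return 0 if idler.real > 0 else 1   # phase 0 -> bit 0, phase pi -> bit 1

# Exhaustively verify the truth table of the three-input XOR.
for bits in itertools.product((0, 1), repeat=3):
    assert fwm_xor(*bits) == bits[0] ^ bits[1] ^ bits[2]
print("three-input XOR via FWM phase arithmetic verified")
```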

  8. HOTSPOT Health Physics codes for the PC

    Energy Technology Data Exchange (ETDEWEB)

    Homann, S.G.

    1994-03-01

    The HOTSPOT Health Physics codes were created to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. HOTSPOT codes are a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, HOTSPOT codes produce a consistent output for the same input assumptions and minimize the probability of errors associated with reading a graph incorrectly or scaling a universal nomogram during an emergency. The HOTSPOT codes are designed for short-term (less than 24 hours) release durations. Users requiring radiological release consequences for release scenarios over a longer time period, e.g., annual wind-rose data, are directed to such long-term models as CAP88-PC (Parks, 1992). Users requiring more sophisticated modeling capabilities, e.g., complex terrain or multi-location real-time wind field data, are directed to such capabilities as the Department of Energy's ARAC computer codes (Sullivan, 1993). Four general programs -- Plume, Explosion, Fire, and Resuspension -- calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Other programs deal with the release of plutonium, uranium, and tritium to expedite an initial assessment of accidents involving nuclear weapons. Additional programs estimate the dose commitment from the inhalation of any one of the radionuclides listed in the database of radionuclides; calibrate a radiation survey instrument for ground-survey measurements; and screen plutonium uptake in the lung (see FIDLER Calibration and LUNG Screening sections).
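
    Downwind assessments of this kind rest on a Gaussian plume model; a textbook sketch of the ground-level centerline form is shown below. The dispersion-coefficient fits are generic Briggs-type curves for stability class D and the numbers are illustrative, not HOTSPOT's exact parameterization.

```python
import math

def sigma_y(x_m):
    """Horizontal dispersion (m), illustrative Briggs rural fit, class D."""
    return 0.08 * x_m / math.sqrt(1.0 + 0.0001 * x_m)

def sigma_z(x_m):
    """Vertical dispersion (m), illustrative Briggs rural fit, class D."""
    return 0.06 * x_m / math.sqrt(1.0 + 0.0015 * x_m)

def plume_centerline(Q_bq_s, u_m_s, H_m, x_m):
    """Ground-level centerline air concentration (Bq/m^3) of a Gaussian plume
    with total ground reflection; a textbook form, not HOTSPOT's exact model."""
    sy, sz = sigma_y(x_m), sigma_z(x_m)
    return (Q_bq_s / (math.pi * u_m_s * sy * sz)) * math.exp(-H_m**2 / (2 * sz**2))

# 1 GBq/s release, 3 m/s wind, 10 m effective release height, 500 m downwind
print(f"{plume_centerline(1e9, 3.0, 10.0, 500.0):.3e} Bq/m^3")
```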

  9. Waste treatment in physical input-output analysis

    NARCIS (Netherlands)

    Dietzenbacher, E

    2005-01-01

    When compared to monetary input-output tables (MIOTs), a distinctive feature of physical input-output tables (PIOTs) is that they include the generation of waste as part of a consistent accounting framework. As a consequence, however, physical input-output analysis requires that the treatment

  10. Input coding for neuro-electronic hybrid systems.

    Science.gov (United States)

    George, Jude Baby; Abraham, Grace Mathew; Singh, Katyayani; Ankolekar, Shreya M; Amrutur, Bharadwaj; Sikdar, Sujit Kumar

    2014-12-01

    Liquid State Machines have been proposed as a framework to explore the computational properties of neuro-electronic hybrid systems (Maass et al., 2002). Here the neuronal culture implements a recurrent network and is followed by an array of linear discriminants implemented using perceptrons in electronics/software. Thus, in this framework, it is desired that the outputs of the neuronal network, corresponding to different inputs, be linearly separable. Previous studies have demonstrated this by using only a small set of input stimulus patterns to the culture (Hafizovic et al., 2007), a large number of input electrodes (Dockendorf et al., 2009), or complex schemes to post-process the outputs of the neuronal culture prior to linear discrimination (Ortman et al., 2011). In this study we explore ways to temporally encode inputs into stimulus patterns using a small set of electrodes such that the neuronal culture's output can be directly decoded by simple linear discriminants based on perceptrons. We demonstrate that the network can detect the timing and order of firing of inputs on multiple electrodes. Based on this, we demonstrate that the neuronal culture can be used as a kernel to transform inputs which are not linearly separable in a low-dimensional space into outputs in a high dimension where they are linearly separable. Simple linear discriminants can thus be directly connected to the outputs of the neuronal culture, allowing implementation of any function for such a hybrid system. Copyright © 2014. Published by Elsevier Ireland Ltd.
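
    A minimal sketch of the readout idea, with random high-dimensional templates standing in for recorded culture activity (everything here is synthetic; the real experiments decode multi-electrode recordings): a plain perceptron serves as the linear discriminant.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 200                                   # readout dimension (electrode features)
TEMPLATES = rng.standard_normal((2, DIM))   # per-class "culture state" templates

def culture_response(stim_class, noise=0.8):
    """Stand-in for the neuronal culture: projects a stimulus class into a
    high-dimensional state where classes become linearly separable."""
    return TEMPLATES[stim_class] + noise * rng.standard_normal(DIM)

# Train a perceptron readout (the "linear discriminant" of the LSM framework).
w, b = np.zeros(DIM), 0.0
for _ in range(500):
    y = rng.integers(0, 2)                  # stimulus label
    x = culture_response(y)
    pred = 1 if x @ w + b > 0 else 0
    if pred != y:                           # classic perceptron update rule
        w += (y - pred) * x
        b += (y - pred)

# Evaluate on fresh noisy responses.
tests = [(y, culture_response(y)) for y in rng.integers(0, 2, 200)]
acc = np.mean([(1 if x @ w + b > 0 else 0) == y for y, x in tests])
print(f"readout accuracy: {acc:.2f}")
```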

  11. Statistical physics, optimization and source coding

    Indian Academy of Sciences (India)

    Statistical physics, optimization and source coding. Riccardo Zecchina. Pramana – Journal of Physics, Volume 64, Issue 6, June ... Invited Talks: Topic 12. Other applications of statistical physics (networks, traffic flows, algorithmic problems, econophysics, astrophysical applications, etc.)

  12. LASIP-III, a generalized processor for standard interface files. [For creating binary files from BCD input data and printing binary file data in BCD format (devised for fast reactor physics codes)]

    Energy Technology Data Exchange (ETDEWEB)

    Bosler, G.E.; O'Dell, R.D.; Resnik, W.M.

    1976-03-01

    The LASIP-III code was developed for processing Version III standard interface data files which have been specified by the Committee on Computer Code Coordination. This processor performs two distinct tasks, namely, transforming free-field-format BCD data into well-defined binary files and providing for printing and punching data in the binary files. While LASIP-III is exported as a complete free-standing code package, techniques are described for easily separating the processor into two modules, viz., one for creating the binary files and one for printing the files. The two modules can be separated into free-standing codes or they can be incorporated into other codes. Also, the LASIP-III code can be easily expanded for processing additional files, and procedures are described for such an expansion. 2 figures, 8 tables.

  13. DOG-IV revised input generator program for DORT-TORT code

    Energy Technology Data Exchange (ETDEWEB)

    Saitou, Masato; Handa, Hiroyuki; Hayashi, Katsumi [Hitachi Engineering Co. Ltd., Ibaraki (Japan); Kamogawa, Susumu; Yamada, Koubun [Business Automation Co. Ltd., Tokyo (Japan)

    2000-03-01

    We developed a new version, the DOG-IV program, in which a function for generating input data for the DORT-TORT code was added to the old functions and operating performance was improved. All input data are created interactively in front of a color display, without consulting the DORT-TORT manual. Geometry-related input is also created easily, without calculation of precise curved mesh points. By using DOG-IV, reliable input data for DORT-TORT are obtained easily and quickly, and time savings in preparing input data can be expected. (author)

  14. INTRA/Mod3.2. Manual and Code Description. Volume I - Physical Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Jenny; Edlund, O.; Hermann, J.; Johansson, Lise-Lotte

    1999-01-01

    The INTRA Manual consists of two volumes. Volume I is a thorough description of the INTRA code, the physical modelling of INTRA, and the governing numerical methods; Volume II, the User's Manual, is an input description. This document, the Physical Modelling of INTRA, contains code characteristics, integration methods and applications.

  15. Evaluation of severe accident risks: Quantification of major input parameters: MACCS (MELCOR Accident Consequence Code System) input

    Energy Technology Data Exchange (ETDEWEB)

    Sprung, J.L.; Jow, H-N (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Helton, J.C. (Arizona State Univ., Tempe, AZ (USA))

    1990-12-01

    Estimation of offsite accident consequences is the customary final step in a probabilistic assessment of the risks of severe nuclear reactor accidents. Recently, the Nuclear Regulatory Commission reassessed the risks of severe accidents at five US power reactors (NUREG-1150). Offsite accident consequences for NUREG-1150 source terms were estimated using the MELCOR Accident Consequence Code System (MACCS). Before these calculations were performed, most MACCS input parameters were reviewed, and for each parameter reviewed, a best-estimate value was recommended. This report presents the results of these reviews. Specifically, recommended values and the basis for their selection are presented for MACCS atmospheric and biospheric transport, emergency response, food pathway, and economic input parameters. Dose conversion factors and health effect parameters are not reviewed in this report. 134 refs., 15 figs., 110 tabs.

  16. 78 FR 24725 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2013-04-26

    [Flattened excerpt of a Federal Register table of NFPA documents open for public input, with revision cycles and comment deadlines; recoverable entries include Wastewater Treatment and Collection Facilities, NFPA 900--2013 Building Energy Code, Extraction Plants, and NFPA 40--2011 Standard for the Storage and Handling of Cellulose...]

  17. Design of a Code-Maker Translator Assistive Input Device with a Contest Fuzzy Recognition Algorithm for the Severely Disabled

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2015-01-01

    This study developed an assistive system for people with severe physical disabilities, named the “code-maker translator assistive input device”, which utilizes a contest fuzzy recognition algorithm and Morse code encoding to provide keyboard and mouse functions for users to access a standard personal computer, smartphone, and tablet PC. This assistive input device has seven features: small size, easy installation, modular design, simple maintenance, functionality, very flexible input interface selection, and scalability of system functions when the device is combined with computer application software or APP programs. Users with severe physical disabilities can use this device to operate the various functions of computers, smartphones, and tablet PCs, such as sending e-mail, Internet browsing, playing games, and controlling home appliances. A patient with a brain artery malformation participated in this study. The analysis showed that the subject became familiar with operating the long/short tones of Morse code within one month. In the future, we hope this system can help more people in need.
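
    A minimal sketch of the Morse-encoding side of such a device: long/short tone durations accumulate into Morse symbols that map to characters, which the device would then translate into keyboard/mouse commands. The table below is standard international Morse, not the paper's actual command set, and the duration threshold is hypothetical.

```python
# Standard international Morse for a few characters; the assistive device maps
# decoded characters onto keyboard/mouse commands (command set not shown here).
MORSE = {".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
         "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J"}

def decode(tones, dash_threshold=0.3):
    """tones: list of tone durations in seconds, with None marking letter gaps."""
    out, symbol = [], ""
    for t in tones + [None]:               # trailing None flushes the last letter
        if t is None:                      # letter boundary: look up symbol
            if symbol:
                out.append(MORSE.get(symbol, "?"))
            symbol = ""
        else:
            symbol += "-" if t >= dash_threshold else "."
    return "".join(out)

# four short tones = H, two short tones = I
print(decode([0.1, 0.1, 0.1, 0.1, None, 0.1, 0.1]))  # -> "HI"
```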

  18. Input-dependent frequency modulation of cortical gamma oscillations shapes spatial synchronization and enables phase coding.

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-02-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25-80Hz) may play a significant role in information processing, for example by neural grouping ('binding') and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency codes
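
    The reduction to weakly coupled oscillators can be made concrete with a two-oscillator phase model (a Kuramoto-type sketch under simplifying assumptions, not the authors' full biophysical network): the phase difference obeys dφ/dt = Δω − 2K sin φ, locking when the detuning |Δω| is below 2K, and the locked phase encodes the input difference via sin φ* = Δω/(2K).

```python
import numpy as np

def phase_lock(delta_omega, K, dt=1e-3, steps=200_000):
    """Two Kuramoto phase oscillators: dphi/dt = delta_omega - 2*K*sin(phi).
    Returns the final phase difference (the locked value if |delta_omega| <= 2K)."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (delta_omega - 2.0 * K * np.sin(phi))
    return np.mod(phi + np.pi, 2 * np.pi) - np.pi   # wrap to (-pi, pi]

# Detuning below the locking threshold 2K: the phase difference encodes the
# *input difference* (sin(phi*) = delta_omega / (2K)), not the absolute level.
for d in (0.5, 1.0, 1.5):        # rad/s detuning, with K = 1 rad/s
    print(f"detuning {d}: phase diff {phase_lock(d, 1.0):+.3f} rad")
```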

  19. Input-Dependent Frequency Modulation of Cortical Gamma Oscillations Shapes Spatial Synchronization and Enables Phase Coding

    Science.gov (United States)

    Lowet, Eric; Roberts, Mark; Hadjipapas, Avgis; Peter, Alina; van der Eerden, Jan; De Weerd, Peter

    2015-01-01

    Fine-scale temporal organization of cortical activity in the gamma range (∼25–80Hz) may play a significant role in information processing, for example by neural grouping (‘binding’) and phase coding. Recent experimental studies have shown that the precise frequency of gamma oscillations varies with input drive (e.g. visual contrast) and that it can differ among nearby cortical locations. This has challenged theories assuming widespread gamma synchronization at a fixed common frequency. In the present study, we investigated which principles govern gamma synchronization in the presence of input-dependent frequency modulations and whether they are detrimental for meaningful input-dependent gamma-mediated temporal organization. To this aim, we constructed a biophysically realistic excitatory-inhibitory network able to express different oscillation frequencies at nearby spatial locations. Similarly to cortical networks, the model was topographically organized with spatially local connectivity and spatially-varying input drive. We analyzed gamma synchronization with respect to phase-locking, phase-relations and frequency differences, and quantified the stimulus-related information represented by gamma phase and frequency. By stepwise simplification of our models, we found that the gamma-mediated temporal organization could be reduced to basic synchronization principles of weakly coupled oscillators, where input drive determines the intrinsic (natural) frequency of oscillators. The gamma phase-locking, the precise phase relation and the emergent (measurable) frequencies were determined by two principal factors: the detuning (intrinsic frequency difference, i.e. local input difference) and the coupling strength. In addition to frequency coding, gamma phase contained complementary stimulus information. Crucially, the phase code reflected input differences, but not the absolute input level. This property of relative input-to-phase conversion, contrasting with latency

  20. RELAP5/MOD3 code manual: User's guide and input requirements. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. Volume II contains detailed instructions for code application and input data preparation.

  1. The neutron transport code DTF-Traca user's manual and input data

    Energy Technology Data Exchange (ETDEWEB)

    Ahnert, C.

    1979-07-01

    This is a user's manual for the neutron transport code DTF-TRACA, a version of the original DTF-IV with some modifications made at JEN. A detailed input data description is given. The new options developed at JEN are also included. (Author) 18 refs.

  2. ATHENA AIDE: An expert system for ATHENA code input model preparation

    Energy Technology Data Exchange (ETDEWEB)

    Fink, R.K.; Callow, R.A.; Larson, T.K.; Ransom, V.H.

    1987-01-01

    An expert system called the ATHENA AIDE, which assists in the preparation of input models for the ATHENA thermal-hydraulics code, has been developed by researchers at the Idaho National Engineering Laboratory. The ATHENA AIDE uses a menu-driven graphics interface and rule-based and object-oriented programming techniques to assist users of the ATHENA code in preparing the card-image input files required to run ATHENA calculations. The ATHENA AIDE was developed on, and currently runs on, single-user Xerox artificial intelligence workstations. Experience has shown that the intelligent modeling environment provided by the ATHENA AIDE expert system eases the modeling task by relieving the analyst of many mundane, repetitive, and error-prone procedures involved in the construction of an input model. This reduces errors in the resulting models, helps promote standardized modeling practices, and allows models to be constructed more quickly than was previously possible. 5 refs., 4 figs.

  3. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    Science.gov (United States)

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  4. TASS/SMR Code Topical Report for SMART Plant, Vol II: User's Guide and Input Requirement

    Energy Technology Data Exchange (ETDEWEB)

    Kim, See Darl; Kim, Soo Hyoung; Kim, Hyung Rae (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including both non-LOCA events and loss-of-coolant accidents (LOCA). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  5. Speech input system for meat inspection and pathological coding used thereby

    Science.gov (United States)

    Abe, Shozo

    Meat inspection is one of the exclusive and important jobs of veterinarians, though this is not well known in general. As the inspection should be conducted skillfully during a series of continuous operations in a slaughterhouse, the development of automatic inspection systems has long been required. We employed a hands-free speech input system to record the inspection data because inspectors have to use both hands to treat the internal organs of cattle and check their health condition by the naked eye. The data collected by the inspectors are transferred to a speech recognizer and then stored as controllable data for each animal inspected. Control of the terms to be input, such as pathological conditions, and their coding are also important in this speech input system, and practical examples are shown.

  6. Physical and Monetary Input-Output Analysis: What Makes the Difference?

    OpenAIRE

    Helga Weisz; Faye Duchin

    2004-01-01

    A recent paper in which embodied land appropriation of exports was calculated using a physical input-output model (Ecological Economics 44 (2003) 137-151) initiated a discussion in this journal concerning the conceptual differences between input-output models using a coefficient matrix based on physical input-output tables (PIOTs) in a single unit of mass and input-output models using a coefficient matrix based on monetary input-output tables (MIOTs) extended by a coefficient vector of physic...
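
    The quantity model both table types feed is the Leontief system x = (I − A)⁻¹ y; the sketch below runs it on a toy physical input-output table in mass units, with hypothetical numbers and a hypothetical direct-waste coefficient vector of the kind the discussion concerns.

```python
import numpy as np

# Toy 2-sector physical input-output coefficient matrix (mass units, e.g. Mt):
# A[i, j] = input from sector i required per unit physical output of sector j.
A = np.array([[0.10, 0.30],
              [0.20, 0.05]])
y = np.array([50.0, 30.0])       # final demand, in the same physical units

# Leontief quantity model: total output x satisfying x = A @ x + y.
x = np.linalg.solve(np.eye(2) - A, y)
print("total sectoral output:", x)

# Embodied-input accounting (e.g. land or waste per unit output):
w = np.array([0.4, 0.9])         # hypothetical direct waste coefficients
print("waste embodied in final demand:", w @ x)
```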

  7. Physics behind the mechanical nucleosome positioning code.

    Science.gov (United States)

    Zuiddam, Martijn; Everaers, Ralf; Schiessel, Helmut

    2017-11-01

    The positions along DNA molecules of nucleosomes, the most abundant DNA-protein complexes in cells, are influenced by the sequence-dependent DNA mechanics and geometry. This leads to the "nucleosome positioning code", a preference of nucleosomes for certain sequence motifs. Here we introduce a simplified model of the nucleosome in which a coarse-grained DNA molecule is frozen into an idealized superhelical shape. We calculate the exact sequence preferences of our nucleosome model and find that it reproduces qualitatively all the main features known to influence nucleosome positions. Moreover, well-controlled approximations to this model allow us to come to a detailed understanding of the physics behind the sequence preferences of nucleosomes.

  8. Development status of reactor physics codes in cosine project

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yixue; Yu, Hui; Liu, Zhanquan; Zhang, Bin; Qiu, Chunhua; Wang, Changhui; Wang, Su; Hu, Xiaoyu; Xu, Hua; Li, Shuo; Liu, Zhiyan; Shen, Feng; Yang, Yanhua [State Nuclear Power Software Development Center, Beijing (China)

    2012-03-15

    COre and System Integrated Engine for design and analysis (COSINE) is the newly approved key NPP software R&D project in China. This paper gives an overview of the COSINE code package, especially the reactor physics subsystem. The code architecture, the theoretical physics models and the current progress are introduced. The background behind the code package is briefly summarized. The detailed discussion here is limited to the lattice physics code (LATC), the core analysis code (CORE) and the neutron kinetics code (KIND), which are currently under development.

  9. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology

    Directory of Open Access Journals (Sweden)

    Shuo Chen

    2018-01-01

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single-input multiple-output (SIMO) technology, which can sharply reduce the coding and sampling times. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent, for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and are then synthesized together to obtain the complete high-resolution image. For each imaging cell, the multi-resolution imaging method helps to reduce the computational burden of the large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging of 3D targets in much less time, and has great potential in applications such as security screening, nondestructive detection, and medical diagnosis.

  10. The Balance of Excitatory and Inhibitory Synaptic Inputs for Coding Sound Location

    Science.gov (United States)

    Ono, Munenori

    2014-01-01

    The localization of high-frequency sounds in the horizontal plane uses an interaural level difference (ILD) cue, yet little is known about the synaptic mechanisms that underlie processing of this cue in the inferior colliculus (IC) of the mouse. Here, we study the synaptic currents that process ILD in vivo, using stimuli in which ILD varies around a constant average binaural level (ABL) to approximate sounds on the horizontal plane. Monaural stimulation in either ear produced EPSCs and IPSCs in most neurons. The temporal properties of the monaural responses were well matched, suggesting connected functional zones with matched inputs. The EPSCs showed three patterns in response to ABL stimuli, preferring the sound field with the highest-level stimulus: (1) contralateral; (2) bilateral, highly lateralized; or (3) at the center, near 0 ILD. EPSCs and IPSCs were well correlated except in center-preferring neurons. Summation of the monaural EPSCs predicted the binaural excitatory response, but less well than the summation of the monaural IPSCs. Binaural EPSCs often showed a nonlinearity that strengthened the response to specific ILDs. Extracellular spike and intracellular current recordings from the same neuron showed that the ILD tuning of the spikes was sharper than that of the EPSCs. Thus, in the IC, balanced excitatory and inhibitory inputs may be a general feature of synaptic coding for many types of sound processing. PMID:24599475

  11. The r-Java 2.0 code: nuclear physics

    Science.gov (United States)

    Kostka, M.; Koning, N.; Shand, Z.; Ouyed, R.; Jaikumar, P.

    2014-08-01

    Aims: We present r-Java 2.0, a nucleosynthesis code for open use that performs r-process calculations, along with a suite of other analysis tools. Methods: Equipped with a straightforward graphical user interface, r-Java 2.0 is capable of simulating nuclear statistical equilibrium (NSE), calculating r-process abundances for a wide range of input parameters and astrophysical environments, computing the mass fragmentation from neutron-induced fission and studying individual nucleosynthesis processes. Results: In this paper we discuss enhancements to this version of r-Java, especially the ability to solve the full reaction network. The sophisticated fission methodology incorporated in r-Java 2.0 that includes three fission channels (beta-delayed, neutron-induced, and spontaneous fission), along with computation of the mass fragmentation, is compared to the upper limit on mass fission approximation. The effects of including beta-delayed neutron emission on r-process yield is studied. The role of Coulomb interactions in NSE abundances is shown to be significant, supporting previous findings. A comparative analysis was undertaken during the development of r-Java 2.0 whereby we reproduced the results found in the literature from three other r-process codes. This code is capable of simulating the physical environment of the high-entropy wind around a proto-neutron star, the ejecta from a neutron star merger, or the relativistic ejecta from a quark nova. Likewise the users of r-Java 2.0 are given the freedom to define a custom environment. This software provides a platform for comparing proposed r-process sites.
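
    The core numerical task named above, solving the full reaction network, amounts to integrating a stiff ODE system for the abundances. The sketch below integrates a hypothetical three-isotope capture/decay chain with an implicit Euler step; the rates are invented for illustration, and r-Java 2.0's actual network is vastly larger.

```python
import numpy as np

# Hypothetical 3-isotope chain: Y0 -(neutron capture)-> Y1 -(beta decay)-> Y2.
rate_capture, rate_beta = 5.0, 1.0            # 1/s, illustrative only
M = np.array([[-rate_capture, 0.0,        0.0],
              [ rate_capture, -rate_beta, 0.0],
              [ 0.0,           rate_beta, 0.0]])

Y = np.array([1.0, 0.0, 0.0])                 # initial abundances
dt, t_end = 0.01, 5.0

# Implicit (backward) Euler, the standard choice for stiff nuclear networks:
# (I - dt*M) Y_new = Y_old.  Column sums of M are zero, so mass is conserved.
lhs = np.eye(3) - dt * M
for _ in range(int(t_end / dt)):
    Y = np.linalg.solve(lhs, Y)

print("abundances at t=5 s:", Y, "sum:", Y.sum())
```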

  12. VISWAM. A computer code package for thermal reactor physics computations

    Energy Technology Data Exchange (ETDEWEB)

    Jagannathan, V.; Thiyagarajan, T.K.; Ganesan, S.; Jain, R.P.; Pal, U. [Bhabha Atomic Research Centre, Mumbai (India); Karthikeyan, R. [Ecole Polytechnique de Montreal, Montreal, Quebec (Canada)

    2004-07-01

    The nuclear cross section data and reactor physics design methods developed over the past three decades have attained a high degree of reliability for thermal power reactor design and analysis. This is borne out by the analysis of physics commissioning experiments and several reactor-years of operational experience with the two types of Indian thermal power reactors, viz. BWRs and PHWRs. Our computational tools were also developed and tested against a large number of IAEA CRP benchmarks on in-core fuel management code package validation for modern BWR, PWR, VVER and PHWR designs. Though the computational algorithms are well tested, their mode of use has remained rather obsolete, since the codes were developed before modern high-speed, large-memory computers were available. The use of the Fortran language limits their potential use for varied applications. We are developing specific Visual Interface Software as the Work Aid support for effective Man-Machine interface (VISWAM). The VISWAM package, when fully developed and tested, will enable handling the input description of complex fuel assembly and reactor core geometries with ease. Selective display of the three-dimensional distribution of multi-group fluxes, power distribution and hot spots will provide good insight into the analysis and also enable intercomparison of different nuclear datasets and methods. Since the new package will be user-friendly, training of the requisite human resources for the expanding Indian nuclear power programme will be rendered easier, and the gap between an expert and a novice will be greatly reduced. (author)

  13. Optimizing Nuclear Physics Codes on the XT5

    Energy Technology Data Exchange (ETDEWEB)

    Hartman-Baker, Rebecca J [ORNL; Nam, Hai Ah [ORNL

    2011-01-01

    Scientists studying the structure and behavior of the atomic nucleus require immense high-performance computing resources to gain scientific insights. Several nuclear physics codes are capable of scaling to more than 100,000 cores on Oak Ridge National Laboratory's petaflop Cray XT5 system, Jaguar. In this paper, we present our work on optimizing codes in the nuclear physics domain.

  14. New Physics Search with Precision Experiments: Theory Input

    Science.gov (United States)

    Aleksejevs, A.; Barkanova, S.; Wu, S.; Zykunov, V.

    2016-04-01

    The best way to search for new physics is by using a diverse set of probes - not just experiments at the energy and cosmic frontiers, but also low-energy measurements relying on high precision and high luminosity. One example of such ultra-precision experiments is the MOLLER experiment planned at JLab, which will measure the parity-violating electron-electron scattering asymmetry and allow a determination of the weak mixing angle with a factor-of-five improvement in precision over its predecessor, E-158. At this precision, any inconsistency with the Standard Model should signal new physics. The paper explores how new physics particles enter at the next-to-leading-order one-loop level. For MOLLER we analyze the effects of a dark Z'-boson on the total calculated asymmetry, and show how this new-physics interaction carrier may influence the analysis of future experimental results.

  15. Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels

    Directory of Open Access Journals (Sweden)

    Yun Mao

    2012-01-01

    We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where decoding can be done in both inner-iteration and outer-iteration manners. The present code, called the low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It simultaneously supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.

  16. Establishing confidence in complex physics codes: Art or science?

    Energy Technology Data Exchange (ETDEWEB)

    Trucano, T. [Sandia National Labs., Albuquerque, NM (United States)

    1997-12-31

    The ALEGRA shock wave physics code, currently under development at Sandia National Laboratories and partially supported by the US Advanced Strategic Computing Initiative (ASCI), is generic to a certain class of physics codes: large, multi-application, intended to support a broad user community on the latest generation of massively parallel supercomputer, and in a continual state of formal development. To say that the author has "confidence" in the results of ALEGRA is to say something different than that he believes that ALEGRA is "predictive." It is the purpose of this talk to illustrate the distinction between these two concepts. The author elects to perform this task in a somewhat historical manner. He will summarize certain older approaches to code validation. He views these methods as aiming to establish the predictive behavior of the code. These methods are distinguished by their emphasis on local information. He will conclude that these approaches are more art than science.

  17. High-fidelity plasma codes for burn physics

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Graziani, Frank [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Murillo, Michael [Michigan State Univ., East Lansing, MI (United States)

    2016-10-19

    Accurate predictions of the equation of state (EOS) and of ionic and electronic transport properties are of critical importance for high-energy-density plasma science. Transport coefficients inform radiation-hydrodynamic codes and impact diagnostic interpretation, which in turn impacts our understanding of the development of instabilities, the overall energy balance of burning plasmas, and the efficacy of self-heating from charged-particle stopping. Important processes include thermal and electrical conduction, electron-ion coupling, inter-diffusion, ion viscosity, and charged-particle stopping. However, uncertainties in these coefficients are not well established. Fundamental plasma science codes, also called high-fidelity plasma codes (HFPC), are a relatively recent computational tool that augments both experimental data and the theoretical foundations of transport coefficients. This paper addresses the current status of HFPC codes, their future development, and the potential impact they can have in improving the predictive capability of the multi-physics hydrodynamic codes used in HED design.

  18. A parametric study of MELCOR Accident Consequence Code System 2 (MACCS2) Input Values for the Predicted Health Effect

    Energy Technology Data Exchange (ETDEWEB)

    Kim, So Ra; Min, Byung Il; Park, Ki Hyun; Yang, Byung Mo; Suh, Kyung Suk [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    The MELCOR Accident Consequence Code System 2, MACCS2, is the most widely used off-site consequence analysis code in the world. The MACCS2 code is used to estimate the radionuclide concentrations, radiological doses, health effects, and economic consequences that could result from hypothetical nuclear accidents. Most of the MACCS model parameter values are defined by the user, and those input parameters can have a significant impact on the output. A limited parametric study was performed to identify the relative importance of the value of each input parameter in determining the predicted early and latent health effects in MACCS2. These results are not applicable to every nuclear accident scenario, because only a limited set of calculations was performed with Kori-specific data. The endpoints of the assessment were the early- and latent-cancer risk in the exposed population; parametric studies for other endpoints, such as contamination level, absorbed dose, and economic cost, might therefore produce different results. Accident consequence assessment is important for decision making to minimize the health effects of radiation exposure; accordingly, sufficient parametric studies are required for the various endpoints and input parameters in further research.

  19. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

    A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for an equivalent coding operation. In doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
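
    A toy numeric illustration of the PNC mapping for BPSK (a simplification of the paper's scheme, with all parameters invented): two nodes transmit simultaneously, and the relay maps the superimposed signal straight to the XOR of the two bits without decoding either bit individually.

```python
import numpy as np

rng = np.random.default_rng(1)

def relay_xor(bit_a, bit_b, snr_db=15.0):
    """Relay observes the sum of two simultaneous BPSK signals plus noise and
    maps it straight to XOR(a, b): |sum| is ~2 when bits agree, ~0 when not."""
    s = (1 - 2 * bit_a) + (1 - 2 * bit_b)    # superimposed EM waves: -2, 0, or +2
    noise = rng.normal(0, 10 ** (-snr_db / 20) * np.sqrt(2))
    y = s + noise
    return 0 if abs(y) > 1.0 else 1          # PNC denoise-and-map rule

errors = sum(relay_xor(a, b) != (a ^ b)
             for a, b in rng.integers(0, 2, (10_000, 2)))
print(f"XOR mapping error rate: {errors / 10_000:.4f}")
```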

  20. Recent improvements of reactor physics codes in MHI

    Science.gov (United States)

    Kosaka, Shinya; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki

    2015-12-01

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions which had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  1. Recent improvements of reactor physics codes in MHI

    Energy Technology Data Exchange (ETDEWEB)

    Kosaka, Shinya, E-mail: shinya-kosaka@mhi.co.jp; Yamaji, Kazuya; Kirimura, Kazuki; Kamiyama, Yohei; Matsumoto, Hideki [Mitsubishi Heavy Industries, Ltd. (Japan)

    2015-12-31

    This paper introduces recent improvements to reactor physics codes at Mitsubishi Heavy Industries, Ltd. (MHI). MHI has developed a new neutronics design code system, Galaxy/Cosmo-S (GCS), for PWR core analysis. After TEPCO's Fukushima Daiichi accident, it became necessary to consider design extension conditions which had not been covered explicitly by the former safety licensing analyses. Under these circumstances, MHI made several improvements to the GCS code system. A new resonance calculation model for the lattice physics code and a homogeneous cross-section representation model for the core simulator have been developed to cover a wider range of core conditions, corresponding to severe accident states such as anticipated transient without scram (ATWS) and criticality evaluation of a dried-up spent fuel pit. As a result of these improvements, the GCS code system has very wide calculation applicability with good accuracy for any core conditions as long as the fuel is not damaged. In this paper, the outline of the GCS code system is described briefly and recent relevant development activities are presented.

  2. On the average complexity of sphere decoding in lattice space-time coded multiple-input multiple-output channel

    KAUST Repository

    Abediseid, Walid

    2012-12-21

    The exact average complexity analysis of the basic sphere decoder for general space-time codes applied to the multiple-input multiple-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, lattice space-time (LAST) coded MIMO channel. Specifically, we derive an upper bound on the tail distribution of the decoder's computational complexity. We show that when the computational complexity exceeds a certain limit, this upper bound becomes dominated by the outage probability achieved by LAST coding and sphere decoding schemes. We then calculate the minimum average computational complexity that is required by the decoder to achieve near-optimal performance in terms of the system parameters. Our results indicate that there exists a cut-off rate (multiplexing gain) for which the average complexity remains bounded. Copyright © 2012 John Wiley & Sons, Ltd.
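
    A compact depth-first sphere decoder for a real-valued channel makes the complexity quantity concrete: the number of nodes the search visits is the random variable whose tail the paper bounds. This is a generic textbook decoder over ±1 symbols, not the paper's LAST-coded setup, and the radius and problem sizes below are arbitrary.

```python
import numpy as np

def sphere_decode(H, y, radius, symbols=(-1, 1)):
    """Depth-first sphere decoder: find s (entries in `symbols`) minimizing
    ||y - H s||^2, pruning branches outside the current radius. Returns
    (best_s, nodes_visited); the radius must be large enough to contain
    at least one lattice point."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)          # ||y - Hs||^2 = ||z - Rs||^2 with z = Q^T y
    z = Q.T @ y
    best = {"s": None, "r2": radius ** 2, "nodes": 0}

    def search(level, s_partial, dist2):
        best["nodes"] += 1
        if dist2 >= best["r2"]:
            return                  # prune: partial metric already outside sphere
        if level < 0:
            best["s"], best["r2"] = list(s_partial), dist2   # shrink the sphere
            return
        for sym in symbols:
            upper = sum(R[level, j] * s_partial[j - level - 1]
                        for j in range(level + 1, n))
            inc = (z[level] - R[level, level] * sym - upper) ** 2
            search(level - 1, [sym] + s_partial, dist2 + inc)

    search(n - 1, [], 0.0)
    return np.array(best["s"]), best["nodes"]

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 4))                 # random quasi-static channel
s_true = rng.choice([-1, 1], 4)
y = H @ s_true + 0.1 * rng.standard_normal(4)
s_hat, nodes = sphere_decode(H, y, radius=3.0)
print("correct:", np.array_equal(s_hat, s_true), "nodes visited:", nodes)
```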

  3. SCDAP/RELAP5/MOD 3.1 code manual: User's guide and input manual. Volume 3

    Energy Technology Data Exchange (ETDEWEB)

    Coryell, E.W.; Johnsen, E.C. [eds.]; Allison, C.M. [and others]

    1995-06-01

    The SCDAP/RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during a severe accident. The code models the coupled behavior of the reactor coolant system, the core, and fission product release during a severe accident transient, as well as large- and small-break loss-of-coolant accidents and operational transients such as anticipated transient without SCRAM, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater conditioning systems. This volume provides guidelines to code users based upon lessons learned during the developmental assessment process. A description of problem control and the installation process is included. Appendix A contains the description of the input requirements.

  4. Random Access with Physical-layer Network Coding

    NARCIS (Netherlands)

    Goseling, J.; Gastpar, M.C.; Weber, J.H.

    2013-01-01

    Leveraging recent progress in compute-and-forward we propose an approach to random access that is based on physical-layer network coding: When packets collide, it is possible to recover a linear combination of the packets at the receiver. Over many rounds of transmission, the receiver can thus
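
    A toy version of the idea over GF(2), much simpler than actual compute-and-forward over lattices (packet sizes and round mechanics are invented): each collision round yields one linear combination of the packets, and once the combinations are independent, the receiver inverts the system to recover every packet.

```python
import numpy as np

rng = np.random.default_rng(3)
packets = rng.integers(0, 2, (3, 8), dtype=np.uint8)   # three 8-bit packets

def gf2_solve(A, B):
    """Gauss-Jordan elimination over GF(2); returns X with A X = B (mod 2),
    or None if A is singular (the combinations were not independent)."""
    A, B = A.copy(), B.copy()
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return None
        A[[col, pivot]], B[[col, pivot]] = A[[pivot, col]], B[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]      # XOR row operations keep us in GF(2)
                B[r] ^= B[col]
    return B

# Each collision round, PNC lets the receiver decode the XOR (GF(2) sum) of
# the packets that collided; retry until three independent rounds are seen.
while True:
    A = rng.integers(0, 2, (3, 3), dtype=np.uint8)     # who transmitted each round
    X = gf2_solve(A, (A @ packets) % 2)
    if X is not None:
        break

print("packets recovered:", np.array_equal(X, packets))
```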

  5. Hadronic models for cosmic ray physics: the FLUKA code

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, G. [INFN, Sezione di Milano and Universita di Milano, Dip. di Fisica, via Celoria 16, I-20133 Milano (Italy); Cerutti, F. [CERN, CH-1211 Geneva 23 (Switzerland); Empl, A. [University of Houston, Department of Physics, TX 77204-5005 Houston, US (United States); Fasso, A. [SLAC, Stanford, CA 94025, US (United States); Ferrari, A. [CERN, CH-1211 Geneva 23 (Switzerland); Gadioli, E. [INFN, Sezione di Milano and Universita di Milano, Dip. di Fisica, via Celoria 16, I-20133 Milano (Italy); Garzelli, M.V. [INFN, Sezione di Milano and Universita di Milano, Dip. di Fisica, via Celoria 16, I-20133 Milano (Italy)], E-mail: garzelli@mi.infn.it; Muraro, S. [INFN, Sezione di Milano and Universita di Milano, Dip. di Fisica, via Celoria 16, I-20133 Milano (Italy); Pelliccioni, M. [INFN, via Fermi 40, I-00044 Frascati (Rome) (Italy); Pinsky, L.S. [University of Houston, Department of Physics, TX 77204-5005 Houston, US (United States); Ranft, J. [Siegen University, Fachbereich 7 - Physik, D-57068 Siegen (Germany); Roesler, S. [CERN, CH-1211 Geneva 23 (Switzerland); Sala, P.R. [INFN, Sezione di Milano and Universita di Milano, Dip. di Fisica, via Celoria 16, I-20133 Milano (Italy); Villari, R. [ENEA, via Fermi 45, I-00044 Frascati (Rome) (Italy)

    2008-01-15

    FLUKA is a general purpose Monte Carlo transport and interaction code used for fundamental physics and for a wide range of applications. These include cosmic ray physics (muons, neutrinos, extensive air showers, underground physics), both for basic research and applied studies in space and atmospheric flight dosimetry and radiation damage. A review of the hadronic models available in FLUKA and relevant for the description of cosmic ray air showers is presented in this paper. Recent improvements concerning these models are discussed. The FLUKA capabilities in the simulation of the formation and propagation of EM and hadronic showers in the terrestrial atmosphere are shown.

  6. Hadronic models for cosmic ray physics the FLUKA code

    CERN Document Server

    Battistoni, G; Gadioli, E; Muraro, S; Sala, P R; Fassò, A; Ferrari, A; Roesler, S; Cerutti, F; Ranft, J; Pinsky, L S; Empl, A; Pelliccioni, M; Villari, R

    2008-01-01

    FLUKA is a general purpose Monte Carlo transport and interaction code used for fundamental physics and for a wide range of applications. These include Cosmic Ray Physics (muons, neutrinos, EAS, underground physics), both for basic research and applied studies in space and atmospheric flight dosimetry and radiation damage. A review of the hadronic models available in FLUKA and relevant for the description of cosmic ray air showers is presented in this paper. Recent updates concerning these models are discussed. The FLUKA capabilities in the simulation of the formation and propagation of EM and hadronic showers in the Earth's atmosphere are shown.

  7. Code-mixing in signs and words in input to and output from children

    NARCIS (Netherlands)

    Baker, A.; Van den Bogaerde, B.; Plaza-Pust, C.; Morales-López, E.

    2008-01-01

    Drawing on a longitudinal data collection of six children (three hearing, three deaf) learning Dutch and Sign Language of the Netherlands (NGT) in deaf families, this chapter explores the amount and types of simultaneous mixing (code-blending) of signed and spoken language elements in the children’s

  8. Adaptive Neural Coding Dependent on the Time-Varying Statistics of the Somatic Input Current

    OpenAIRE

    Shin, Jonghan; Koch, Christof; Douglas, Rodney

    1999-01-01

    It is generally assumed that nerve cells optimize their performance to reflect the statistics of their input. Electronic circuit analogs of neurons require similar methods of self-optimization for stable and autonomous operation. We here describe and demonstrate a biologically plausible adaptive algorithm that enables a neuron to adapt the current threshold and the slope (or gain) of its current-frequency relationship to match the mean (or dc offset) and variance (or dynamic range or contrast...
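
    The adaptation rule summarized above lends itself to a compact illustration. The following Python sketch is not the authors' implementation: the rectified-linear current-to-frequency curve and the adaptation rate eta are assumptions made for illustration. It adapts the current threshold to the running mean (dc offset) of the input and the gain to the running variance (dynamic range), as the abstract describes.

        import numpy as np

        rng = np.random.default_rng(0)

        eta = 0.01                      # adaptation rate (assumed)
        mean_est, var_est = 0.0, 1.0    # running estimates of input statistics

        def rate(i_in, theta, gain):
            """Rectified-linear current-to-frequency curve (illustrative)."""
            return gain * max(i_in - theta, 0.0)

        for _ in range(10_000):
            i_in = 2.0 + 0.5 * rng.standard_normal()             # input with dc offset
            mean_est += eta * (i_in - mean_est)                  # track the mean
            var_est += eta * ((i_in - mean_est) ** 2 - var_est)  # track the variance
            theta = mean_est                                     # threshold follows the mean
            gain = 1.0 / np.sqrt(var_est)                        # gain follows the contrast
            f = rate(i_in, theta, gain)                          # adapted firing rate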

  9. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of
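
    To make the notion of a "strong sense verification benchmark" concrete, the following hedged Python sketch runs the archetypal verification check: compare a discretization against an exact solution on successively refined grids and report the observed order of convergence. The first-order forward difference here is a deliberately trivial stand-in for a simulation code, and the acceptance tolerance is an illustrative assumption.

        import numpy as np

        def forward_diff(f, x, h):
            """First-order forward difference, standing in for a simulation code."""
            return (f(x + h) - f(x)) / h

        hs = np.array([0.1, 0.05, 0.025, 0.0125])
        errors = np.array([abs(forward_diff(np.sin, 1.0, h) - np.cos(1.0)) for h in hs])

        # Observed order between successive refinements; an acceptance criterion
        # might require agreement with the theoretical order (1 for this scheme).
        p_obs = np.log(errors[:-1] / errors[1:]) / np.log(hs[:-1] / hs[1:])
        assert np.all(np.abs(p_obs - 1.0) < 0.1)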

  10. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)

    2011-07-15

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4 only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant and fuel behavior is calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set-up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D as well as mini-core and a full core steady-state calculation using FLICA4/DYN3D. (orig.)

  11. Enhanced verification test suite for physics simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.; Cotrell, David L.; Johnson, Bryan; Knupp, Patrick; Rider, William J.; Trucano, Timothy G.; Weirs, V. Gregory

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.

  12. A primer on physical-layer network coding

    CERN Document Server

    Liew, Soung Chang; Zhang, Shengli

    2015-01-01

    The concept of physical-layer network coding (PNC) was proposed in 2006 for application in wireless networks. Since then it has developed into a subfield of communications and networking with a wide following. This book is a primer on PNC. It is the outcome of a set of lecture notes for a course for beginning graduate students at The Chinese University of Hong Kong. The target audience is expected to have some prior background knowledge in communication theory and wireless communications, but not working knowledge at the research level. Indeed, a goal of this book/course is to allow the reader
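
    For orientation, the canonical PNC operation that the book builds on fits in a toy example. This hedged Python sketch is not from the book: two nodes transmit BPSK symbols simultaneously, and the relay maps the superimposed signal directly to the XOR of the two bits rather than decoding each bit separately. The noise level and decision threshold are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        bits_a = rng.integers(0, 2, 10)
        bits_b = rng.integers(0, 2, 10)

        x_a = 1 - 2 * bits_a            # BPSK mapping: bit 0 -> +1, bit 1 -> -1
        x_b = 1 - 2 * bits_b
        y = x_a + x_b + 0.1 * rng.standard_normal(10)   # superposition at the relay

        # |y| near 2 means the bits agree (XOR = 0); |y| near 0 means they differ.
        xor_hat = (np.abs(y) < 1.0).astype(int)
        assert np.array_equal(xor_hat, bits_a ^ bits_b)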

  13. The cosmic code quantum physics as the language of nature

    CERN Document Server

    Pagels, Heinz R

    2012-01-01

    ""The Cosmic Code can be read by anyone. I heartily recommend it!"" - The New York Times Book Review""A reliable guide for the nonmathematical reader across the highest ridges of physical theory. Pagels is unfailingly lighthearted and confident."" - Scientific American""A sound, clear, vital work that deserves the attention of anyone who takes an interest in the relationship between material reality and the human mind."" - Science 82This is one of the most important books on quantum mechanics ever written for general readers. Heinz Pagels, an eminent physicist and science writer, discusses and

  14. Channel estimation for physical layer network coding systems

    CERN Document Server

    Gao, Feifei; Wang, Gongpu

    2014-01-01

    This SpringerBrief presents channel estimation strategies for physical-layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions that are different from the traditional point-to-point systems and unidirectional relay systems. The authors discuss the channel estimation strategies over typical fading scenarios, including frequency flat fading, frequency selective fading and time selective fading, as well as future research directions. Chapters explore the performa

  15. The Physical Models and Statistical Procedures Used in the RACER Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, T.M.; Brown, F.B.; Bischoff, F.G.; MacMillan, D.B.; Ellis, C.L.; Ward, J.T.; Ballinger, C.T.; Kelly, D.J.; Schindler, L.

    1999-07-01

    This report describes the MCV (Monte Carlo - Vectorized) Monte Carlo neutron transport code [Brown, 1982, 1983; Brown and Mendelson, 1984a]. MCV is a module in the RACER system of codes that is used for Monte Carlo reactor physics analysis. The MCV module contains all of the neutron transport and statistical analysis functions of the system, while other modules perform various input-related functions such as geometry description, material assignment, output edit specification, etc. MCV is very closely related to the 05R neutron Monte Carlo code [Irving et al., 1965] developed at Oak Ridge National Laboratory. 05R evolved into the 05RR module of the STEMB system, which was the forerunner of the RACER system. Much of the overall logic and physics treatment of 05RR has been retained and, indeed, the original verification of MCV was achieved through comparison with STEMB results. MCV has been designed to be very computationally efficient [Brown, 1981; Brown and Martin, 1984b; Brown, 1986]. It was originally programmed to make use of vector-computing architectures such as those of the CDC Cyber-205 and Cray X-MP. MCV was the first full-scale production Monte Carlo code to effectively utilize vector-processing capabilities. Subsequently, MCV was modified to utilize both distributed-memory [Sutton and Brown, 1994] and shared-memory parallelism. The code has been compiled and run on platforms ranging from 32-bit UNIX workstations to clusters of 64-bit vector-parallel supercomputers. The computational efficiency of the code allows the analyst to perform calculations using many more neutron histories than is practical with most other Monte Carlo codes, thereby yielding results with smaller statistical uncertainties. MCV also utilizes variance reduction techniques such as survival biasing, splitting, and rouletting to permit additional reduction in uncertainties. While a general-purpose neutron Monte Carlo code, MCV is optimized for reactor physics calculations. It has the
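
    The survival biasing and rouletting mentioned at the end of the abstract can be sketched compactly. The following Python fragment illustrates the general technique only - it is not MCV code, and the weight cutoff and survival weight are assumed values. A particle is deweighted at each collision instead of being absorbed, and low-weight particles play Russian roulette, which bounds the population without biasing the mean.

        import random

        def collision_weight(weight, p_absorb, w_cutoff=0.1, w_survive=0.5):
            """Return the particle weight after one collision, or None if killed."""
            weight *= (1.0 - p_absorb)      # survival biasing: deweight, never absorb
            if weight < w_cutoff:           # Russian roulette below the cutoff
                if random.random() < weight / w_survive:
                    return w_survive        # survivor's weight restored (unbiased)
                return None                 # particle terminated
            return weight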

  16. RDS - A systematic approach towards system thermal hydraulics input code development for a comprehensive deterministic safety analysis

    Science.gov (United States)

    Salim, Mohd Faiz; Roslan, Ridha; Ibrahim, Mohd Rizal Mamat @

    2014-02-01

    Deterministic Safety Analysis (DSA) is one of the mandatory requirements of the Nuclear Power Plant licensing process, with the aim of ensuring safety compliance with relevant regulatory acceptance criteria. DSA is a technique whereby a set of conservative deterministic rules and requirements are applied for the design and operation of facilities or activities. Computer codes are normally used to assist in performing all required analysis under DSA. To ensure a comprehensive analysis, the conduct of DSA should follow a systematic approach. One of the methodologies proposed is the Standardized and Consolidated Reference Experimental (and Calculated) Database (SCRED) developed by the University of Pisa. Based on this methodology, the use of a Reference Data Set (RDS) as a pre-requisite reference document for developing input nodalization was proposed. This paper describes the application of RDS with the purpose of assessing its effectiveness. Two RDS documents were developed for the LOBI-MOD2 Integral Test Facility and the associated Test A1-83. Data and information from various reports and drawings were referred to in preparing the RDS. The results showed that developing the RDS made it possible to consolidate all relevant information in one single document. This is beneficial as it enables preservation of information, promotes quality assurance, allows traceability, facilitates continuous improvement, promotes solving of contradictions and finally assists in developing the thermal hydraulic input regardless of which code is selected. However, some disadvantages were also recognized, such as the need for experience in making engineering judgments, language barriers in accessing foreign information and limitation of resources. Some possible improvements are suggested to overcome these challenges.

  17. Visual input enhancement via essay coding results in deaf learners' long-term retention of improved English grammatical knowledge.

    Science.gov (United States)

    Berent, Gerald P; Kelly, Ronald R; Schmitz, Kathryn L; Kenney, Patricia

    2009-01-01

    This study explored the efficacy of visual input enhancement, specifically essay enhancement, for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the results of delayed assessment documented students' significant retention of that improvement five and a half months beyond the instructional intervention period. Essay enhancement served to highlight, via a coding procedure, students' successful and unsuccessful production of discourse-required target grammatical structures. The procedure converted students' written communicative output into enhanced input for inducing noticing of grammatical form and, through essay revision, establishing form-meaning connections leading to acquisition. With its optimal design characteristics supported by theoretical and empirical research, essay enhancement is a highly effective methodology that can be easily implemented as primary or supplementary English instruction for deaf students. The results of this study hold great promise for facilitating deaf students' English language and literacy development and have broad implications for second-language research, teaching, and learning.

  18. Soft-input iterative channel estimation for bit-interleaved turbo-coded MIMO-OFDM systems

    Directory of Open Access Journals (Sweden)

    Olutayo O. Oyerinde

    2013-01-01

    Bit-interleaved coded modulation (BICM) is a robust multiplexing technique for achieving multiplexing gain in multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. However, in order to benefit maximally from the various advantages offered by BICM-based MIMO-OFDM systems, availability of accurate MIMO channel state information (CSI) at the receiver end of the system is essential. Without accurate MIMO CSI, accurate MIMO demapping and coherent detection and decoding of the transmitted message symbols at the system's receiver would be impossible. In such cases, the multiplexing gain offered by the BICM technique, as well as the higher data rate made possible by the MIMO-OFDM system, cannot be exploited in full. In this paper, we propose a soft-input-based decision-directed channel estimation scheme for the provision of MIMO CSI for coherent detection of signals in MIMO-OFDM systems. The proposed channel estimator works in iterative mode with a MIMO demapper and a turbo decoder, and is based on the fast data projection method (FDPM) and the variable step size normalised least mean square (VSSNLMS) algorithm. Simulation results of the proposed estimator based on the FDPM and VSSNLMS algorithms indicate better performance in comparison with the same estimator employing minimum mean square error criteria and deflated projection approximation subspace tracking algorithms for both slow- and fast-fading channel scenarios. The proposed estimator would be suitable for use at the receiver end of MIMO-OFDM wireless communication systems operating in either slow- or fast-fading channels.
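
    The VSSNLMS building block named above can be sketched in a few lines. This Python fragment is illustrative only: the error-driven step-size rule and all parameter values are assumptions, not the authors' exact algorithm. It tracks an unknown channel with a normalised LMS update whose step size is large while the estimate is poor and shrinks toward steady state.

        import numpy as np

        def vss_nlms(x, d, taps=4, mu_min=0.05, mu_max=1.0, alpha=0.95, eps=1e-8):
            """Track an unknown FIR channel from input x and observations d."""
            h = np.zeros(taps, dtype=complex)
            mu = mu_max
            for n in range(taps, len(x)):
                u = x[n - taps:n][::-1]          # regressor vector
                e = d[n] - np.vdot(h, u)         # a-priori estimation error
                # assumed variable-step-size rule: step follows the error magnitude
                mu = np.clip(alpha * mu + (1 - alpha) * abs(e), mu_min, mu_max)
                h += mu * np.conj(e) * u / (np.vdot(u, u).real + eps)
            return h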

  19. Study of nuclear physics input-parameters via high-resolution γ-ray spectroscopy

    Science.gov (United States)

    Scholz, Philipp; Heim, Felix; Mayer, Jan; Spieker, Mark; Zilges, Andreas

    2018-01-01

    For nucleosynthesis networks of isotopes heavier than those in the iron-peak region, reaction rates are often calculated within the Hauser-Feshbach statistical model. The accuracy of the predicted cross sections strongly depends on the uncertainties of the nuclear-physics input-parameters. These are nuclear-level densities, γ-ray strength functions, and particle+nucleus optical-model potentials. Here we report on the investigation of statistical properties of nuclei via radiative proton-capture reactions using high-resolution γ-ray spectroscopy.

  20. The Los Alamos suite of relativistic atomic physics codes

    Science.gov (United States)

    Fontes, C. J.; Zhang, H. L.; Abdallah, J., Jr.; Clark, R. E. H.; Kilcrease, D. P.; Colgan, J.; Cunningham, R. T.; Hakel, P.; Magee, N. H.; Sherrill, M. E.

    2015-07-01

    The Los Alamos suite of relativistic atomic physics codes is a robust, mature platform that has been used to model highly charged ions in a variety of ways. The suite includes capabilities for calculating data related to fundamental atomic structure, as well as the processes of photoexcitation, electron-impact excitation and ionization, photoionization and autoionization within a consistent framework. These data can be of a basic nature, such as cross sections and collision strengths, which are useful in making predictions that can be compared with experiments to test fundamental theories of highly charged ions, such as quantum electrodynamics. The suite can also be used to generate detailed models of energy levels and rate coefficients, and to apply them in the collisional-radiative modeling of plasmas over a wide range of conditions. Such modeling is useful, for example, in the interpretation of spectra generated by a variety of plasmas. In this work, we provide a brief overview of the capabilities within the Los Alamos relativistic suite along with some examples of its application to the modeling of highly charged ions.

  1. Physics models in the toroidal transport code PROCTR

    Energy Technology Data Exchange (ETDEWEB)

    Howe, H.C.

    1990-08-01

    The physics models that are contained in the toroidal transport code PROCTR are described in detail. Time- and space-dependent models are included for the plasma hydrogenic-ion, helium, and impurity densities, the electron and ion temperatures, the toroidal rotation velocity, and the toroidal current profile. Time- and depth-dependent models for the trapped and mobile hydrogenic particle concentrations in the wall and a time-dependent point model for the number of particles in the limiter are also included. Time-dependent models for neutral particle transport, neutral beam deposition and thermalization, fusion heating, impurity radiation, pellet injection, and the radial electric potential are included and recalculated periodically as the time-dependent models evolve. The plasma solution is obtained either in simple flux coordinates, where the radial shift of each elliptical, toroidal flux surface is included to maintain an approximate pressure equilibrium, or in general three-dimensional torsatron coordinates represented by series of helical harmonics. The detailed coupling of the plasma, scrape-off layer, limiter, and wall models through the neutral transport model makes PROCTR especially suited for modeling of recycling and particle control in toroidal plasmas. The model may also be used in a steady-state profile analysis mode for studying energy and particle balances starting with measured plasma profiles.

  2. Development of Teaching Materials for a Physical Chemistry Experiment Using the QR Code

    OpenAIRE

    吉村, 忠与志

    2008-01-01

    The development of teaching materials with the QR code was attempted in an educational environment using a mobile telephone. The QR code is not sufficiently utilized in education, and the current study is one of the first in the field. The QR code is encrypted. However, the QR code can be deciphered by mobile telephones, thus enabling the expression of text in a small space. Contents of "Physical Chemistry Experiment" which are available on the Internet are briefly summarized and simplified. T...

  3. A theory manual for multi-physics code coupling in LIME.

    Energy Technology Data Exchange (ETDEWEB)

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is when computer codes are currently available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with multiphysics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time dependent coupled systems. Example couplings are also demonstrated.
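
    The simplest coupling algorithm in the family described above is a fixed-point (Picard) iteration in which the single-physics codes exchange fields until a coupled residual stops changing. The following Python sketch shows the control flow only; the two solver functions are placeholders invented for illustration and do not reflect LIME's API.

        def solve_physics_a(b_field):
            """Placeholder single-physics solve (e.g., power given temperature)."""
            return 0.5 * b_field + 1.0

        def solve_physics_b(a_field):
            """Placeholder single-physics solve (e.g., temperature given power)."""
            return 0.25 * a_field

        a, b = 0.0, 0.0
        for it in range(100):
            a_new = solve_physics_a(b)        # code A consumes code B's field
            b_new = solve_physics_b(a_new)    # code B consumes the updated field
            if abs(a_new - a) + abs(b_new - b) < 1e-12:   # coupled residual check
                break
            a, b = a_new, b_new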

  4. Rainfall simulations on steep calanchi landscapes: Generating input parameters for physically based erosion modelling

    Science.gov (United States)

    Kaiser, Andreas; Buchholz, Arno; Neugirg, Fabian; Schindewolf, Marcus

    2016-04-01

    Calanchi landscapes in central Italy have been subject to geoscientific research for many years, not exclusively but especially for questions regarding soil erosion and land degradation. Seasonal dynamics play an important role for morphological processes within the Calanchi. As in most Mediterranean landscapes, the long, dry summers at the Val d'Orcia research site end with heavy rainfall events in autumn. The latter contribute most of the annual sediment output of the incised hollows and can cause damage to agricultural land and infrastructure. While research for understanding Calanco development is of high importance, the complex morphology and thus limited accessibility impede in situ work. To improve the understanding of morphodynamics without unnecessarily disturbing natural conditions, a remote sensing and erosion modelling approach was adopted in the presented work. UAV- and LiDAR-based very high resolution digital surface models were produced and served as input parameters for the raster-based, physically based soil erosion model EROSION3D. Additionally, data on infiltration, runoff generation and sediment detachment were generated with artificial rainfall simulations - the most invasive but unavoidable method. To virtually increase the 1 m plot length to around 20 m, the sediment-laden runoff water was reintroduced to the plot by a reflux system. Rather elaborate logistics were required to set up the simulator on strongly inclined slopes, to establish a sufficient water supply and to secure the simulator on the slope, but the experiments produced plausible results and valuable input data for modelling. The model results are then compared to the digital elevation models of difference derived from the repeated UAV and LiDAR campaigns. By simulating different rainfall and moisture scenarios and incorporating in situ measured weather data, runoff-induced processes can be distinguished from gravitational slides and rockfall.

  5. Increasing Physical Layer Security through Scrambled Codes and ARQ

    CERN Document Server

    Baldi, Marco; Chiaraluce, Franco

    2011-01-01

    We develop the proposal of non-systematic channel codes on the AWGN wire-tap channel. This coding technique, based on scrambling, achieves high transmission security with a small degradation of the eavesdropper's channel with respect to the legitimate receiver's channel. In this paper, we show that, by implementing scrambling and descrambling on blocks of concatenated frames, rather than on single frames, the channel degradation needed is further reduced. The use of concatenated scrambling allows security to be achieved even when both receivers experience the same channel quality. However, in this case, the introduction of an ARQ protocol with authentication is needed.

  6. Tech-X Corporation releases simulation code for solving complex problems in plasma physics : VORPAL code provides a robust environment for simulating plasma processes in high-energy physics, IC fabrications and material processing applications

    CERN Multimedia

    2005-01-01


  7. Visual Input Enhancement via Essay Coding Results in Deaf Learners' Long-Term Retention of Improved English Grammatical Knowledge

    Science.gov (United States)

    Berent, Gerald P.; Kelly, Ronald R.; Schmitz, Kathryn L.; Kenney, Patricia

    2009-01-01

    This study explored the efficacy of visual input enhancement, specifically "essay enhancement", for facilitating deaf college students' improvement in English grammatical knowledge. Results documented students' significant improvement immediately after a 10-week instructional intervention, a replication of recent research. Additionally, the…

  8. Encoded physics knowledge in checking codes for nuclear cross section libraries at Los Alamos

    Science.gov (United States)

    Parsons, D. Kent

    2017-09-01

    Checking procedures for processed nuclear data at Los Alamos are described. Both continuous energy and multi-group nuclear data are verified by locally developed checking codes, which use basic physics knowledge and common-sense rules. A list of nuclear data problems that have been identified with the help of these checking codes is also given.

  9. Data-driven input variable selection for rainfall-runoff modeling using binary-coded particle swarm optimization and Extreme Learning Machines

    Science.gov (United States)

    Taormina, Riccardo; Chau, Kwok-Wing

    2015-10-01

    Selecting an adequate set of inputs is a critical step for successful data-driven streamflow prediction. In this study, we present a novel approach for Input Variable Selection (IVS) that employs Binary-coded discrete Fully Informed Particle Swarm optimization (BFIPS) and Extreme Learning Machines (ELM) to develop fast and accurate IVS algorithms. A scheme is employed to encode the subset of selected inputs and ELM specifications into the binary particles, which are evolved using single objective and multi-objective BFIPS optimization (MBFIPS). The performances of these ELM-based methods are assessed using the evaluation criteria and the datasets included in the comprehensive IVS evaluation framework proposed by Galelli et al. (2014). From a comparison with 4 major IVS techniques used in their original study it emerges that the proposed methods compare very well in terms of selection accuracy. The best performers were found to be (1) a MBFIPS-ELM algorithm based on the concurrent minimization of an error function and the number of selected inputs, and (2) a BFIPS-ELM algorithm based on the minimization of a variant of the Akaike Information Criterion (AIC). The first technique is arguably the most accurate overall, and is able to reach an almost perfect specification of the optimal input subset for a partially synthetic rainfall-runoff experiment devised for the Kentucky River basin. In addition, MBFIPS-ELM allows for the determination of the relative importance of the selected inputs. On the other hand, the BFIPS-ELM is found to consistently reach high accuracy scores while being considerably faster. By extrapolating the results obtained on the IVS test-bed, it can be concluded that the proposed techniques are particularly suited for rainfall-runoff modeling applications characterized by high nonlinearity in the catchment dynamics.
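
    The AIC-based fitness used by the second variant can be written down directly. The following Python sketch assumes a generic AIC form inferred from the description above (it is not the authors' exact variant); it shows how prediction error and input-set size are collapsed into one score for the binary particle swarm to minimize.

        import numpy as np

        def aic_fitness(y_true, y_pred, n_inputs):
            """Smaller is better: trades prediction error against input-set size."""
            y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
            n = len(y_true)
            sse = float(np.sum((y_true - y_pred) ** 2))
            return n * np.log(sse / n) + 2 * n_inputs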

  10. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    Energy Technology Data Exchange (ETDEWEB)

    Noble, Charles R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Anderson, Andrew T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barton, Nathan R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bramwell, Jamie A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Capps, Arlie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chang, Michael H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chou, Jin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, David M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Diana, Emily R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, Timothy A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faux, Douglas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fisher, Aaron C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heinz, Ines [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kanarska, Yuliya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Khairallah, Saad A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Liu, Benjamin T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Margraf, Jon D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nichols, Albert L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Puso, Michael A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reus, James F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, Peter B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shestakov, Alek I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Taller, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tsuji, Paul H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Christopher A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Jeremy L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-23

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  11. Statistical physics inspired energy-efficient coded-modulation for optical communications.

    Science.gov (United States)

    Djordjevic, Ivan B; Xu, Lei; Wang, Ting

    2012-04-15

    Because Shannon's entropy can be obtained by Stirling's approximation of the thermodynamic entropy, statistical physics energy minimization methods are directly applicable to signal constellation design. We demonstrate that statistical physics inspired energy-efficient (EE) signal constellation designs, in combination with large-girth low-density parity-check (LDPC) codes, significantly outperform conventional LDPC-coded polarization-division multiplexed quadrature amplitude modulation schemes. We also describe an EE signal constellation design algorithm. Finally, we propose the discrete-time implementation of a D-dimensional transceiver and the corresponding EE polarization-division multiplexed system. © 2012 Optical Society of America
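
    The energy-minimization idea can be illustrated with a toy design loop. In this hedged Python sketch, the pairwise repulsive potential, step size and power normalization are assumptions rather than the authors' algorithm: constellation points repel one another under gradient descent while an average-power constraint is enforced, spreading the points evenly in signal space.

        import numpy as np

        rng = np.random.default_rng(2)
        M, lr = 16, 0.01
        pts = rng.standard_normal((M, 2))               # 2-D constellation points

        for _ in range(2000):
            diff = pts[:, None, :] - pts[None, :, :]    # pairwise displacements
            dist2 = np.sum(diff ** 2, axis=-1) + np.eye(M)   # guard the diagonal
            force = np.sum(diff / dist2[..., None] ** 2, axis=1)  # repulsive "force"
            pts += lr * force
            pts /= np.sqrt(np.mean(np.sum(pts ** 2, axis=1)))    # unit average power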

  12. Physical activity and influenza-coded outpatient visits, a population-based cohort study.

    Directory of Open Access Journals (Sweden)

    Eric Siu

    Although the benefits of physical activity in preventing chronic medical conditions are well established, its impacts on infectious diseases, and seasonal influenza in particular, are less clearly defined. We examined the association between physical activity and influenza-coded outpatient visits, as a proxy for influenza infection. We conducted a cohort study of Ontario respondents to Statistics Canada's population health surveys over 12 influenza seasons. We assessed physical activity levels through survey responses, and influenza-coded physician office and emergency department visits through physician billing claims. We used logistic regression to estimate the risk of influenza-coded outpatient visits during influenza seasons. The cohort comprised 114,364 survey respondents who contributed 357,466 person-influenza seasons of observation. Compared to inactive individuals, moderately active (OR 0.83; 95% CI 0.74-0.94) and active (OR 0.87; 95% CI 0.77-0.98) individuals were less likely to experience an influenza-coded visit. Stratifying by age, the protective effect of physical activity remained significant for individuals <65 years (active: OR 0.86; 95% CI 0.75-0.98; moderately active: OR 0.85; 95% CI 0.74-0.97) but not for individuals ≥65 years. The main limitations of this study were the use of influenza-coded outpatient visits rather than laboratory-confirmed influenza as the outcome measure, the reliance on self-report for assessing physical activity and various covariates, and the observational study design. Moderate to high amounts of physical activity may be associated with reduced risk of influenza for individuals <65 years. Future research should use laboratory-confirmed influenza outcomes to confirm the association between physical activity and influenza.
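
    The shape of the analysis described above can be sketched as follows. The data, probabilities and variable names in this Python fragment are invented for illustration, and statsmodels is an assumed dependency; the fragment fits a logistic regression of a visit indicator on activity-level indicators and reads off odds ratios relative to the inactive reference group.

        import numpy as np
        import statsmodels.api as sm   # assumed dependency

        rng = np.random.default_rng(3)
        activity = rng.integers(0, 3, 5000)    # 0 inactive, 1 moderate, 2 active
        X = sm.add_constant(
            np.column_stack([activity == 1, activity == 2]).astype(float))
        p = np.choose(activity, [0.10, 0.083, 0.087])   # illustrative visit risks
        y = (rng.random(5000) < p).astype(float)

        fit = sm.Logit(y, X).fit(disp=0)
        odds_ratios = np.exp(fit.params[1:])   # ORs vs. the inactive reference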

  13. Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes

    Science.gov (United States)

    Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana

    2013-03-01

    The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entrains the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity especially need to be maintained at all times. Adherence to authenticity using technical means proves to be a particular challenge at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data, for integration in the conventional documentation of the collection of items of evidence (bagging and tagging process). Using the exemplarily chosen QR-code as a particular implementation of a bar code and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. [1]. We use the example of digital dactyloscopy as a forensic discipline where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace, to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator focuses on the readability and verification of its contents. We can read the bar code with various devices despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
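
    The bagging-and-tagging link described above fits in a few lines. In this hedged Python sketch the field names are invented, the qrcode package is an assumed dependency, and an HMAC stands in for the asymmetric digital signature a real chain-of-custody system would use: the trace meta-data are serialized, an authenticity tag is appended, and the result is encoded as a QR code for the evidence bag.

        import hashlib, hmac, json
        import qrcode   # assumed third-party dependency

        meta = {"trace_id": "2013-0042", "item": "latent fingerprint",
                "acquired_by": "examiner-07",
                "sha256_of_scan": "<digest of the digitized trace>"}
        payload = json.dumps(meta, sort_keys=True)

        # Authenticity tag over the meta-data (HMAC as a stand-in signature).
        tag = hmac.new(b"case-key", payload.encode(), hashlib.sha256).hexdigest()

        qrcode.make(payload + "|" + tag).save("trace_2013-0042_qr.png")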

  14. Development of Advanced Suite of Deterministic Codes for VHTR Physics Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, J. Y.; Lee, K. H. (and others)

    2007-07-15

    An advanced suite of deterministic codes for VHTR physics analysis has been developed for detailed analysis of current and advanced reactor designs as part of a US-ROK collaborative I-NERI project. These code suites include the conventional 2-step procedure, in which few-group constants are generated by a transport lattice calculation and the reactor physics analysis is performed by a 3-dimensional diffusion calculation, and a whole-core transport code that can model local heterogeneities directly at the core level. Particular modeling issues in physics analysis of the gas-cooled VHTRs were resolved, including the double heterogeneity of the coated fuel particles, neutron streaming in the coolant channels, a strong core-reflector interaction, and large spectrum shifts due to changes of the surrounding environment, temperature and burnup. The geometry handling capability of the DeCART code was extended to deal with the hexagonal fuel elements of the VHTR core. The developed code suites were validated and verified by comparing the computational results with those of Monte Carlo calculations for the benchmark problems.

  15. Glutamatergic input is coded by spike frequency at the soma and proximal dendrite of AII amacrine cells in the mouse retina.

    Science.gov (United States)

    Tamalu, Fuminobu; Watanabe, Shu-Ichi

    2007-06-01

    In the mammalian retina, AII amacrine cells play a crucial role in scotopic vision. They transfer rod signals from rod bipolar cells to the cone circuit, and divide these signals into the ON and OFF pathways at the discrete synaptic layers. AII amacrine cells have been reported to generate tetrodotoxin (TTX)-sensitive repetitive spikes of small amplitude. To investigate the properties of the spikes, we performed whole-cell patch-clamping of AII amacrine cells in mouse retinal slices. The spike frequency increased in proportion to the concentration of glutamate puffer-applied to the arboreal dendrite and to the intensity of the depolarizing current injection. The spike activity was suppressed by L-2-amino-4-phosphonobutyric acid, a glutamate analogue that hyperpolarizes rod bipolar cells, puffer-applied to the outer plexiform layer. Therefore, it is most likely that the spike frequency generated by AII amacrine cells is dependent on the excitatory glutamatergic input from rod bipolar cells. Gap junction blockers reduced the range of intensity of input with which spike frequency varies. Application of TTX to the soma and the proximal dendrite of AII amacrine cells blocked the voltage-gated Na(+) current significantly more than application to the arboreal dendrite, indicating that the Na(+) channels are mainly localized in these regions. Our results suggest that the intensity of the glutamatergic input from rod bipolar cells is coded by the spike frequency at the soma and the proximal dendrite of AII amacrine cells, raising the possibility that the spikes could contribute to the OFF pathway to enhance release of neurotransmitter.
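
    The rate code suggested by these results can be illustrated with a toy model. The following Python sketch uses a generic leaky integrate-and-fire unit with arbitrary parameters - it is not a fitted AII cell model - to show how a steady input current, standing in for graded glutamatergic drive, maps onto spike frequency.

        def lif_rate(i_in, tau=0.02, r_m=1.0, v_th=1.0, dt=1e-4, t_sim=1.0):
            """Spike frequency of a leaky integrate-and-fire unit, constant input."""
            v, spikes = 0.0, 0
            for _ in range(int(t_sim / dt)):
                v += dt / tau * (-v + r_m * i_in)   # membrane integration
                if v >= v_th:                       # threshold crossing -> spike
                    spikes += 1
                    v = 0.0                         # reset
            return spikes / t_sim

        rates = [lif_rate(i) for i in (1.2, 1.5, 2.0)]   # frequency rises with input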

  16. Applications of FLUKA Monte Carlo code for nuclear and accelerator physics

    CERN Document Server

    Battistoni, Giuseppe; Brugger, Markus; Campanella, Mauro; Carboni, Massimo; Empl, Anton; Fasso, Alberto; Gadioli, Ettore; Cerutti, Francesco; Ferrari, Alfredo; Ferrari, Anna; Lantz, Matthias; Mairani, Andrea; Margiotta, M; Morone, Christina; Muraro, Silvia; Parodi, Katerina; Patera, Vincenzo; Pelliccioni, Maurizio; Pinsky, Lawrence; Ranft, Johannes; Roesler, Stefan; Rollet, Sofia; Sala, Paola R; Santana, Mario; Sarchiapone, Lucia; Sioli, Maximiliano; Smirnov, George; Sommerer, Florian; Theis, Christian; Trovati, Stefania; Villari, R; Vincke, Heinz; Vincke, Helmut; Vlachoudis, Vasilis; Vollaire, Joachim; Zapp, Neil

    2011-01-01

    FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth's atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular it addresses such top...

  17. Physical and Numerical Simulation of Aerodynamics of Cyclone Heating Device with Distributed Gas Input

    Directory of Open Access Journals (Sweden)

    E. N. Saburov

    2010-01-01

    The paper presents results of physical and numerical simulation of the aerodynamics of a cyclone heating device. Calculation models of axial and radial flow motion at various outlet diameters, and also of the cyclone flow trajectory, have been developed in the paper. The paper considers and compares experimental and calculated distributions of the tangential and axial components of the full flow rate. The comparison of numerical and physical experimental results has revealed good prospects for using the CFX® 10.0 software package for simulation of the aerodynamics of cyclone heating devices and further improvement of methodologies for their aerodynamic calculation.

  18. Simplified models for new physics in vector boson scattering. Input for Snowmass 2013

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, Juergen [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kilian, Wolfgang; Sekulla, Marco [Siegen Univ. (Germany). Theoretische Physik I

    2013-07-15

    In this contribution to the Snowmass process 2013 we give a brief review of how new physics could enter the electroweak (EW) sector of the Standard Model (SM). This new physics, if it is directly accessible at low energies, can be parameterized by explicit resonances with certain quantum numbers. The extreme case is the decoupling limit, where those resonances are very heavy and leave only traces in the form of deviations in the SM couplings. Translations are given into higher-dimensional operators leading to such deviations. As long as such resonances are introduced without a UV-complete theory behind them, these models suffer from unitarity violation of perturbative scattering amplitudes. We show explicitly how theoretically sound descriptions can be achieved by using a unitarization prescription that allows a correct description of such a resonance without specifying a UV-complete model.

  19. The Hadronic Models for Cosmic Ray Physics: the FLUKA Code Solutions

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, G.; Garzelli, M.V.; Gadioli, E.; Muraro, S.; Sala, P.R.; Fasso, A.; Ferrari, A.; Roesler, S.; Cerutti, F.; Ranft, J.; Pinsky, L.S.; Empl, A.; Pelliccioni, M.; Villari, R.; /INFN, Milan /Milan U. /SLAC /CERN /Siegen U. /Houston U. /Frascati /ENEA, Frascati

    2007-01-31

    FLUKA is a general purpose Monte Carlo transport and interaction code used for fundamental physics and for a wide range of applications. These include Cosmic Ray Physics (muons, neutrinos, EAS, underground physics), both for basic research and applied studies in space and atmospheric flight dosimetry and radiation damage. A review of the hadronic models available in FLUKA and relevant for the description of cosmic ray air showers is presented in this paper. Recent updates concerning these models are discussed. The FLUKA capabilities in the simulation of the formation and propagation of EM and hadronic showers in the Earth's atmosphere are shown.

  20. Preliminary analyses for HTTR's start-up physics tests by Monte Carlo code MVP

    Energy Technology Data Exchange (ETDEWEB)

    Nojiri, Naoki [Science and Technology Agency, Tokyo (Japan); Nakano, Masaaki; Ando, Hiroei; Fujimoto, Nozomu; Takeuchi, Mitsuo; Fujisaki, Shingo; Yamashita, Kiyonobu

    1998-08-01

    Analyses of the start-up physics tests for the High Temperature Engineering Test Reactor (HTTR) have been carried out with the Monte Carlo code MVP, based on the continuous energy method. Heterogeneous core structures were modelled precisely, including the fuel compacts, fuel rods, coolant channels, burnable poisons, control rods, control rod insertion holes, reserved shutdown pellet insertion holes, and gaps between graphite blocks. Such precise modelling of the core structures is difficult with diffusion calculations. From the analytical results, the following were confirmed: the first criticality will be achieved with around 16 fuel columns loaded, and the reactivity at the first criticality can be controlled by only one control rod located at the center of the core, with the other fifteen control rods fully withdrawn. The excess reactivity, reactor shutdown margin and control rod criticality positions have been evaluated. These results were used for planning of the start-up physics tests. This report presents the analyses of the start-up physics tests for the HTTR by the MVP code. (author)

  1. Eigen-Direction Alignment Based Physical-Layer Network Coding for MIMO Two-Way Relay Channels

    CERN Document Server

    Yang, Tao; Ping, Li; Collings, Iain B; Yuan, Jinhong

    2012-01-01

    In this paper, we propose a novel communication strategy which incorporates physical-layer network coding (PNC) into multiple-input multiple-output (MIMO) two-way relay channels (TWRCs). At the heart of the proposed scheme lies a new key technique referred to as eigen-direction alignment (EDA) precoding. The EDA precoding efficiently aligns the two users' eigen-modes into the same directions. Based on that, we carry out multi-stream PNC over the aligned eigen-modes. We derive an achievable rate of the proposed EDA-PNC scheme, based on nested lattice codes, over a MIMO TWRC. Asymptotic analysis shows that the proposed EDA-PNC scheme approaches the capacity upper bound as the number of user antennas increases towards infinity. For a finite number of user antennas, we formulate the design criterion of the optimal EDA precoder and present solutions. Numerical results show that there is only a marginal gap between the achievable rate of the proposed EDA-PNC scheme and the capacity upper bound of the MIMO TWRC, in ...
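
    The alignment property at the heart of the scheme can be shown with a toy construction. The following Python sketch uses simple channel inversion invented for illustration (it is not the authors' EDA precoder): each user precodes so that both effective user-to-relay channels coincide with a common target, which is the condition multi-stream PNC relies on.

        import numpy as np

        rng = np.random.default_rng(5)
        nt = 2
        h_a = rng.standard_normal((nt, nt))    # user A -> relay channel
        h_b = rng.standard_normal((nt, nt))    # user B -> relay channel

        t_mat = np.eye(nt)                     # common target directions
        f_a = np.linalg.solve(h_a, t_mat)      # channel-inverting precoder, user A
        f_b = np.linalg.solve(h_b, t_mat)      # channel-inverting precoder, user B

        assert np.allclose(h_a @ f_a, h_b @ f_b)   # effective channels aligned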

  2. European Code against Cancer 4th Edition: Physical activity and cancer.

    Science.gov (United States)

    Leitzmann, Michael; Powers, Hilary; Anderson, Annie S; Scoccianti, Chiara; Berrino, Franco; Boutron-Ruault, Marie-Christine; Cecchini, Michele; Espina, Carolina; Key, Timothy J; Norat, Teresa; Wiseman, Martin; Romieu, Isabelle

    2015-12-01

    Physical activity is a complex, multidimensional behavior, the precise measurement of which is challenging in free-living individuals. Nonetheless, representative survey data show that 35% of the European adult population is physically inactive. Inadequate levels of physical activity are disconcerting given substantial epidemiologic evidence showing that physical activity is associated with decreased risks of colon, endometrial, and breast cancers. For example, insufficient physical activity levels are thought to cause 9% of breast cancer cases and 10% of colon cancer cases in Europe. By comparison, the evidence for a beneficial effect of physical activity is less consistent for cancers of the lung, pancreas, ovary, prostate, kidney, and stomach. The biologic pathways underlying the association between physical activity and cancer risk are incompletely defined, but potential etiologic pathways include insulin resistance, growth factors, adipocytokines, steroid hormones, and immune function. In recent years, sedentary behavior has emerged as a potential independent determinant of cancer risk. In cancer survivors, physical activity has shown positive effects on body composition, physical fitness, quality of life, anxiety, and self-esteem. Physical activity may also carry benefits regarding cancer survival, but more evidence linking increased physical activity to prolonged cancer survival is needed. Future studies using new technologies - such as accelerometers and e-tools - will contribute to improved assessments of physical activity. Such advancements in physical activity measurement will help clarify the relationship between physical activity and cancer risk and survival. Taking the overall existing evidence into account, the fourth edition of the European Code against Cancer recommends that people be physically active in everyday life and limit the time spent sitting. Copyright © 2015 International Agency for Research on Cancer. Published by Elsevier Ltd. All

  3. Physics Based Model for Cryogenic Chilldown and Loading. Part IV: Code Structure

    Science.gov (United States)

    Luchinsky, D. G.; Smelyanskiy, V. N.; Brown, B.

    2014-01-01

    This is the fourth report in a series of technical reports that describe the application of a separated two-phase flow model to the cryogenic loading operation. In this report we present the structure of the code. The code consists of five major modules: (1) geometry module; (2) solver; (3) material properties; (4) correlations; and finally (5) stability control module. The two key modules - solver and correlations - are further divided into a number of submodules. Most of the physics and knowledge databases related to the properties of cryogenic two-phase flow are included in the cryogenic correlations module. The functional form of those correlations is not well established and is a subject of extensive research. Multiple parametric forms for various correlations are currently available. Some of them are included in the correlations module, as will be described in detail in a separate technical report. Here we describe the overall structure of the code and focus on the details of the solver and stability control modules.

  4. User's guide and physics manual for the SCATPlus circuit code

    Energy Technology Data Exchange (ETDEWEB)

    Yapuncich, M.L.; Deninger, W.J.; Gribble, R.F.

    1994-05-09

    ScatPlus is a user-friendly circuit code and an expandable library of circuit models for electrical components and devices; it can be used to predict the transient behavior of electric circuits. The heart of ScatPlus is the transient circuit solver SCAT, written in 1986 by R.F. Gribble. This manual includes system requirements, the physics manual, the ScatPlus component library, a tutorial, the ScatPlus screen, menus and toolbar, and procedures.

  5. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  6. Performance Characteristics of HYDRA - a Multi-Physics simulation code from Lawrence Livermore National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Langer, Steven H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Karlin, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Marinak, Marty M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-01-09

    HYDRA is used to simulate a variety of experiments carried out at the National Ignition Facility (NIF) [4] and other high energy density physics facilities. HYDRA has packages to simulate radiation transfer, atomic physics, hydrodynamics, laser propagation, and a number of other physics effects. HYDRA has over one million lines of code and includes both MPI and thread-level (OpenMP and pthreads) parallelism. This paper measures the performance characteristics of HYDRA using hardware counters on an IBM BlueGene/Q system. We report key ratios such as bytes/instruction and memory bandwidth for several different physics packages. The total number of bytes read and written per time step is also reported. We show that none of the packages which use significant time are memory bandwidth limited on a Blue Gene/Q. HYDRA currently issues very few SIMD instructions. The pressure on memory bandwidth will increase if high levels of SIMD instructions can be achieved.
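
    The headline ratio is straightforward to form from raw counter values. The following Python sketch uses generic placeholder names, not actual BlueGene/Q event names.

        def bytes_per_instruction(mem_loads, mem_stores, line_bytes, instructions):
            """Memory traffic per unit of work, from raw hardware counters."""
            return (mem_loads + mem_stores) * line_bytes / instructions

        # e.g., a package moving one 64-byte line every ~20 instructions:
        ratio = bytes_per_instruction(2.0e9, 1.0e9, 64, 6.0e10)   # -> 3.2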

  7. QR-codes as a tool to increase physical activity level among school children during class hours

    DEFF Research Database (Denmark)

    Christensen, Jeanette Reffstrup; Kristensen, Allan; Bredahl, Thomas Viskum Gjelstrup

    Introduction: Danish students are no longer fulfilling recommendations for everyday physical activity. Since August 2014, Danish students in public schools have therefore been required to be physically active for 45 minutes a day during school hours. An experiment implementing QR-codes as a method to increase physical activity during school hours has been tried. The purpose of this study was to examine whether the use of QR-codes as a tool for teaching older classes in Danish public schools could increase the students' physical activity level during class hours. Methods: A before-after study was used to examine 12 students' physical activity levels, measured with pedometers over six lessons: three lessons of traditional teaching and three lessons where QR-codes were used to make orienteering in the school area...

  8. Carbon inputs from riparian vegetation limit oxidation of physically bound organic carbon via biochemical and thermodynamic processes

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B.; Tfaily, Malak M.; Crump, Alex R.; Goldman, Amy E.; Bramer, Lisa M.; Arntzen, Evan V.; Romero, Elvira B.; Resch, Charles T.; Kennedy, David W.; Stegen, James C.

    2017-11-20

    In light of increasing terrestrial carbon (C) transport across aquatic boundaries, the mechanisms governing organic carbon (OC) oxidation along terrestrial-aquatic interfaces are crucial to future climate predictions. Here, we investigate biochemistry, metabolic pathways, and thermodynamics corresponding to OC oxidation in the Columbia River corridor. We leverage natural vegetative differences to encompass variation in terrestrial C inputs. Our results suggest that decreases in terrestrial C deposition associated with diminished riparian vegetation induce oxidation of physically-bound (i.e., mineral and microbial) OC at terrestrial-aquatic interfaces. We also find that contrasting metabolic pathways oxidize OC in the presence and absence of vegetation and—in direct conflict with the concept of ‘priming’—that inputs of water-soluble and thermodynamically-favorable terrestrial OC protects bound-OC from oxidation. Based on our results, we propose a mechanistic conceptualization of OC oxidation along terrestrial-aquatic interfaces that can be used to model heterogeneous patterns of OC loss under changing land cover distributions.

  9. Briefing Book for the Zeuthen Workshop, v.2 Input received from the particle physics community, funding agencies, and other resources

    CERN Document Server

    CERN. Geneva. Council Strategy Group; Aleksan, Roy; Bertolucci, Sergio; Blondel, A; Cavalli-Sforza, M; Heuer, R D; Linde, Frank L; Mangano, Michelangelo L; Peach, Kenneth J; Rondio, Ewa; Webber, Bryan R

    2006-01-01

    On Jun 18th 2004, the CERN Council, upon the initiative of its President, Prof. Enzo Iarocci, established an ad hoc scientific advisory group (the Strategy Group), to produce a draft strategy for European particle physics, which is to be considered by a special meeting of the CERN Council, to be held in Lisbon on Jul 14th 2006. There are three volumes to the Briefing Book. This second volume contains input that the Preparatory Group has received. The structure of this volume of the Briefing Book is summarised here. In the following chapter we collect the documents received as input to the Strategy Group from individual scientists, collaborations, working groups, etc. Most of these documents were submitted before the Orsay Open Symposium, and contributed to the material presented by the Symposium speakers, and to the ensuing discussions. They are reproduced here unedited, and grouped by topic following the chapter subdivision of Briefing Book 1, Part 1. Chapter 3 presents contributions received from national s...

  10. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    Science.gov (United States)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Continuing research is needed to improve the predictive capabilities of fuel behaviour, both to reduce costs and to meet increasingly stringent safety requirements set by the regulator. Moreover, renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to advancing numerical modelling and computational tools for simulating the physical and chemical phenomena associated with nuclear fuel behaviour. In effect, this undertaking collects the experience and observations of a past generation of nuclear engineers and scientists in a form useful for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes, and a new equilibrium thermodynamic solver is being developed with this as its primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling the large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and licensing entanglements associated with code distribution. The software developed in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
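
    The approach sketched in this abstract is constrained Gibbs energy minimization. Purely as an illustration (a toy sketch, not the solver described above), the hypothetical Python fragment below minimizes the total Gibbs energy of an ideal-gas system subject to elemental mass balance; the species set, standard potentials, and ideal-mixing model are all assumptions made for the example.

    import numpy as np
    from scipy.optimize import minimize

    # Toy system: H2, O2, H2O ideal-gas mixture at fixed T (illustrative data).
    # Chemical potential model: mu_i = mu0_i + R*T*ln(x_i).
    R, T = 8.314, 1500.0
    mu0 = np.array([0.0, 0.0, -164e3])   # J/mol, assumed standard potentials
    A = np.array([[2, 0, 2],             # moles of H per mole of each species
                  [0, 2, 1]])            # moles of O per mole of each species
    b = np.array([2.0, 1.0])             # total moles of H and O in the system

    def gibbs(n):
        n = np.clip(n, 1e-12, None)      # keep the logarithms finite
        x = n / n.sum()
        return np.dot(n, mu0 + R * T * np.log(x))

    mass_balance = {"type": "eq", "fun": lambda n: A @ n - b}
    res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]),
                   bounds=[(1e-12, None)] * 3, constraints=[mass_balance])
    print(res.x)                         # equilibrium mole numbers (mostly H2O here)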

  11. The bidimensional neutron transport code TWOTRAN-GG. User's manual and input data, TWOTRAN-TRACA version

    Energy Technology Data Exchange (ETDEWEB)

    Ahnert, C.; Aragones, J. M.

    1981-07-01

    This is a user's manual for the neutron transport code TWOTRAN-TRACA, a version of the original TWOTRAN-GG code from the Los Alamos Laboratory with some modifications made at JEN. A detailed input data description is given, as well as the new modifications developed at JEN. (Author) 8 refs.

  12. Applications of FLUKA Monte Carlo Code for Nuclear and Accelerator Physics

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, Giuseppe; /INFN, Milan /Milan U.; Broggi, Francesco; /INFN, Milan /Milan U.; Brugger, Markus; /CERN; Campanella, Mauro; /INFN, Milan /Milan U.; Carboni, Massimo; /INFN, Legnaro; Empl, Anton; /Houston U.; Fasso, Alberto; /SLAC; Gadioli, Ettore; /INFN, Milan /Milan U.; Cerutti, Francesco; /CERN; Ferrari, Alfredo; /CERN; Ferrari, Anna; /Frascati; Lantz, Matthias; /Nishina Ctr., RIKEN; Mairani, Andrea; /INFN, Milan /Milan U.; Margiotta, M.; /INFN, Bologna /Bologna U.; Morone, Christina; /Rome U., Tor Vergata /INFN, Rome2; Muraro, Silvia; /INFN, Milan /Milan U.; Parodi, Katerina; /HITS, Heidelberg; Patera, Vincenzo; /Frascati; Pelliccioni, Maurizio; /Frascati; Pinsky, Lawrence; /Houston U.; Ranft, Johannes; /Siegen U.; and others

    2012-04-17

    FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular, it addresses such topics as accelerator-related applications.

  13. Applications of FLUKA Monte Carlo code for nuclear and accelerator physics

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, Giuseppe; Broggi, Francesco [INFN and University of Milano, Milano (Italy); Brugger, Markus [CERN, 1211 Geneva 23 (Switzerland); Campanella, Mauro [INFN and University of Milano, Milano (Italy); Carboni, Massimo [INFN, Legnaro (Italy); Empl, Anton [University of Houston, Houston (United States); Fasso, Alberto [SLAC, Stanford (United States); Gadioli, Ettore [INFN and University of Milano, Milano (Italy); Cerutti, Francesco; Ferrari, Alfredo [CERN, 1211 Geneva 23 (Switzerland); Ferrari, Anna [INFN, Frascati (Italy); Lantz, Matthias [Riken Laboratory (Japan); Mairani, Andrea [INFN and University of Milano, Milano (Italy); Margiotta, M. [INFN and University Bologna, Bologna (Italy); Morone, Cristina [INFN and University of Roma II, Roma (Italy); Muraro, Silvia [INFN and University of Milano, Milano (Italy); Parodi, Katia [HIT, Heidelberg (Germany); Patera, Vincenzo; Pelliccioni, Mauricio [INFN, Frascati (Italy); Pinsky, Larry [University of Houston, Houston (United States); and others

    2011-12-15

    FLUKA is a general purpose Monte Carlo code capable of handling all radiation components from thermal energies (for neutrons) or 1 keV (for all other particles) to cosmic ray energies and can be applied in many different fields. Presently the code is maintained on Linux. The validity of the physical models implemented in FLUKA has been benchmarked against a variety of experimental data over a wide energy range, from accelerator data to cosmic ray showers in the Earth atmosphere. FLUKA is widely used for studies related both to basic research and to applications in particle accelerators, radiation protection and dosimetry, including the specific issue of radiation damage in space missions, radiobiology (including radiotherapy) and cosmic ray calculations. After a short description of the main features that make FLUKA valuable for these topics, the present paper summarizes some of the recent applications of the FLUKA Monte Carlo code in nuclear as well as high-energy physics. In particular, it addresses such topics as accelerator-related applications.

  14. Making FLASH an Open Code for the Academic High-Energy Density Physics Community

    Science.gov (United States)

    Lamb, D. Q.; Couch, S. M.; Dubey, A.; Gopal, S.; Graziani, C.; Lee, D.; Weide, K.; Xia, G.

    2010-11-01

    High-energy density physics (HEDP) is an active and growing field of research. DOE has recently decided to make FLASH a code for the academic HEDP community. FLASH is a modular and extensible compressible spatially adaptive hydrodynamics code that incorporates capabilities for a broad range of physical processes, performs well on a wide range of existing advanced computer architectures, and has a broad user base. A rigorous software maintenance process allows the code to operate simultaneously in production and development modes. We summarize the work we are doing to add HEDP capabilities to FLASH. We are adding (1) Spitzer conductivity, (2) super time-stepping to handle the disparity between diffusion and advection time scales, and (3) a description of electrons, ions, and radiation (in the diffusion approximation) by 3 temperatures (3T) to both the hydrodynamics and the MHD solvers. We are also adding (4) ray tracing, (5) laser energy deposition, and (6) a multi-species equation of state incorporating ionization to the hydrodynamics solver; and (7) Hall MHD, and (8) the Biermann battery term to the MHD solver.

  15. Academic Training - The use of Monte Carlo radiation transport codes in radiation physics and dosimetry

    CERN Multimedia

    Françoise Benz

    2006-01-01

    2005-2006 ACADEMIC TRAINING PROGRAMME LECTURE SERIES 27, 28, 29 June 11:00-12:00 - TH Conference Room, bldg. 4 The use of Monte Carlo radiation transport codes in radiation physics and dosimetry F. Salvat Gavalda, Univ. de Barcelona, A. FERRARI, CERN-AB, M. SILARI, CERN-SC. Lecture 1. Transport and interaction of electromagnetic radiation. F. Salvat Gavalda, Univ. de Barcelona. Interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interaction models and multiple-scattering theories will be analyzed. Benchmark comparisons of simu...

  16. Development of 2D implicit particle simulation code for ohmic breakdown physics in a tokamak

    Science.gov (United States)

    Yoo, Min-Gu; Lee, Jeongwon; Kim, Young-Gi; Na, Yong-Su

    2017-12-01

    The physical mechanism of ohmic breakdown in a tokamak has not been clearly understood, owing to its complexity in physics and geometry, especially regarding the role of space charge in the plasma. We have developed a 2D implicit particle simulation code, BREAK, to study ohmic breakdown physics in a realistic, complicated situation, treating space charge and kinetic effects self-consistently. Ohmic breakdown phenomena span a broad range of spatio-temporal scales, from the picosecond scale of electron gyromotion to the millisecond scale of plasma transport. A typical explicit particle simulation method cannot reach the slow plasma transport phenomena of interest, because the explicit scheme restricts the time step to less than the electron gyration period. Hence, we adopt several physical and numerical models, such as a toroidally symmetric model and a direct-implicit method, to relax or remove these spatio-temporal restrictions. In addition, coalescence strategies are introduced to keep the number of numerical super-particles within acceptable ranges as the plasma density grows exponentially during the ohmic breakdown. The performance of BREAK has been verified with several test cases, so BREAK is expected to be applicable to the investigation of ohmic breakdown physics in the tokamak by treating 2-dimensional plasma physics in the RZ plane self-consistently.
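
    To make the time-scale disparity quoted above concrete, a back-of-the-envelope comparison of the electron gyro-period with a millisecond transport scale suffices; the magnetic field value below is an assumed, tokamak-scale number, not a parameter taken from the paper.

    import math

    # Electron gyro-period vs. transport time scale (illustrative numbers).
    e_charge = 1.602e-19            # C
    m_e      = 9.109e-31            # kg
    B        = 2.0                  # T, assumed toroidal field strength

    omega_ce = e_charge * B / m_e             # electron cyclotron frequency, rad/s
    t_gyro   = 2 * math.pi / omega_ce         # gyro-period, ~1.8e-11 s
    t_transport = 1e-3                        # transport scale of interest, s

    print(f"gyro-period ~ {t_gyro:.2e} s")
    print(f"explicit steps needed ~ {t_transport / t_gyro:.1e}")  # ~5.6e7 steps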

  17. Physical Habitat and Energy Inputs Determine Freshwater Invertebrate Communities in Reference and Cranberry Farm Impacted Northeastern Coastal Zone Streams

    Science.gov (United States)

    Lander, D. M. P.; McCanty, S. T.; Dimino, T. F.; Christian, A. D.

    2016-02-01

    The River Continuum Concept (RCC) predicts stream biological communities based on dominant physical structures and energy inputs into streams and predicts how these features and corresponding communities change along the stream continuum. Verifying RCC expectations is important for creating valid points of comparison during stream ecosystem evaluation. These reference expectations are critical for restoration projects, such as the restoration of decommissioned cranberry bogs. Our research compares the physical habitat and freshwater invertebrate functional feeding groups (FWIFFG) of reference, active cranberry farming, and cranberry farm passive restoration sites in Northeastern Coastal Zone streams of Massachusetts to the specific RCC FWIFFG predictions. We characterized stream physical habitat using a semi-quantitative habitat characterization protocol and sampled freshwater invertebrates using the U.S. EPA standard 20-jab multi-habitat protocol. We expected that stream habitat would be most homogeneous at active farming stations, intermediate at restoration stations, and most heterogeneous at reference stations. Furthermore, we expected reference stream FWIFFG would be accurately predicted by the RCC and that distributions at restoration and active sites would vary significantly. Habitat data were analyzed using a principal component analysis, and the results confirmed our predictions, showing more homogeneous habitat at the active and restoration stations and more heterogeneous habitat at the reference stations. The FWIFFG chi-squared analysis showed significant deviation from our specific RCC FWIFFG predictions. Because published FWIFFG distributions did not match our empirical values for a least disturbed Northeastern Coastal Zone headwater stream, using our data as a community structure template for current and future restoration projects is not recommended without further considerations.

  18. Calculation codes in radiation protection, radiation physics and dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    The objective of these scientific days was to review the state of calculation codes for radiation transport, source estimation, and radiation dose management, and to outline future perspectives. (N.C.)

  19. Effects of physics change in Monte Carlo code on electron pencil beam dose distributions

    Energy Technology Data Exchange (ETDEWEB)

    Toutaoui, Abdelkader, E-mail: toutaoui.aek@gmail.com [Departement de Physique Medicale, Centre de Recherche Nucleaire d' Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Khelassi-Toutaoui, Nadia, E-mail: nadiakhelassi@yahoo.fr [Departement de Physique Medicale, Centre de Recherche Nucleaire d' Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Brahimi, Zakia, E-mail: zsbrahimi@yahoo.fr [Departement de Physique Medicale, Centre de Recherche Nucleaire d' Alger, 2 Bd Frantz Fanon BP399 Alger RP, Algiers (Algeria); Chami, Ahmed Chafik, E-mail: chafik_chami@yahoo.fr [Laboratoire de Sciences Nucleaires, Faculte de Physique, Universite des Sciences et de la Technologie Houari Boumedienne, BP 32 El Alia, Bab Ezzouar, Algiers (Algeria)

    2012-01-15

    Pencil beam algorithms used in computerized electron beam dose planning are usually described using the small angle multiple scattering theory. Alternatively, the pencil beams can be generated by Monte Carlo simulation of electron transport. In a previous work, the 4th version of the Electron Gamma Shower (EGS) Monte Carlo code was used to obtain dose distributions from a monoenergetic electron pencil beam, with incident energy between 1 MeV and 50 MeV, interacting at the surface of a large cylindrical homogeneous water phantom. In 2000, a new version of this Monte Carlo code was made available by the National Research Council of Canada (NRC), which includes various improvements in its electron-transport algorithms. In the present work, we were interested in whether the new physics in this version produces pencil beam dose distributions very different from those calculated with the older one. The purpose of this study is to quantify as well as to understand these differences. We have compared a series of pencil beam dose distributions scored in cylindrical geometry, for electron energies between 1 MeV and 50 MeV, calculated with the two versions of the Electron Gamma Shower Monte Carlo code. Data calculated and compared include isodose distributions, radial dose distributions and fractions of energy deposition. Our results for radial dose distributions show agreement within 10% between doses calculated by the two codes for voxels close to the pencil beam central axis, while the differences are up to 30% at larger distances. For fractions of energy deposition, the results of EGS4 are in good agreement (within 2%) with those calculated by EGSnrc at shallow depths for all energies, whereas slightly worse agreement (15%) is observed at greater depths. These differences may be mainly attributed to the different multiple scattering for electron transport adopted in these two codes and the inclusion of spin effects, which produces an increase of the effective range of

  20. GENII-LIN: a Multipurpose Health Physics Code Built on GENII-1.485

    Directory of Open Access Journals (Sweden)

    Marco Sumini

    2006-10-01

    The aim of the GENII-LIN project was to develop an open source multipurpose health physics code running on the Linux platform, for calculating radiation dose and risk from radionuclides released to the environment. The general features of the GENII-LIN system include: (1) capabilities for calculating radiation dose both for acute and chronic releases, with options for annual dose, committed dose and accumulated dose; (2) capabilities for evaluating exposure pathways including direct exposure via water (swimming, boating, fishing), soil (buried and surface sources) and air (semi-infinite cloud and finite cloud models), inhalation pathways and ingestion pathways. The release scenarios considered are: acute release to air, from ground level or elevated sources, or to water; chronic release to air, from ground level or elevated sources, or to water; initial contamination of soil or surfaces. Keywords: radiation protection, Linux, health physics, risk analysis.

  1. ARES: A Parallel Discrete Ordinates Transport Code for Radiation Shielding Applications and Reactor Physics Analysis

    Directory of Open Access Journals (Sweden)

    Yixue Chen

    2017-01-01

    ARES is a multidimensional parallel discrete ordinates particle transport code with arbitrary order anisotropic scattering. It can be applied to a wide variety of radiation shielding calculations and reactor physics analysis. ARES uses state-of-the-art solution methods to obtain accurate solutions to the linear Boltzmann transport equation. A multigroup discretization is applied in energy. The code allows multiple spatial discretization schemes and solution methodologies. ARES currently provides diamond difference with or without linear-zero flux fixup, theta weighted, directional theta weighted, exponential directional weighted, and linear discontinuous finite element spatial differencing schemes. Discrete ordinates differencing in angle and spherical harmonics expansion of the scattering source are adopted. The first collision source method is used to eliminate or mitigate ray effects. Traditional source iteration and a Krylov iterative method preconditioned with diffusion synthetic acceleration are applied to solve the linear system of equations. ARES uses the Koch-Baker-Alcouffe parallel sweep algorithm to obtain high parallel efficiency. Verification and validation of the ARES transport code system have been performed against many benchmarks. In this paper, ARES solutions to the HBR-2 and C5G7 benchmarks are in excellent agreement with published results. Numerical results are presented which demonstrate the accuracy and efficiency of these methods.
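
    As a minimal illustration of the source-iteration scheme named above (a generic sketch, not ARES itself), the Python fragment below solves a one-group, isotropic-scattering, 1D slab problem with discrete ordinates in angle and diamond-difference in space; all problem data are made-up values.

    import numpy as np

    # 1D slab, one group, isotropic scattering, vacuum boundaries (toy data).
    I, N = 50, 4                                  # spatial cells, S4 ordinates
    h, sig_t, sig_s, q = 0.1, 1.0, 0.5, 1.0       # cell width, cross sections, source
    mu, w = np.polynomial.legendre.leggauss(N)    # ordinates and weights

    phi = np.zeros(I)
    for it in range(200):                         # source iteration
        src = 0.5 * (sig_s * phi + q)             # isotropic angular source density
        phi_new = np.zeros(I)
        for n in range(N):
            m = abs(mu[n])
            cells = range(I) if mu[n] > 0 else range(I - 1, -1, -1)
            psi_in = 0.0                          # vacuum incoming flux
            for i in cells:                       # transport sweep
                # diamond-difference cell balance
                psi_c = (src[i] + 2 * m / h * psi_in) / (sig_t + 2 * m / h)
                psi_in = 2 * psi_c - psi_in       # outgoing edge flux
                phi_new[i] += w[n] * psi_c
        change = np.max(np.abs(phi_new - phi))
        phi = phi_new
        if change < 1e-8:
            break
    print(f"converged in {it} iterations, midplane flux ~ {phi[I // 2]:.3f}")

    In a production code such as ARES, the slow convergence of this plain iteration in scattering-dominated problems is exactly what diffusion synthetic acceleration and Krylov methods address.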

  2. Methods, Devices and Computer Program Products Providing for Establishing a Model for Emulating a Physical Quantity Which Depends on at Least One Input Parameter, and Use Thereof

    DEFF Research Database (Denmark)

    2014-01-01

    The present invention proposes methods, devices and computer program products. To this extent, there is defined a set X including N distinct parameter values x_i for at least one input parameter x, N being an integer greater than or equal to 1, first measured the physical quantity Pm1 for each...... of the N distinct parameter values x_i of the at least one input parameter x, while keeping all other input parameters fixed, constructed a Vandermonde matrix VM using the set of N parameter values x_i of the at least one input parameter x, and computed the model W for emulating the physical quantity P...... based on the Vandermonde matrix and the first measured physical quantity according to the equation W = (VM^T * VM)^(-1) * VM^T * Pm1. The model is iteratively refined so as to obtain a desired emulation precision. The model can later be used to emulate the physical quantity based on input parameters or logs taken...
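
    The equation above is the normal-equations solution of a least-squares fit in the Vandermonde basis. A minimal Python sketch of that computation, with invented sample data, might look as follows.

    import numpy as np

    # Assumed example: N = 5 measured samples of P over one input parameter x.
    x   = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # distinct parameter values x_i
    Pm1 = np.array([1.1, 3.9, 9.2, 15.8, 25.1])    # first measured physical quantity

    VM = np.vander(x, 3, increasing=True)          # Vandermonde matrix: [1, x, x^2]
    W  = np.linalg.inv(VM.T @ VM) @ VM.T @ Pm1     # W = (VM^T * VM)^-1 * VM^T * Pm1

    print(W)          # model coefficients
    print(VM @ W)     # emulated physical quantity at the sample points

    In practice np.linalg.lstsq would be the numerically safer route; the explicit inverse is shown only to mirror the patent's equation.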

  3. On the Way to Future's High Energy Particle Physics Transport Code

    CERN Document Server

    Bíró, Gábor; Futó, Endre

    2015-01-01

    High Energy Physics (HEP) needs a huge amount of computing resources. In addition, data acquisition, transfer, and analysis require a well-developed infrastructure. In order to probe new physics it is necessary to raise the luminosity of the accelerator facilities, which produce more and more data in the experimental detectors. Both the testing of new theories and detector R&D are based on complex simulations. We have already reached the level where Monte Carlo detector simulation takes much more time than real data collection. This is why speeding up the calculations and simulations has become important in the HEP community. The Geant Vector Prototype (GeantV) project aims to optimize the most-used particle transport code by applying parallel computing and exploiting the capabilities of modern CPU and GPU architectures. With maximized concurrency at multiple levels, GeantV is intended to be the successor of the Geant4 particle transport code that has been used for two decades succe...

  4. Generation of initial geometries for the simulation of the physical system in the DualSPHysics code

    Energy Technology Data Exchange (ETDEWEB)

    Segura Q, E.

    2013-07-01

    Among the diverse research areas of the Instituto Nacional de Investigaciones Nucleares (ININ), one of great interest is the study and treatment of the collection and storage of radioactive waste. The ININ project on the simulation of pollutant diffusion in the soil through a porous medium (third stage) addresses the aspects inherent to this problem; one need in such a situation is to generate the initial geometry of the physical system. For the realization of the simulation, the smoothed particle hydrodynamics (SPH) method is implemented. This method runs in the DualSPHysics code, which has great versatility and the ability to simulate phenomena of any physical system in which hydrodynamic aspects combine. In order to simulate a physical system with the DualSPHysics code, the initial geometry of the system of interest must be preset and then included in the input file of the code. The simulation sets the initial geometry through regular geometric bodies positioned at different points in space, generated with a programming language (Fortran, C++, Java, etc.). This methodology will provide the basis for simulating more complex geometries in future positions and forms. (Author)
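
    As a toy illustration of generating such an initial geometry programmatically (here in Python rather than the Fortran/C++/Java mentioned above), the sketch below fills a rectangular box with a regular lattice of particle positions of the kind an SPH input file needs; the spacing and box extents are arbitrary assumptions.

    import numpy as np

    # Fill a box with equally spaced SPH particles (illustrative values).
    dp = 0.02                                # inter-particle spacing, m (assumed)
    (x0, x1), (y0, y1), (z0, z1) = (0.0, 1.0), (0.0, 0.5), (0.0, 0.3)

    xs = np.arange(x0, x1 + dp / 2, dp)
    ys = np.arange(y0, y1 + dp / 2, dp)
    zs = np.arange(z0, z1 + dp / 2, dp)
    X, Y, Z = np.meshgrid(xs, ys, zs, indexing="ij")
    particles = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

    print(particles.shape)    # (number of particles, 3): positions for the input file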

  5. Physical Processes and Applications of the Monte Carlo Radiative Energy Deposition (MRED) Code

    Science.gov (United States)

    Reed, Robert A.; Weller, Robert A.; Mendenhall, Marcus H.; Fleetwood, Daniel M.; Warren, Kevin M.; Sierawski, Brian D.; King, Michael P.; Schrimpf, Ronald D.; Auden, Elizabeth C.

    2015-08-01

    MRED is a Python-language scriptable computer application that simulates radiation transport. It is the computational engine for the on-line tool CRÈME-MC. MRED is based on C++ code from Geant4 with additional Fortran components to simulate electron transport and nuclear reactions with high precision. We provide a detailed description of the structure of MRED and the implementation of the simulation of physical processes used to simulate radiation effects in electronic devices and circuits. Extensive discussion and references are provided that illustrate the validation of models used to implement specific simulations of relevant physical processes. Several applications of MRED are summarized that demonstrate its ability to predict and describe basic physical phenomena associated with irradiation of electronic circuits and devices. These include effects from single particle radiation (including both direct ionization and indirect ionization effects), dose enhancement effects, and displacement damage effects. MRED simulations have also helped to identify new single event upset mechanisms not previously observed by experiment, but since confirmed, including upsets due to muons and energetic electrons.

  6. POPCORN: A comparison of binary population synthesis codes

    Science.gov (United States)

    Claeys, J. S. W.; Toonen, S.; Mennekens, N.

    2013-01-01

    We compare the results of three binary population synthesis codes to understand the differences in their results. As a first result, we find that when the assumptions are equalized the results are similar. The main differences arise from deviating physical input.

  7. DNA as a Binary Code: How the Physical Structure of Nucleotide Bases Carries Information

    Science.gov (United States)

    McCallister, Gary

    2005-01-01

    The DNA triplet code also functions as a binary code. Because double-ring compounds cannot bind to double-ring compounds in the DNA code, the sequence of bases classified simply as purines or pyrimidines can encode for smaller groups of possible amino acids. This is an intuitive approach to teaching the DNA code. (Contains 6 figures.)
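
    To make the purine/pyrimidine binary reading described above concrete, here is a small hypothetical Python sketch; the 1/0 assignment is an arbitrary illustrative convention, not one specified in the article.

    # Classify each base as purine (A, G -> 1) or pyrimidine (C, T -> 0).
    PURINES = {"A", "G"}

    def binary_view(seq: str) -> str:
        """Collapse a DNA sequence to its purine/pyrimidine bit pattern."""
        return "".join("1" if base in PURINES else "0" for base in seq.upper())

    print(binary_view("ATGGCT"))   # -> 101100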

  8. Application of regional physically-based landslide early warning model: tuning of the input parameters and validation of the results

    Science.gov (United States)

    D'Ambrosio, Michele; Tofani, Veronica; Rossi, Guglielmo; Salvatici, Teresa; Tacconi Stefanelli, Carlo; Rosi, Ascanio; Benedetta Masi, Elena; Pazzi, Veronica; Vannocci, Pietro; Catani, Filippo; Casagli, Nicola

    2017-04-01

    The Aosta Valley region is located in the North-West Alpine mountain chain. The geomorphology of the region is characterized by steep slopes and high climatic and altitude variability (ranging from 400 m a.s.l. of the Dora Baltea river floodplain to 4810 m a.s.l. of Mont Blanc). In the study area (zone B), located in the eastern part of Aosta Valley, heavy rainfall of about 800-900 mm per year is the main landslide trigger. These features lead to a high hydrogeological risk across the territory, as mass movements affect 70% of the municipality areas (mainly shallow rapid landslides and rock falls). An in-depth study of the geotechnical and hydrological properties of the hillslopes controlling shallow landslide formation was conducted, with the aim of improving the reliability of a deterministic model named HIRESS (HIgh REsolution Stability Simulator). In particular, two campaigns of on-site measurements and laboratory experiments were performed. The data obtained have been studied in order to assess the relationships existing among the different parameters and the bedrock lithology. The analyzed soils at the 12 survey points are mainly composed of sand and gravel, with highly variable contents of silt. The measured ranges of effective internal friction angle (from 25.6° to 34.3°) and effective cohesion (from 0 kPa to 9.3 kPa), and the median ks value (10^-6 m/s), are consistent with the average grain sizes (gravelly sand). The data collected contribute to generating the input maps of parameters for HIRESS (static data). Further static data are: volume weight, residual water content, porosity and grain size index. In order to improve the original formulation of the model, the contribution of root cohesion has also been taken into account, based on the vegetation map and literature values. HIRESS is a physically based distributed slope stability simulator for analyzing shallow landslide triggering conditions in real time and over large areas using parallel computational techniques. The software
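
    The abstract does not give HIRESS's internal formulation; purely as a generic stand-in, the sketch below evaluates the classical infinite-slope factor of safety from the kinds of geotechnical parameters measured in the campaigns (all numerical values are invented for illustration).

    import math

    # Classical infinite-slope stability model (generic, not HIRESS itself).
    c_eff   = 5.0e3                  # effective cohesion, Pa (measured range 0-9.3 kPa)
    phi_eff = math.radians(30.0)     # effective friction angle (range 25.6-34.3 deg)
    gamma   = 19.0e3                 # soil unit weight, N/m^3 (assumed)
    gamma_w = 9.81e3                 # water unit weight, N/m^3
    z       = 1.5                    # depth of the failure surface, m (assumed)
    m       = 0.8                    # saturated fraction of the soil column (assumed)
    beta    = math.radians(35.0)     # slope angle (assumed)

    num = c_eff + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi_eff)
    den = gamma * z * math.sin(beta) * math.cos(beta)
    print(f"factor of safety = {num / den:.2f}")   # FS < 1 flags potential instability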

  9. A Development of a System Enables Character Input and PC Operation via Voice for a Physically Disabled Person with a Speech Impediment

    Science.gov (United States)

    Tanioka, Toshimasa; Egashira, Hiroyuki; Takata, Mayumi; Okazaki, Yasuhisa; Watanabe, Kenzi; Kondo, Hiroki

    We have designed and implemented a PC operation support system, operated by voice, for a physically disabled person with a speech impediment. Voice operation is an effective method for a physically disabled person with involuntary movement of the limbs and head. We have applied a commercial speech recognition engine to develop our system for practical purposes. Adoption of a commercial engine reduces development cost and will help make our system useful to other people with speech impediments. We have customized the commercial speech recognition engine so that it can recognize the utterances of a person with a speech impediment. We have restricted the words that the recognition engine recognizes and separated target words from words with similar pronunciation to avoid misrecognition. The huge number of words registered in commercial speech recognition engines causes frequent misrecognition of a speech-impaired user's utterances, because those utterances are not clear and are unstable. We have solved this problem by narrowing the choice of input down to a small number of words and by registering their ambiguous pronunciations in addition to the original ones. To realize full character input and full PC operation with a small number of words, we have designed multiple input modes with categorized dictionaries and have introduced two-step input in each mode except numeral input, enabling correct operation with a small number of words. The system we have developed is at a practical level. The first author of this paper is physically disabled with a speech impediment. Using this system, he is able not only to input characters into the PC but also to operate the Windows system smoothly. He uses this system in his daily life, and this paper was written by him with it. At present, the speech recognition is customized to him. It is, however, possible to customize it for other users by changing words and registering new pronunciations according to each user's utterance.

  10. Three new physics realizations of the genetic code and the role of dark matter in bio-systems

    OpenAIRE

    Pitkänen, Matti

    2010-01-01

    TGD inspired quantum biology leads naturally to the idea that several realizations of the genetic code exist. Besides the realizations based on temporal patterns of electromagnetic fields, I have considered three different new physics realizations of the genetic code based on the notions of many-sheeted space-time, magnetic body, and the hierarchy of Planck constants explaining dark matter in the TGD framework. 1. The first realization - proposed in the model for DNA as topological quantum comp...

  11. Additions and improvements to the high energy density physics capabilities in the FLASH code

    Science.gov (United States)

    Lamb, D.; Bogale, A.; Feister, S.; Flocke, N.; Graziani, C.; Khiar, B.; Laune, J.; Tzeferacos, P.; Walker, C.; Weide, K.

    2017-10-01

    FLASH is an open-source, finite-volume Eulerian, spatially-adaptive radiation magnetohydrodynamics code that has the capabilities to treat a broad range of physical processes. FLASH performs well on a wide range of computer architectures and has a broad user base. Extensive high energy density physics (HEDP) capabilities exist in FLASH, which make it a powerful open toolset for the academic HEDP community. We summarize these capabilities, emphasizing recent additions and improvements. We describe several non-ideal MHD capabilities that are being added to FLASH, including the Hall and Nernst effects, implicit resistivity, and a circuit model, which will allow modeling of Z-pinch experiments. We showcase the ability of FLASH to simulate Thomson scattering polarimetry, which measures Faraday rotation due to the presence of magnetic fields, as well as proton radiography, proton self-emission, and Thomson scattering diagnostics. Finally, we describe several collaborations with the academic HEDP community in which FLASH simulations were used to design and interpret HEDP experiments. This work was supported in part at U. Chicago by DOE NNSA ASC through the Argonne Institute for Computing in Science under FWP 57789; DOE NNSA under NLUF Grant DE-NA0002724; DOE SC OFES Grant DE-SC0016566; and NSF Grant PHY-1619573.

  12. Reactor physics modelling of accident tolerant fuel for LWRs using ANSWERS codes

    Directory of Open Access Journals (Sweden)

    Lindley Benjamin A.

    2016-01-01

    The majority of nuclear reactors operating in the world today, and similarly the majority of near-term new build reactors, will be LWRs. These currently accommodate traditional Zr-clad UO2/PuO2 fuel designs, which have an excellent performance record for normal operation. However, the events at Fukushima culminated in significant hydrogen production and hydrogen explosions, resulting from high-temperature Zr/steam interaction following core uncovering for an extended period. These events have resulted in increased emphasis on developing more accident tolerant fuel (ATF) and cladding systems, particularly for current and near-term build LWRs. R&D programmes are underway in the US and elsewhere to develop ATFs, and the UK is engaging in these international programmes. Candidate advanced fuel materials include uranium nitride (UN) and uranium silicide (U3Si2). Candidate cladding materials include advanced stainless steel (FeCrAl) and silicon carbide. The UK has a long history in industrial fuel manufacture and fabrication for a wide range of reactor systems including LWRs. This is supported by a national infrastructure to perform experimental and theoretical R&D in fuel performance, fuel transient behaviour and reactor physics. In this paper, an analysis of the Integral Inherently Safe LWR design (I2S-LWR), a reactor concept developed by an international collaboration led by the Georgia Institute of Technology within a US DOE Nuclear Energy University Program (NEUP) Integrated Research Project (IRP), is considered. The analysis is performed using the ANSWERS reactor physics code WIMS and the EDF Energy core simulator PANTHER by researchers at the University of Cambridge. The I2S-LWR is an advanced 2850 MWt integral PWR with inherent safety features. In order to enhance the safety features, the baseline fuel and cladding materials chosen for the I2S-LWR design are U3Si2 and advanced stainless steel, respectively. In addition, the I2S-LWR design

  13. Supercomputing with TOUGH2 family codes for coupled multi-physics simulations of geologic carbon sequestration

    Science.gov (United States)

    Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.

    2015-12-01

    Powerful numerical codes that are capable of modeling complex coupled processes of physics and chemistry have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding for solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainties further increase the challenges in modeling work. Two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract the attention of geoscientists and reservoir engineers as a way to solve large, non-linear models at higher resolution within a reasonable time. However, to make this a useful tool, it is essential to tackle several practical obstacles to utilizing large numbers of processors effectively in general-purpose reservoir simulators. We have implemented massively-parallel versions of two TOUGH2 family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector- and scalar-type) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, the computational performance was measured for three simulations with multi-million grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to simulate the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale. The performance measurement confirmed that both simulators exhibit excellent

  14. Experimental benchmark of non-local-thermodynamic-equilibrium plasma atomic physics codes

    Energy Technology Data Exchange (ETDEWEB)

    Nagels-Silvert, V

    2004-09-15

    The main purpose of this thesis is to obtain experimental data for the testing and validation of atomic physics codes dealing with non-local-thermodynamic-equilibrium plasmas. The first part is dedicated to the spectroscopic study of xenon and krypton plasmas produced by a nanosecond laser pulse interacting with a gas jet. A Thomson scattering diagnostic allowed us to measure independently plasma parameters such as electron temperature, electron density and the average ionisation state. We obtained time-integrated spectra in the range between 5 and 10 angstroms. We identified about one hundred xenon lines between 8.6 and 9.6 angstroms via the use of the Relac code, and discovered previously unknown krypton lines between 5.2 and 7.5 angstroms. In a second experiment we extended the wavelength range to the XUV domain. The Averroes/Transpec code has been tested in the ranges from 9 to 15 angstroms and from 10 to 130 angstroms; the first range has been well reproduced, while the second requires a more complex data analysis. The second part is dedicated to the spectroscopic study of aluminium, selenium and samarium plasmas in the femtosecond regime. We designed a frequency-domain interferometry diagnostic that allowed us to measure the expansion velocity of the target's rear side. Via the use of an adequate isothermal model, this parameter gave us the plasma electron temperature. Spectra and emission times of various lines from the aluminium and selenium plasmas have been computed satisfactorily with the Averroes/Transpec code coupled with the Film and Multif hydrodynamic codes. (A.C.)

  15. V.S.O.P. (99/09) computer code system for reactor physics and fuel cycle simulation. Version 2009

    Energy Technology Data Exchange (ETDEWEB)

    Ruetten, H.J.; Haas, K.A.; Brockmann, H.; Ohlig, U.; Pohl, C.; Scherer, W.

    2010-07-15

    V.S.O.P. (99/09) represents the further development of V.S.O.P. (99/05). Compared to its precursor, the code system has again been improved in many details. The main motivation for this new code version was to update the basic nuclear libraries used by the code system; all cross section libraries involved in the code are now based on ENDF/B-VII. V.S.O.P. is a computer code system for the comprehensive numerical simulation of the physics of thermal reactors. It comprises the setup of the reactor and of the fuel element, processing of cross sections, neutron spectrum evaluation, neutron diffusion calculation in two or three dimensions, fuel burnup, fuel shuffling, reactor control, thermal hydraulics and fuel cycle costs. The thermal hydraulics part (steady state and time-dependent) is restricted to gas-cooled reactors and to two spatial dimensions. The code can simulate reactor operation from the initial core towards the equilibrium core. This latest code version was developed and tested under the WINDOWS-XP operating system. (orig.)

  16. Experimental physics and industrial control system (EPICS) input/output controller (IOC) application developer's guide

    Energy Technology Data Exchange (ETDEWEB)

    Kraimer, M.R.

    1994-05-01

    This document describes the core software that resides in an Input/Output Controller (IOC), one of the major components of EPICS. The plan of the book is: EPICS overview, IOC test facilities, general purpose features, database locking - scanning - and processing, static database access, runtime database access, database scanning, record and device support, device support library, IOC database configuration, IOC initialization, and database structures. Other than the first chapter this document describes only core IOC software. Thus it does not describe other EPICS tools such as the sequencer. It also does not describe Channel Access, a major IOC component.
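
    For flavor, a minimal EPICS database record of the kind the IOC database configuration chapter covers might look like the following; the record and PV names are invented for the example, while the record/field syntax is standard EPICS.

    record(calc, "demo:counter") {
        # Scanned periodically by the IOC; reads its own value back and increments it.
        field(DESC, "Free-running counter")
        field(SCAN, "1 second")
        field(INPA, "demo:counter")
        field(CALC, "A+1")
    }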

  17. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto, E-mail: alan@lmp.ufrj.br, E-mail: andressa@lmp.ufrj.br, E-mail: schirru@lmp.ufrj.br, E-mail: ffreire@eletronuclear.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Eletrobras Termonuclear S.A. (ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    The nuclear reload of a Pressurized Water Reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, when the nuclear power plant can no longer operate at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation at minimum cost, since on average one third of the fuel elements are replaced with new, purchased ones in every reload. This loading pattern must also satisfy constraints of symmetry and safety. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem, since around 1% of its configurations are local optima: a core of 121 fuel elements has approximately 10^13 combinations and 10^11 local optima. With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed for the optimization of the nuclear reactor reload problem. ACO has been used successfully in combinatorial problems, and it is expected that ACO-GENES will prove a robust optimization system, since in addition to ACO optimization it allows important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and will be compared with the actual results of the cycle. (author)
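
    The record does not describe ACO-GENES's internals; purely as a generic illustration of the ant colony machinery named above, the Python sketch below builds assembly-to-position assignments from a pheromone matrix and reinforces the best pattern found. The fitness function is a meaningless stand-in for the reactor physics evaluation.

    import random

    N = 8                                   # toy core: 8 positions, 8 assemblies
    tau = [[1.0] * N for _ in range(N)]     # pheromone[position][assembly]

    def build_pattern():
        """Assign one assembly to each position, biased by pheromone."""
        free, pattern = list(range(N)), []
        for pos in range(N):
            a = random.choices(free, weights=[tau[pos][f] for f in free])[0]
            free.remove(a)
            pattern.append(a)
        return pattern

    def fitness(pattern):                   # stand-in for a neutronics evaluation
        return -sum((pattern[i] - i) ** 2 for i in range(N))

    best, best_fit = None, float("-inf")
    for generation in range(200):
        for pattern in [build_pattern() for _ in range(20)]:
            f = fitness(pattern)
            if f > best_fit:
                best, best_fit = pattern, f
        for pos in range(N):                # evaporate, then reinforce the best
            for a in range(N):
                tau[pos][a] *= 0.95
            tau[pos][best[pos]] += 1.0

    print(best, best_fit)

    In ACO-GENES the stand-in fitness would presumably be replaced by quantities such as k-infinity and burnup evaluated through the GENES reactor physics code.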

  18. Earlier Physical Therapy Input Is Associated With a Reduced Length of Hospital Stay and Reduced Care Needs on Discharge in Frail Older Inpatients: An Observational Study.

    Science.gov (United States)

    Hartley, Peter J; Keevil, Victoria L; Alushi, Ledia; Charles, Rebecca L; Conroy, Eimear B; Costello, Patricia M; Dixon, Becki; Dolinska-Grzybek, Aida M; Vajda, Diana; Romero-Ortuno, Roman

    2017-06-16

    Pressures on hospital bed occupancy in the English National Health Service have focused attention on enhanced service delivery models and methods by which physical therapists might contribute to effective cost savings, while retaining a patient-centered approach. Earlier access to physical therapy may lead to better outcomes in frail older inpatients, but this has not been well studied in acute National Health Service hospitals. Our aim was to retrospectively study the associations between early physical therapy input and length of hospital stay (LOS), functional outcomes, and care needs on discharge. This was a retrospective observational study in a large tertiary university National Health Service hospital in the United Kingdom. We analyzed all admission episodes of people admitted to the department of medicine for the elderly wards for more than 3 months in 2016. Patients were categorized into 2 groups: those examined by a physical therapist within 24 hours of admission and those examined after 24 hours of admission. The outcome variables were as follows: LOS (days), functional measures on discharge (Elderly Mobility Scale and walking speed over 6 m), and the requirement of formal care on discharge. Characterization variables on admission were age, gender, existence of a formal care package, preadmission abode, the Clinical Frailty Scale, Charlson Comorbidity Index, the Emergency Department Modified Early Warning Score, C-reactive protein level on admission, and the 4-item version of the Abbreviated Mental Test. The association between the delay to physical therapy input and LOS before discharge home was evaluated using a Cox proportional hazards regression model. There were 1022 hospital episodes during the study period. We excluded 19 who were discharged without being examined by a physical therapist. Of the remaining 1003, 584 (58.2%) were examined within 24 hours of admission (early assessment) and 419 (41.8%) after 24 hours of admission (late assessment

  19. The observation of early childhood physical aggression: A psychometric study of the system for coding early physical aggression (SCEPA)

    NARCIS (Netherlands)

    Mesman, J.; Alink, L.R.A.; van Zeijl, J.; Stolk, M.N.; Bakermans-Kranenburg, M.J.; van IJzendoorn, M.H.; Juffer, F.; Koot, H.M.

    2008-01-01

    We investigated the reliability and (convergent and discriminant) validity of an observational measure of physical aggression in toddlers and preschoolers, originally developed by Keenan and Shaw [1994]. The observation instrument is based on a developmental definition of aggression. Physical

  20. The development and performance of a message-passing version of the PAGOSA shock-wave physics code

    Energy Technology Data Exchange (ETDEWEB)

    Gardner, D.R.; Vaughan, C.T.

    1997-10-01

    A message-passing version of the PAGOSA shock-wave physics code has been developed at Sandia National Laboratories for multiple-instruction, multiple-data stream (MIMD) computers. PAGOSA is an explicit, Eulerian code for modeling the three-dimensional, high-speed hydrodynamic flow of fluids and the dynamic deformation of solids under high rates of strain. It was originally developed at Los Alamos National Laboratory for the single-instruction, multiple-data (SIMD) Connection Machine parallel computers. The performance of Sandia's message-passing version of PAGOSA has been measured on two MIMD machines, the nCUBE 2 and the Intel Paragon XP/S. No special efforts were made to optimize the code for either machine. The measured scaled speedup (computational time for a single computational node divided by the computational time per node for fixed computational load) and grind time (computational time per cell per time step) show that the MIMD PAGOSA code scales linearly with the number of computational nodes used on a variety of problems, including the simulation of shaped-charge jets perforating an oil well casing. Scaled parallel efficiencies for MIMD PAGOSA are greater than 0.70 when the available memory per node is filled (or nearly filled) on hundreds to a thousand or more computational nodes on these two machines, indicating that the code scales very well. Thus good parallel performance can be achieved for complex and realistic applications when they are first implemented on MIMD parallel computers.
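
    The two metrics defined in parentheses above translate directly into a few lines of arithmetic; the timing numbers below are invented purely for illustration.

    # Scaled speedup, efficiency, and grind time as defined above (toy timings).
    t_one_node     = 120.0      # s per time step on 1 node, fixed per-node load
    t_per_node     = 150.0      # s per time step per node on P nodes, same load
    P              = 512        # number of computational nodes
    cells_per_node = 100000     # cells resident on each node

    scaled_efficiency = t_one_node / t_per_node       # 0.80 here; >0.70 reported
    scaled_speedup    = P * scaled_efficiency         # ideal would be P
    grind_time        = t_per_node / cells_per_node   # s per cell per step, per node

    print(scaled_efficiency, scaled_speedup, grind_time)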

  1. The revised APTA code of ethics for the physical therapist and standards of ethical conduct for the physical therapist assistant: theory, purpose, process, and significance.

    Science.gov (United States)

    Swisher, Laura Lee; Hiller, Peggy

    2010-05-01

    In June 2009, the House of Delegates (HOD) of the American Physical Therapy Association (APTA) passed a major revision of the APTA Code of Ethics for physical therapists and the Standards of Ethical Conduct for the Physical Therapist Assistant. The revised documents will be effective July 1, 2010. The purposes of this article are: (1) to provide a historical, professional, and theoretical context for this important revision; (2) to describe the 4-year revision process; (3) to examine major features of the documents; and (4) to discuss the significance of the revisions from the perspective of the maturation of physical therapy as a doctoring profession. PROCESS OF REVISION: The process for revision is delineated within the context of history and the Bylaws of APTA. FORMAT, STRUCTURE, AND CONTENT OF REVISED CORE ETHICS DOCUMENTS: The revised documents represent a significant change in format, level of detail, and scope of application. Previous APTA Codes of Ethics and Standards of Ethical Conduct for the Physical Therapist Assistant have delineated very broad general principles, with specific obligations spelled out in the Ethics and Judicial Committee's Guide for Professional Conduct and Guide for Conduct of the Physical Therapist Assistant. In contrast to the current documents, the revised documents address all 5 roles of the physical therapist, delineate ethical obligations in organizational and business contexts, and align with the tenets of Vision 2020. The significance of this revision is discussed within historical parameters, the implications for physical therapists and physical therapist assistants, the maturation of the profession, societal accountability and moral community, potential regulatory implications, and the inclusive and deliberative process of moral dialogue by which changes were developed, revised, and approved.

  2. Updates to the Generation of Physics Data Inputs for MAMMOTH Simulations of the Transient Reactor Test Facility - FY2016

    Energy Technology Data Exchange (ETDEWEB)

    Ortensi, Javier [Idaho National Lab. (INL), Idaho Falls, ID (United States); Baker, Benjamin Allen [Idaho National Lab. (INL), Idaho Falls, ID (United States); Schunert, Sebastian [Idaho National Lab. (INL), Idaho Falls, ID (United States); Wang, Yaqi [Idaho National Lab. (INL), Idaho Falls, ID (United States); Gleicher, Frederick Nathan [Idaho National Lab. (INL), Idaho Falls, ID (United States); DeHart, Mark David [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-06-01

    The INL is currently evolving the modeling and simulation (M&S) capability that will enable improved core operation as well as design and analysis of TREAT experiments. This M&S capability primarily uses MAMMOTH, a reactor physics application being developed under Multi-physics Object Oriented Simulation Environment (MOOSE) framework. MAMMOTH allows the coupling of a number of other MOOSE-based applications. This second year of work has been devoted to the generation of a deterministic reference solution for the full core, the preparation of anisotropic diffusion coefficients, the testing of the SPH equivalence method, and the improvement of the control rod modeling. In addition, this report includes the progress made in the modeling of the M8 core configuration and experiment vehicle since January of this year.

  3. Applying physical input-output tables of energy to estimate the energy ecological footprint (EEF) of Galicia (NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Carballo Penela, Adolfo; Sebastian Villasante, Carlos [Fisheries Economics and Natural Resources Research Group, Department of Applied Economics, University of Santiago de Compostela, Faculty of Economics and Business Administration, Avenida Burgo das Nacions s/n. CP. 15782 Santiago de Compostela, A Coruna Galicia (Spain)

    2008-03-15

    Nowadays, the achievement of sustainable development constitutes an important constraint in the design of energy policies, making it necessary to develop reliable indicators that provide helpful information about the use of energy resources. The ecological footprint (EF) provides a referential framework for the analysis of human demand for bioproductivity, including energy issues. In this article, the theoretical bases of footprint analysis are described, and input-output tables of energy are applied to estimate the Galician energy ecological footprint (EEF). It is concluded that the location of highly polluting industries in Galicia makes the Galician EEF considerably higher than that of more developed regions of Spain. The relevance of the external component of the Galician EEF is also studied. First, the available information seems to indicate that the energy embodied in traded manufactured goods would notably increase the Galician consumption of energy. Second, the inclusion of electricity trade in the EEF analysis is proposed, with an adjustment following the same philosophy as for manufactured goods. This adjustment would substantially reduce the Galician EEF, as the exported electricity widely exceeds the imported one. (author)
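
    As a generic illustration of the input-output machinery behind such an estimate (a toy example, not the authors' actual Galician tables), the sketch below computes the total direct-plus-indirect energy embodied in final demand via the Leontief inverse for an invented two-sector economy.

    import numpy as np

    # Toy two-sector energy input-output example (all numbers invented).
    A = np.array([[0.10, 0.25],        # technical coefficients matrix
                  [0.20, 0.05]])
    e = np.array([5.0, 1.2])           # direct energy intensity per unit output, MJ
    y = np.array([100.0, 80.0])        # final demand by sector

    L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse, (I - A)^-1
    embodied_energy = e @ L @ y        # total energy required by final demand
    print(f"embodied energy = {embodied_energy:.1f} MJ")

    An EEF estimate would then typically convert this embodied energy into the bioproductive area required to absorb the associated CO2 emissions.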

  4. Modeling the ionosphere-thermosphere response to a geomagnetic storm using physics-based magnetospheric energy input: OpenGGCM-CTIM results

    Directory of Open Access Journals (Sweden)

    Connor Hyunju Kim

    2016-01-01

    The magnetosphere is a major source of energy for the Earth's ionosphere and thermosphere (IT) system. Current IT models drive the upper atmosphere using empirically calculated magnetospheric energy input; thus, they do not sufficiently capture the storm-time dynamics, particularly at high latitudes. To improve the prediction capability of IT models, a physics-based magnetospheric input is necessary. Here, we use the Open Global General Circulation Model (OpenGGCM) coupled with the Coupled Thermosphere Ionosphere Model (CTIM). OpenGGCM calculates a three-dimensional global magnetosphere and a two-dimensional high-latitude ionosphere by solving resistive magnetohydrodynamic (MHD) equations with solar wind input. CTIM calculates a global thermosphere and a high-latitude ionosphere in three dimensions using realistic magnetospheric inputs from the OpenGGCM. We investigate whether the coupled model improves the storm-time IT responses by simulating a geomagnetic storm that was preceded by a strong solar wind pressure front on August 24, 2005. We compare the OpenGGCM-CTIM results with low-earth-orbit satellite observations and with the model results of Coupled Thermosphere-Ionosphere-Plasmasphere electrodynamics (CTIPe). CTIPe is an up-to-date version of CTIM that incorporates more IT dynamics, such as a low-latitude ionosphere and a plasmasphere, but uses empirical magnetospheric input. OpenGGCM-CTIM reproduces localized neutral density peaks at ~400 km altitude in the high-latitude dayside regions, in agreement with in situ observations during the pressure shock and the early phase of the storm. Although CTIPe is in some sense a much superior model to CTIM, it misses these localized enhancements. Unlike the CTIPe empirical input models, OpenGGCM-CTIM more faithfully produces localized increases of both auroral precipitation and ionospheric electric fields near the high-latitude dayside region after the pressure shock and after the storm onset

  5. The basics of coding a rehabilitation diagnosis in clinical practice for the physical therapist.

    OpenAIRE

    Romanyshyn M.J.

    2012-01-01

    Directions for the use of the International Classification of Functioning, Disability and Health in the clinical activity of the physical therapist are considered. Foundations for the construction of a rehabilitation diagnosis in clinical practice are shown. An analysis of publications of the World Health Organization and the World Confederation for Physical Therapy is presented. The necessity of using the above classification in the clinical practice of the physical therapist is...

  6. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
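
    To give a feel for what such educational codes implement, the following is a minimal, self-contained sketch of one 1D electrostatic PIC cycle (deposit charge, solve Poisson's equation, push particles). It is written in plain Python/NumPy, is not ZPIC code, and all parameters are illustrative.

        # Minimal 1D electrostatic PIC cycle (illustrative sketch, not ZPIC code).
        # Periodic grid, normalized units, nearest-grid-point weighting.
        import numpy as np

        nx, L, n_part, dt, steps = 64, 1.0, 10000, 0.05, 100
        dx = L / nx
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, L, n_part)            # particle positions
        v = rng.normal(0.0, 0.01, n_part)          # particle velocities

        for _ in range(steps):
            idx = (x / dx).astype(int) % nx
            # 1) charge deposition: electron density minus uniform ion background
            rho = np.bincount(idx, minlength=nx) * nx / n_part - 1.0
            # 2) field solve: Poisson equation in Fourier space, then E = -dphi/dx
            k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            k[0] = 1.0                             # placeholder; mean mode is zeroed below
            phi_k = np.fft.fft(rho) / k**2
            phi_k[0] = 0.0
            E = np.fft.ifft(-1j * k * phi_k).real
            # 3) particle push (electrons, q/m = -1) with periodic wrap-around
            v -= dt * E[idx]
            x = (x + v * dt) % L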

  7. The physics and technology basis entering European system code studies for DEMO

    NARCIS (Netherlands)

    Wenninger, R.; Kembleton, R.; Bachmann, C.; Biel, W.; Bolzonella, T.; Ciattaglia, S.; Cismondi, F.; Coleman, M.; Donne, A. J. H.; Eich, T.; Fable, E.; Federici, G.; Franke, T.; Lux, H.; Maviglia, F.; Meszaros, B.; Putterich, T.; Saarelma, S.; Snickers, A.; Villone, F.; Vincenzi, P.; Wolff, D.; Zohm, H.

    2017-01-01

    A large scale program to develop a conceptual design for a demonstration fusion power plant (DEMO) has been initiated in Europe. Central elements are the baseline design points, which are developed by system codes. The assessment of the credibility of these design points is often hampered by missing

  8. Test tasks for verification of program codes for calculation of neutron-physical characteristics of the BN series reactors

    Science.gov (United States)

    Tikhomirov, Georgy; Ternovikh, Mikhail; Smirnov, Anton; Saldikov, Ivan; Bahdanovich, Rynat; Gerasimov, Alexander

    2017-09-01

    A system of test tasks is presented, with the fast reactor BN-1200 with nitride fuel as the prototype. The system includes three tests based on different geometric models: a model of a fuel element in homogeneous and heterogeneous form, a model of a fuel assembly in height-heterogeneous and fully heterogeneous form, and a model of the active core of the BN-1200 reactor. Cross-verification of program codes was performed. The transition from simple geometry to more complex geometry makes it possible to identify the causes of discrepancies in the results during the early stage of cross-verification of the codes. This system of tests can be applied for the certification of engineering programs, against Monte Carlo calculations, for full-scale models of the reactor cores of the BN series. The developed tasks take into account the basic layout and structural features of the BN-1200 reactor. They are intended for the study of neutron-physical characteristics and for estimating the influence of the heterogeneous structure and of the diffusion approximation. The development of the system of test tasks made it possible to perform independent testing of programs for the calculation of neutron-physical characteristics: the engineering programs JARFR and TRIGEX, and the codes MCU, TDMCC, and MMK based on the Monte Carlo method.

  9. Relative validity of the pre-coded food diary used in the Danish National Survey of Diet and Physical Activity

    DEFF Research Database (Denmark)

    Knudsen, Vibeke Kildegaard; Gille, Maj-Britt; Nielsen, Trine Holmgaard

    2011-01-01

    Objective: To determine the relative validity of the pre-coded food diary applied in the Danish National Survey of Dietary Habits and Physical Activity. Design: A cross-over study among seventy-two adults (aged 20 to 69 years) recording diet by means of a pre-coded food diary over 4 d and a 4 d weighed food record. Intakes of foods and drinks were estimated, and nutrient intakes were calculated. Means and medians of intake were compared, and cross-classification of individuals according to intake was performed. To assess agreement between the two methods, Pearson and Spearman's correlation coefficients were calculated [...] and intakes of fruit, coffee and tea were lower in the weighed food record compared with the food diary. Intakes of nutrients were grossly the same with the two methods, except for protein, where a higher intake was recorded in the weighed record. In general, moderate agreement between the two methods...

  10. The use of Monte Carlo radiation transport codes in radiation physics and dosimetry

    CERN Multimedia

    CERN. Geneva; Ferrari, Alfredo; Silari, Marco

    2006-01-01

    Transport and interaction of electromagnetic radiation: interaction models and simulation schemes implemented in modern Monte Carlo codes for the simulation of coupled electron-photon transport will be briefly reviewed. In these codes, photon transport is simulated using the detailed scheme, i.e., interaction by interaction. Detailed simulation is easy to implement, and the reliability of the results is limited only by the accuracy of the adopted cross sections. Simulations of electron and positron transport are more difficult, because these particles undergo a large number of interactions in the course of their slowing down. Different schemes for simulating electron transport will be discussed. Condensed algorithms, which rely on multiple-scattering theories, are comparatively fast, but less accurate than mixed algorithms, in which hard interactions (with energy loss or angular deflection larger than certain cut-off values) are simulated individually. The reliability, and limitations, of electron-interacti...
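
    As an editorial illustration of the "detailed" scheme described above, the sketch below tracks a single photon interaction by interaction, sampling each free path from a total attenuation coefficient. The attenuation model and branching ratio are placeholder values, not cross-section data from any of the codes discussed.

        # "Detailed" photon transport sketch: every interaction is sampled
        # explicitly. Toy physics only; no real cross-section data is used.
        import numpy as np

        rng = np.random.default_rng(1)

        def total_mu(E):                  # toy total attenuation coefficient (1/cm)
            return 0.2 + 0.5 / E

        def track_photon(E=1.0, slab=10.0):
            """Follow one photon through a slab until absorption or escape."""
            x = 0.0
            while True:
                x += -np.log(rng.random()) / total_mu(E)   # sample free path
                if x > slab:
                    return "escaped", E
                if rng.random() < 0.3:                     # toy photoabsorption branch
                    return "absorbed", 0.0
                E *= rng.uniform(0.3, 0.9)                 # crude Compton energy loss

        print(track_photon())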

  11. First experience with particle-in-cell plasma physics code on ARM-based HPC systems

    Science.gov (United States)

    Sáez, Xavier; Soba, Alejandro; Sánchez, Edilberto; Mantsinen, Mervi; Mateo, Sergi; Cela, José M.; Castejón, Francisco

    2015-09-01

    In this work, we explore the feasibility of porting the Particle-in-Cell code EUTERPE to an ARM multi-core platform from the Mont-Blanc project. The prototype used is based on a Samsung Exynos 5 system-on-chip with an integrated GPU. It is the first such prototype that could be used for High-Performance Computing (HPC), since it supports double precision and parallel programming languages.

  12. Beamforming-Based Physical Layer Network Coding for Non-Regenerative Multi-Way Relaying

    Directory of Open Access Journals (Sweden)

    Klein Anja

    2010-01-01

    We propose non-regenerative multi-way relaying where a half-duplex multi-antenna relay station (RS) assists multiple single-antenna nodes to communicate with each other. The required number of communication phases is equal to the number of nodes, N. There is only one multiple-access phase, in which the nodes transmit simultaneously to the RS, followed by N - 1 broadcast (BC) phases. Two transmission methods for the BC phases are proposed, namely, multiplexing transmission and analog network coded transmission. The latter is a cooperation method between the RS and the nodes to manage the interference in the network. Assuming that perfect channel state information is available, the RS performs transceive beamforming on the received signals and transmits simultaneously to all nodes in each BC phase. We address the optimum transceive beamforming maximising the sum rate of non-regenerative multi-way relaying. Due to the nonconvexity of the optimization problem, we propose suboptimum but practical signal processing schemes. For multiplexing transmission, we propose suboptimum schemes based on zero forcing, minimising the mean square error, and maximising the signal to noise ratio. For analog network coded transmission, we propose suboptimum schemes based on matched filtering and semidefinite relaxation of maximising the minimum signal to noise ratio. It is shown that analog network coded transmission outperforms multiplexing transmission.
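
    A minimal numerical sketch of the zero-forcing flavour of transceive beamforming mentioned above: the relay inverts the uplink channel, permutes the streams so that each node receives another node's signal, and pre-inverts the reciprocal downlink channel. The dimensions, the single fixed permutation, and the omission of power normalization are simplifying assumptions, not the paper's exact scheme.

        # Zero-forcing transceive beamforming sketch for a multi-antenna relay
        # serving N single-antenna nodes (illustrative only).
        import numpy as np

        N, M = 3, 4                                # nodes, relay antennas
        rng = np.random.default_rng(2)
        H = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)

        P = np.roll(np.eye(N), 1, axis=0)          # node i receives node (i-1)'s stream
        G = np.linalg.pinv(H.T) @ P @ np.linalg.pinv(H)   # transceive filter at the RS

        # end-to-end channel (reciprocal downlink H^T): H^T G H should equal P
        print(np.round(H.T @ G @ H, 6).real)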

  13. "Friluftsliv": A Contribution to Equity and Democracy in Swedish Physical Education? An Analysis of Codes in Swedish Physical Education Curricula

    Science.gov (United States)

    Backman, Erik

    2011-01-01

    During the last decade, expanding research investigating the school subject Physical Education (PE) indicates a promotion of inequalities regarding which children benefit from PE teaching. Outdoor education and its Scandinavian equivalent "friluftsliv," is a part of the PE curriculum in many countries, and these practices have been…

  14. The effect of variations in the input physics on the cosmic distribution of metals predicted by simulations

    Science.gov (United States)

    Wiersma, Robert P. C.; Schaye, Joop; Theuns, Tom

    2011-07-01

    We investigate how a range of physical processes affect the cosmic metal distribution using a suite of cosmological, hydrodynamical simulations. Focusing on redshifts z = 0 and 2, we study the metallicities and metal mass fractions for stars as well as for the interstellar medium (ISM), and several more diffuse gas phases. We vary the cooling rates, star formation law, structure of the ISM, properties of galactic winds, feedback from active galactic nuclei (AGN), supernova type Ia time delays, reionization, stellar initial mass function and cosmology. In all models stars and the warm-hot intergalactic medium (WHIM) constitute the dominant repository of metals, while for z ≳ 2 the ISM is also important. In models with galactic winds, predictions for the metallicities of the various phases vary at the factor of 2 level and are broadly consistent with observations. The exception is the cold-warm intergalactic medium (IGM), whose metallicity varies at the order of magnitude level if the prescription for galactic winds is varied, even for a fixed wind energy per unit stellar mass formed, and falls far below the observed values if winds are not included. At the other extreme, the metallicity of the intracluster medium (ICM) is largely insensitive to the presence of galactic winds, indicating that its enrichment is regulated by other processes. The mean metallicities of stars (~Z⊙), the ICM (~10^-1 Z⊙), and the WHIM (~10^-1 Z⊙) evolve only slowly, while those of the cold halo gas and the IGM increase by more than an order of magnitude from z = 5 to 0. Higher velocity outflows are more efficient at transporting metals to low densities, but actually predict lower metallicities for the cold-warm IGM since the winds shock-heat the gas to high temperatures, thereby increasing the fraction of the metals residing in, but not the metallicity of, the WHIM. Besides galactic winds driven by feedback from star formation, the metal distribution is most sensitive to the inclusion of

  15. Carbon Inputs From Riparian Vegetation Limit Oxidation of Physically Bound Organic Carbon Via Biochemical and Thermodynamic Processes: OC Oxidation Processes Across Vegetation

    Energy Technology Data Exchange (ETDEWEB)

    Graham, Emily B. [Pacific Northwest National Laboratory, Richland WA USA; Tfaily, Malak M. [Environmental Molecular Sciences Laboratory, Richland WA USA; Crump, Alex R. [Pacific Northwest National Laboratory, Richland WA USA; Goldman, Amy E. [Pacific Northwest National Laboratory, Richland WA USA; Bramer, Lisa M. [Pacific Northwest National Laboratory, Richland WA USA; Arntzen, Evan [Pacific Northwest National Laboratory, Richland WA USA; Romero, Elvira [Pacific Northwest National Laboratory, Richland WA USA; Resch, C. Tom [Pacific Northwest National Laboratory, Richland WA USA; Kennedy, David W. [Pacific Northwest National Laboratory, Richland WA USA; Stegen, James C. [Pacific Northwest National Laboratory, Richland WA USA

    2017-12-01

    In light of increasing terrestrial carbon (C) transport across aquatic boundaries, the mechanisms governing organic carbon (OC) oxidation along terrestrial-aquatic interfaces are crucial to future climate predictions. Here, we investigate biochemistry, metabolic pathways, and thermodynamics corresponding to OC oxidation in the Columbia River corridor. We leverage natural vegetative differences to encompass variation in terrestrial C inputs. Our results suggest that decreases in terrestrial C deposition associated with diminished riparian vegetation induce oxidation of physically-bound (i.e., mineral and microbial) OC at terrestrial-aquatic interfaces. We also find that contrasting metabolic pathways oxidize OC in the presence and absence of vegetation and—in direct conflict with the concept of ‘priming’—that inputs of water-soluble and thermodynamically-favorable terrestrial OC protect bound-OC from oxidation. Based on our results, we propose a mechanistic conceptualization of OC oxidation along terrestrial-aquatic interfaces that can be used to model heterogeneous patterns of OC loss under changing land cover distributions.

  16. Comparing urban solid waste recycling from the viewpoint of urban metabolism based on physical input-output model: A case of Suzhou in China.

    Science.gov (United States)

    Liang, Sai; Zhang, Tianzhu

    2012-01-01

    Investigating the impacts of urban solid waste recycling on urban metabolism contributes to sustainable urban solid waste management and urban sustainability. Using a physical input-output model and scenario analysis, the urban metabolism of Suzhou in 2015 is predicted, and the impacts of four categories of solid waste recycling on urban metabolism are illustrated: scrap tire recycling, food waste recycling, fly ash recycling and sludge recycling. Sludge recycling has positive effects on reducing all material flows; thus, sludge recycling for biogas is regarded as an accepted method. Moreover, the technical levels of scrap tire recycling and food waste recycling should be improved to produce positive effects on reducing more material flows. Fly ash recycling for cement production has negative effects on reducing all material flows except solid wastes; thus, other fly ash utilization methods should be exploited. In addition, the utilization and treatment of secondary wastes from food waste recycling and sludge recycling should be addressed.

  17. A Youth Compendium of Physical Activities: Activity Codes and Metabolic Intensities

    Science.gov (United States)

    BUTTE, NANCY F.; WATSON, KATHLEEN B.; RIDLEY, KATE; ZAKERI, ISSA F.; MCMURRAY, ROBERT G.; PFEIFFER, KARIN A.; CROUTER, SCOTT E.; HERRMANN, STEPHEN D.; BASSETT, DAVID R.; LONG, ALEXANDER; BERHANE, ZEKARIAS; TROST, STEWART G.; AINSWORTH, BARBARA E.; BERRIGAN, DAVID; FULTON, JANET E.

    2018-01-01

    Purpose: A Youth Compendium of Physical Activities (Youth Compendium) was developed to estimate the energy costs of physical activities using data on youth only. Methods: On the basis of a literature search and pooled data of energy expenditure measurements in youth, the energy costs of 196 activities were compiled in 16 activity categories to form a Youth Compendium of Physical Activities. To estimate the intensity of each activity, measured oxygen consumption (VO2) was divided by basal metabolic rate (Schofield age-, sex-, and mass-specific equations) to produce a youth MET (METy). A mixed linear model was developed for each activity category to impute missing values for age ranges with no observations for a specific activity. Results: This Youth Compendium consists of METy values for 196 specific activities classified into 16 major categories for four age groups: 6–9, 10–12, 13–15, and 16–18 yr. METy values in this Youth Compendium were measured (51%) or imputed (49%) from youth data. Conclusion: This Youth Compendium of Physical Activities uses pediatric data exclusively, addresses the age dependency of METy, and imputes missing METy values, and thus represents an advancement in physical activity research and practice. This Youth Compendium will be a valuable resource for stakeholders interested in evaluating interventions, programs, and policies designed to assess and encourage physical activity in youth. PMID: 28938248

  18. A new nuclide transport model in soil in the GENII-LIN health physics code

    Science.gov (United States)

    Teodori, F.

    2017-11-01

    The nuclide soil transfer model originally included in the GENII-LIN software system was intended for residual contamination from long-term activities and from waste form degradation; short-lived nuclides were assumed to be absent or in equilibrium with their long-lived parents. Here we present an enhanced soil transport model in which the contributions of short-lived nuclides are correctly accounted for. This improvement extends the code's capability to handle accidental releases of contaminants to soil, by evaluating exposure from the very beginning of the contamination event, before the radioactive decay chain reaches equilibrium.
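
    The early-time contribution of a short-lived daughter, which the enhanced model accounts for, can be illustrated with the two-member Bateman solution; the sketch below uses made-up half-lives and an arbitrary inventory, not GENII-LIN data or input.

        # Two-member Bateman chain: the daughter's early-time activity is the
        # contribution an equilibrium-only model would miss. All values are
        # illustrative.
        import numpy as np

        lam_p = np.log(2) / 30.0        # parent half-life: 30 days
        lam_d = np.log(2) / 0.5         # short-lived daughter half-life: 12 hours
        N0 = 1.0e10                     # initial parent atoms

        t = np.linspace(0.0, 5.0, 11)   # days after the contamination event
        A_parent = lam_p * N0 * np.exp(-lam_p * t)
        A_daughter = (lam_d * lam_p * N0 / (lam_d - lam_p)
                      * (np.exp(-lam_p * t) - np.exp(-lam_d * t)))

        for ti, ap, ad in zip(t, A_parent, A_daughter):
            print(f"t={ti:4.1f} d  parent={ap:10.3e}  daughter={ad:10.3e}  (decays/day)")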

  19. Assessing reactor physics codes capabilities to simulate fast reactors on the example of the BN-600 benchmark

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Vladimir [Scientific and Engineering Centre for Nuclear and Radiation Safety (SES NRS), Moscow (Russian Federation); Bousquet, Jeremy [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) gGmbH, Garching (Germany)

    2016-11-15

    This work aims to assess the capabilities of reactor physics codes (initially validated for thermal reactors) to simulate sodium-cooled fast reactors. The BFS-62-3A critical experiment from the BN-600 Hybrid Core Benchmark Analyses was chosen for the investigation. Monte Carlo codes (KENO from SCALE and SERPENT 2.1.23) and the deterministic diffusion code DYN3D-MG are applied to calculate the neutronic parameters. It was found that the multiplication factor and reactivity effects calculated by KENO and SERPENT using the ENDF/B-VII.0 continuous-energy library are in good agreement with each other and with the measured benchmark values. Few-group macroscopic cross sections, required for DYN3D-MG, were prepared by applying different methods implemented in SCALE and SERPENT. The DYN3D-MG results for a simplified benchmark show reasonable agreement with the results from the Monte Carlo calculations and the measured values. These results justify the use of DYN3D-MG in coupled deterministic analysis of sodium-cooled fast reactors.

  20. Statistical identification of effective input variables. [SCREEN

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1982-09-01

    A statistical sensitivity analysis procedure has been developed for ranking the input data of large computer codes in the order of sensitivity-importance. The method is economical for large codes with many input variables, since it uses a relatively small number of computer runs. No prior judgemental elimination of input variables is needed. The screening method is based on stagewise correlation and extensive regression analysis of output values calculated with selected input value combinations. The regression process deals with multivariate nonlinear functions, and statistical tests are also available for identifying input variables that contribute to threshold effects, i.e., discontinuities in the output variables. A computer code SCREEN has been developed for implementing the screening techniques. Its efficiency has been demonstrated by several examples, and it has been applied to a fast reactor safety analysis code (Venus-II). However, the methods and the coding are general and not limited to such applications.
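
    A toy version of such a screening study, assuming nothing about the SCREEN implementation itself: run the (here fake) code on a small sample of input combinations, then rank inputs by stagewise correlation with the residual output.

        # Stagewise correlation/regression screening sketch. 'model' is a
        # stand-in for the expensive computer code, not SCREEN itself.
        import numpy as np

        rng = np.random.default_rng(3)
        n_runs, n_inputs = 40, 8
        X = rng.uniform(-1.0, 1.0, size=(n_runs, n_inputs))

        def model(u):                     # placeholder for the real computer code
            return 3.0 * u[0] - 1.5 * u[3] + 0.2 * u[5] ** 2 + 0.01 * rng.normal()

        y = np.array([model(u) for u in X])

        residual, ranking, chosen = y - y.mean(), [], set()
        for _ in range(n_inputs):
            corr = [0.0 if j in chosen else
                    abs(np.corrcoef(X[:, j], residual)[0, 1]) for j in range(n_inputs)]
            j = int(np.argmax(corr))
            ranking.append((j, round(corr[j], 3)))
            chosen.add(j)
            beta = np.polyfit(X[:, j], residual, 1)       # regress the input out
            residual = residual - np.polyval(beta, X[:, j])

        print("inputs ranked by sensitivity-importance:", ranking)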

  1. Synthesis of the scientific French speaking days on numerical codes in radiation protection, in radio physics and in dosimetry; Synthese des journees scientifiques francophones portant sur les codes de calculs en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Paul, D. [Universite de la Mediterranee, Faculte de Pharmacie, Service de Medecine et Sante au Travail, Lab. de Biogenotoxicologie et Mutagenese Environnement (EA 1784-IFR PMSE 112), 13 - Marseille (France); Makovicka, L. [Centre National de la Recherche Scientifique (CNRS), RMA/CREST/FEMTO-ST, UMR 6174, 25 - Montbeliard (France); Ricard, M. [Institut Gustave Roussy, Service de Physique, 94 - Villejuif (France)

    2005-03-01

    This paper provides a synthesis of the French-speaking scientific days on numerical codes in radiation protection, radiophysics and dosimetry, co-organized on 2-3 October 2003 in Sochaux by the SFRP, SFPM and FIRAM societies. Its objective is to establish the scientific balance sheet of this international event and to summarize current trends in the development and use of numerical codes in radiation protection, radiophysics and dosimetry. (author)

  2. Processing experimental data and analysis of simulation codes from Nuclear Physics using distributed and parallel computing

    CERN Document Server

    Niculescu, Mihai; Hristov, Peter

    In this thesis we tried to show the impact of new technologies on scientific work in the large field of heavy ion physics and, as a case study, we present the implementation of the event plane method on a highly parallel technology: the graphics processor. By the end of the thesis, a comparison of the analysis results with the elliptic flow published by ALICE is made. In Chapter 1 we present the computing needs of the heavy ion physics experiment ALICE and show the current state of software and technologies. The new technologies, which have been available for some time and are presented in Chapter 2, offer new performance capabilities and have generated a trend in preparing for the new wave of technologies and software, which most indicators show will dominate the future. This has not been disregarded by the scientific community and, in consequence, Section 2.2 shows the rising interest of the High Energy Physics community in the new technologies. A real case study was needed to better understand how the new technologies can be applied in HEP and aniso...

  3. Physics of the Dayside Magnetosphere: New Results From a Hybrid Kinetic Code

    Science.gov (United States)

    Siebeck, D. G.; Omidi, N.

    2007-01-01

    We use a global hybrid code kinetic model to demonstrate how kinetic processes at the bow shock and within the foreshock can dramatically modify the solar wind just before its interaction with the magnetosphere. During periods of steady radial interplanetary magnetic field (IMF) orientation, the foreshock fills with a diffuse population of suprathermal ions. The ions generate cavities marked by enhanced temperatures, depressed densities, and diminished magnetic field strengths that convect antisunward into the bow shock with the solar wind flow. Tangential discontinuities marked by inward-pointing electric fields and normals transverse to the Sun-Earth line generate hot flow anomalies marked by hot tenuous plasmas bounded by outward propagating shocks. When the motional electric field in the magnetosheath points inward towards the Earth, a solitary bow shock appears. For typical IMF orientations, the solitary shocks should appear at poorly sampled high latitudes, but for strongly northward or southward IMF orientations the solitary shocks should appear on the flanks of the magnetosphere. Although quasi-perpendicular, solitary shocks should be marked by turbulent magnetosheath flows, often directed towards the Sun-Earth line, and abrupt spike-like enhancements in the density and magnetic field strength at the shock. Finally, we show how flux transfer events generated between parallel subsolar reconnection lines are destroyed upon encountering the magnetopause at latitudes above the cusp.

  4. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software, with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
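
    The write-input / run / read-output pattern described above can be sketched in a few lines of Python (in place of an actual DLL); the executable name and file formats are hypothetical stand-ins for whatever the external application expects.

        # Generic coupling pattern: write an input file, run the external code,
        # read the outputs back. Names and formats are illustrative only.
        import subprocess

        def run_external(inputs, exe="./external_code", infile="run.inp", outfile="run.out"):
            with open(infile, "w") as f:                  # 1) create the input file
                f.writelines(f"{v}\n" for v in inputs)
            subprocess.run([exe, infile], check=True)     # 2) run the external code
            with open(outfile) as f:                      # 3) read the outputs back
                return [float(line) for line in f]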

  5. Quasi-optical converters for high-power gyrotrons: a brief review of physical models, numerical methods and computer codes

    Energy Technology Data Exchange (ETDEWEB)

    Sabchevski, S [Institute of Electronics, Bulgarian Academy of Sciences, BG-1784 Sofia (Bulgaria); Zhelyazkov, I [Faculty of Physics, Sofia University, BG-1164 Sofia (Bulgaria); Benova, E [Faculty of Physics, Sofia University, BG-1164 Sofia (Bulgaria); Atanassov, V [Institute of Electronics, Bulgarian Academy of Sciences, BG-1784 Sofia (Bulgaria); Dankov, P [Faculty of Physics, Sofia University, BG-1164 Sofia (Bulgaria); Thumm, M [Forschungszentrum Karlsruhe, Association EURATOM-FZK, Institute for Pulsed Power and Microwave Technology, D-76021 Karlsruhe (Germany); Arnold, A [University of Karlsruhe, Institute of High Frequency Techniques and Electronics, D-76128 Karlsruhe (Germany); Jin, J [Forschungszentrum Karlsruhe, Association EURATOM-FZK, Institute for Pulsed Power and Microwave Technology, D-76021 Karlsruhe (Germany); Rzesnicki, T [Forschungszentrum Karlsruhe, Association EURATOM-FZK, Institute for Pulsed Power and Microwave Technology, D-76021 Karlsruhe (Germany)

    2006-07-15

    Quasi-optical (QO) mode converters are used to transform electromagnetic waves of complex structure and polarization generated in gyrotron cavities into a linearly polarized, Gaussian-like beam suitable for transmission. The efficiency of this conversion, as well as the maintenance of a low level of diffraction losses, is crucial for the implementation of powerful gyrotrons as radiation sources for electron-cyclotron-resonance heating of fusion plasmas. The use of adequate physical models, efficient numerical schemes and up-to-date computer codes may provide the high accuracy necessary for the design and analysis of these devices. In this review, we briefly sketch the most commonly used QO converters, the mathematical basis on which they are treated, and the basic features of the numerical schemes used. Further on, we discuss the applicability of several commercially available and free software packages, their advantages and drawbacks, for solving QO-related problems.

  6. The Rosslyn Code: Can Physics Explain a 500-Year Old Melody Etched in the Walls of a Scottish Chapel?

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Chris [State Magazine

    2011-10-19

    For centuries, historians have puzzled over a series of 213 symbols carved into the stone of Scotland’s Rosslyn Chapel. (Disclaimer: You may recognize this chapel from The Da Vinci Code, but this is real and unrelated!) Several years ago, a composer and science enthusiast noticed that the symbols bore a striking similarity to Chladni patterns, the elegant images that form on a two-dimensional surface when it vibrates at certain frequencies. This man’s theory: a 500-year-old melody was inscribed in the chapel using the language of physics. But not everyone is convinced. Slate senior editor Chris Wilson travelled to Scotland to investigate the claims and listen to this mysterious melody, whatever it is. Come find out what he discovered, including images of the patterns and audio of the music they inspired.

  7. PHYSICS

    CERN Multimedia

    P. Sphicas

    The CPT project came to an end in December 2006 and its original scope is now shared among three new areas, namely Computing, Offline and Physics. In the physics area the basic change with respect to the previous system (where the PRS groups were charged with detector and physics object reconstruction and physics analysis) was the split of the detector PRS groups (the old ECAL-egamma, HCAL-jetMET, Tracker-btau and Muons) into two groups each: a Detector Performance Group (DPG) and a Physics Object Group. The DPGs are now led by the Commissioning and Run Coordinator deputy (Darin Acosta) and will appear in the corresponding column in CMS bulletins. On the physics side, the physics object groups are charged with the reconstruction of physics objects, the tuning of the simulation (in collaboration with the DPGs) to reproduce the data, the provision of code for the High-Level Trigger, the optimization of the algorithms involved for the different physics analyses (in collaboration with the analysis gr...

  8. Parallel Monte Carlo transport modeling in the context of a time-dependent, three-dimensional multi-physics code

    Energy Technology Data Exchange (ETDEWEB)

    Procassini, R.J. [Lawrence Livermore National lab., CA (United States)

    1997-12-31

    The fine-scale, multi-space resolution that is envisioned for accurate simulations of complex weapons systems in three spatial dimensions implies flop-rate and memory-storage requirements that will only be obtained in the near future through the use of parallel computational techniques. Since the Monte Carlo transport models in these simulations usually stress both of these computational resources, they are prime candidates for parallelization. The MONACO Monte Carlo transport package, which is currently under development at LLNL, will utilize two types of parallelism within the context of a multi-physics design code: decomposition of the spatial domain across processors (spatial parallelism) and distribution of particles in a given spatial subdomain across additional processors (particle parallelism). This implementation of the package will utilize explicit data communication between domains (message passing). Such a parallel implementation of a Monte Carlo transport model will result in non-deterministic communication patterns. The communication of particles between subdomains during a Monte Carlo time step may require a significant level of effort to achieve a high parallel efficiency.
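
    The spatial-parallelism idea can be sketched with explicit message passing via mpi4py: each rank owns a slice of the domain, advances its own particles, and hands off those that cross its right boundary. This is an editorial illustration of the general pattern, not MONACO code.

        # Spatial domain decomposition with particle hand-off between ranks.
        # Toy 1D domain [0, 1); run with e.g. `mpiexec -n 4 python this.py`.
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        x_lo, x_hi = rank / size, (rank + 1) / size       # this rank's subdomain

        particles = [{"x": x_lo + 0.4 / size, "v": 0.02}] # one toy particle per rank

        for step in range(5):
            for p in particles:                           # local transport step
                p["x"] += p["v"]
            leaving = [p for p in particles if p["x"] >= x_hi]
            particles = [p for p in particles if p["x"] < x_hi]
            incoming = comm.sendrecv(leaving, dest=(rank + 1) % size,
                                     source=(rank - 1) % size)
            for p in incoming:                            # periodic wrap at x = 1
                p["x"] %= 1.0
            particles.extend(incoming)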

  9. Arsenic in New Jersey Coastal Plain streams, sediments, and shallow groundwater: effects from different geologic sources and anthropogenic inputs on biogeochemical and physical mobilization processes

    Science.gov (United States)

    Barringer, Julia L.; Reilly, Pamela A.; Eberl, Dennis D.; Mumford, Adam C.; Benzel, William M.; Szabo, Zoltan; Shourds, Jennifer L.; Young, Lily Y.

    2013-01-01

    Arsenic (As) concentrations in New Jersey Coastal Plain streams generally exceed the State Surface Water Quality Standard (0.017 micrograms per liter (µg/L)), but concentrations seldom exceed 1 µg/L in filtered stream-water samples, regardless of geologic contributions or anthropogenic inputs. Nevertheless, As concentrations in unfiltered stream water indicate substantial variation because of particle inputs from soils and sediments with differing As contents, and because of discharges from groundwater of widely varying chemistry.

  10. GPU acceleration of the Locally Selfconsistent Multiple Scattering code for first principles calculation of the ground state and statistical physics of materials

    Science.gov (United States)

    Eisenbach, Markus; Larkin, Jeff; Lutjens, Justin; Rennich, Steven; Rogers, James H.

    2017-02-01

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. We reimplement the scattering matrix calculation for GPUs with a block matrix inversion algorithm that only uses accelerator memory. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code.

  11. EPICS: Input/Output Controller (IOC) application developer's guide. EPICS release 3.12 specific documentation [Experimental Physics and Industrial Control System]

    Energy Technology Data Exchange (ETDEWEB)

    Kraimer, M. R.

    2002-01-30

    This document describes the core software that resides in an Input/Output Controller (IOC), one of the major components of EPICS. EPICS consists of a set of software components and tools that Application Developers use to create a control system. The basic components are: OPI--Operator Interface. This is a UNIX based workstation which can run various EPICS tools; IOC--Input/Output Controller. This is a VME/VXI based chassis containing a processor, various I/O modules and VME modules that provide access to other I/O buses such as GPIB; and LAN--Local Area Network. This is the communication network which allows the IOCs and OPIs to communicate. EPICS provides a software component, Channel Access, which provides network transparent communication between a Channel Access client and an arbitrary number of Channel Access servers. This report is intended for anyone developing EPICS IOC databases and/or new record/device/driver support.
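
    For flavour, a Channel Access client interaction with process variables served by an IOC might look like the following; this assumes the third-party pyepics package and hypothetical PV names, neither of which comes from the report, and it requires a reachable IOC serving those PVs.

        # Hedged Channel Access client example (pyepics is an assumption; any
        # CA client would do). PV names are hypothetical.
        from epics import caget, caput

        value = caget("LINAC:TEMP1")      # network-transparent read from an IOC
        caput("LINAC:SETPOINT", 42.0)     # network-transparent write
        print(value)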

  12. Efficacy of physical activity interventions in post-natal populations: systematic review, meta-analysis and content coding of behaviour change techniques.

    Science.gov (United States)

    Gilinsky, Alyssa Sara; Dale, Hannah; Robinson, Clare; Hughes, Adrienne R; McInnes, Rhona; Lavallee, David

    2015-01-01

    This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Studies in healthy populations but not for weight management successfully changed physical activity. Interventions increased frequency but not volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'.

  13. Policy challenges in the fight against childhood obesity: low adherence in San Diego area schools to the California Education Code regulating physical education.

    Science.gov (United States)

    Consiglieri, G; Leon-Chi, L; Newfield, R S

    2013-01-01

    Assess the adherence to the Physical Education (PE) requirements per the California Education Code in San Diego area schools. Surveys were administered anonymously to children and adolescents capable of physical activity who were visiting a specialty clinic at Rady Children's Hospital San Diego. The main questions asked were gender, grade, PE classes per week, and time spent doing PE. A total of 324 surveys were completed; 36 charter-school students, who do not have to abide by the state code, were excluded. We report on 288 students (59% females), mostly Hispanic (43%) or Caucasian (34%). In grades 1-6, 66.7% reported under the 200 min per 10 school days required by the PE code. Only 20.7% had daily PE. The average number of PE days/week was 2.6. In grades 7-12, 42.2% reported under the 400 min per 10 school days required. Daily PE was noted in 47.8%. The average number of PE days/week was 3.4. Almost 17% had no PE, most notably in the final two grades of high school (45.7%). There is low adherence to the California Physical Education mandate in the San Diego area, contributing to poor fitness and obesity. Lack of adequate PE is most evident in grades 1-6 and grades 11-12. Better resources, awareness, and enforcement are crucial.

  14. BOOK REVIEW Cracking the Einstein Code: Relativity and the Birth of Black Hole Physics With an Afterword by Roy Kerr Cracking the Einstein Code: Relativity and the Birth of Black Hole Physics With an Afterword by Roy Kerr

    Science.gov (United States)

    Carr, Bernard

    2011-02-01

    General relativity is arguably the most beautiful scientific theory ever conceived but its status within mainstream physics has vacillated since it was proposed in 1915. It began auspiciously with the successful explanation of the precession of Mercury and the dramatic confirmation of light-bending in the 1919 solar eclipse expedition, which turned Einstein into an overnight celebrity. Though little noticed at the time, there was also Karl Schwarzschild's discovery of the spherically symmetric solution in 1916 (later used to predict the existence of black holes) and Alexander Friedmann's discovery of the cosmological solution in 1922 (later confirmed by the discovery of the cosmic expansion). Then for 40 years the theory was more or less forgotten, partly because most physicists were turning their attention to the even more radical developments of quantum theory but also because the equations were too complicated to solve except in situations involving special symmetries or very weak gravitational fields (where general relativity is very similar to Newtonian theory). Furthermore, it was not clear that strong gravitational fields would ever arise in the real universe and, even if they did, it seemed unlikely that Einstein's equations could then be solved. So research in relativity became a quiet backwater as mainstream physics swept forward in other directions. Even Einstein lost interest, turning his attention to the search for a unified field theory. This book tells the remarkable story of how the tide changed in 1963, when the 28-year-old New Zealand mathematician Roy Kerr discovered an exact solution of Einstein's equations which represents a rotating black hole, thereby cracking the code of the title. The paper was just a few pages long, it being left for others to fill in the extensive beautiful mathematics which underlay the result, but it ushered in a golden age of relativity and is now one of the most cited works in physics. Coincidentally, Kerr

  15. Reading input flooding versus listening input flooding: Can they boost speaking skill?

    Directory of Open Access Journals (Sweden)

    Rashtchi Mojgan

    2017-01-01

    The present study compared the effects of reading input flooding and listening input flooding techniques on the accuracy and complexity of Iranian EFL learners’ speaking skill. Participants were 66 homogeneous intermediate EFL learners who were randomly divided into three groups of 22: a reading input flooding group, a listening input flooding group, and a control group. The reading input flooding group was exposed to numerous examples of the target structures through reading. In the same phase, the listening group was given essentially the same task through listening. The participants’ monologues in the posttest were separately recorded, and later transcribed and coded in terms of accuracy and complexity using Bygate’s (2001) standard coding system. The results of ANCOVA indicated the outperformance of the reading input flooding group. The study also supported the trade-off effects (Skehan, 1998, 2009) between accuracy and complexity.

  16. Expander chunked codes

    Science.gov (United States)

    Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu

    2015-12-01

    Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97% of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.
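
    The core idea of RLNC within a chunk can be sketched over GF(2), where coding reduces to XORs; real chunked codes typically use larger fields and, for EC codes, overlapping chunks built from regular graphs. Everything below is an illustrative toy, not the paper's construction.

        # RLNC within a single chunk over GF(2).
        import numpy as np

        rng = np.random.default_rng(4)
        chunk = rng.integers(0, 2, size=(4, 16), dtype=np.uint8)   # 4 packets x 16 bits

        def encode(chunk, n_coded):
            coeffs = rng.integers(0, 2, size=(n_coded, chunk.shape[0]), dtype=np.uint8)
            return coeffs, coeffs @ chunk % 2          # random linear combinations

        def decode(coeffs, coded):
            # Gaussian elimination over GF(2); succeeds once coeffs reaches full rank
            A = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
            m, k = A.shape[0], coeffs.shape[1]
            row = 0
            for col in range(k):
                piv = next((r for r in range(row, m) if A[r, col]), None)
                if piv is None:
                    continue
                A[[row, piv]] = A[[piv, row]]
                for r in range(m):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                row += 1
            return A[:k, k:] if row == k else None     # None: need more coded packets

        coeffs, coded = encode(chunk, n_coded=8)       # small overhead against rank loss
        recovered = decode(coeffs, coded)
        print(recovered is not None and np.array_equal(recovered, chunk))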

  17. Input parameters and scenarios, including economic inputs

    DEFF Research Database (Denmark)

    Boklund, Anette; Hisham Beshara Halasa, Tariq

    2012-01-01

    to day 8 after the herd was infected, and increased to 1 after day 8. The outputs from the epidemiological models were used as inputs in an economic model to calculate costs and losses for each epidemic. The costs of an epidemic were divided into direct and indirect costs. The direct costs consisted...

  18. Interface requirements to couple thermal-hydraulic codes to severe accident codes: ATHLET-CD

    Energy Technology Data Exchange (ETDEWEB)

    Trambauer, K. [GRS, Garching (Germany)

    1997-07-01

    The system code ATHLET-CD is being developed by GRS in cooperation with IKE and IPSN. Its field of application comprises the whole spectrum of leaks and large breaks, as well as operational and abnormal transients for LWRs and VVERs. At present the analyses cover the in-vessel thermal-hydraulics, the early phases of core degradation, as well as fission product and aerosol release from the core and their transport in the reactor coolant system. The aim of the code development is to extend the simulation of core degradation up to failure of the reactor pressure vessel and to cover all physically reasonable accident sequences for western and eastern LWRs, including RBMKs. The ATHLET-CD structure is highly modular in order to include a manifold spectrum of models and to offer an optimum basis for further development. The code consists of four general modules to describe the reactor coolant system thermal-hydraulics, the core degradation, the fission product core release, and fission product and aerosol transport. Each general module consists of several basic modules which correspond to the process to be simulated or to its specific purpose. Besides the code structure based on the physical modelling, the code follows four strictly separated steps during the course of a calculation: (1) input of structure, geometrical data, and initial and boundary conditions; (2) initialization of derived quantities; (3) steady-state calculation or input of restart data; and (4) transient calculation. In this paper, the transient solution method is briefly presented and the coupling methods are discussed. Three aspects have to be considered for the coupling of different modules in one code system. The first is the conservation of mass and energy in the different subsystems (fluid, structures, and fission products and aerosols). The second is the convergence of the numerical solution and the stability of the calculation. The third aspect is related to code performance and running time.

  19. Physical, taxonomic code, and other data from current meter and other instruments in New York Bight from DOLPHIN and other platforms; 14 March 1971 to 03 August 1975 (NODC Accession 7601385)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Physical, taxonomic code, and other data were collected using current meter and other instruments from DOLPHIN and other platforms in New York Bight. Data were...

  20. Calculation of surface and top of atmosphere radiative fluxes from physical quantities based on ISCCP data sets. 1: Method and sensitivity to input data uncertainties

    Science.gov (United States)

    Zhang, Y.-C.; Rossow, W. B.; Lacis, A. A.

    1995-01-01

    The largest uncertainty in upwelling shortwave (SW) fluxes (approximately 10-15 W/m², regional daily mean) is caused by uncertainties in land surface albedo, whereas the largest uncertainty in downwelling SW at the surface (approximately 5-10 W/m², regional daily mean) is related to cloud detection errors. The uncertainty of upwelling longwave (LW) fluxes (approximately 10-20 W/m², regional daily mean) depends on the accuracy of the surface temperature for the surface LW fluxes and of the atmospheric temperature for the top-of-atmosphere LW fluxes. The dominant source of uncertainty in downwelling LW fluxes at the surface (approximately 10-15 W/m²) is uncertainty in atmospheric temperature and, secondarily, atmospheric humidity; clouds play little role except in the polar regions. The uncertainties of the individual flux components and the total net fluxes are largest over land (15-20 W/m²) because of uncertainties in surface albedo (especially its spectral dependence) and surface temperature and emissivity (including its spectral dependence). Clouds are the most important modulator of the SW fluxes, but over land areas, uncertainties in net SW at the surface depend almost as much on uncertainties in surface albedo. Although atmospheric and surface temperature variations cause larger LW flux variations, the most notable feature of the net LW fluxes is the changing relative importance of clouds and water vapor with latitude. Uncertainty in individual flux values is dominated by sampling effects because of large natural variations, but uncertainty in monthly mean fluxes is dominated by bias errors in the input quantities.

  1. GPU Acceleration of the Locally Selfconsistent Multiple Scattering Code for First Principles Calculation of the Ground State and Statistical Physics of Materials

    Science.gov (United States)

    Eisenbach, Markus

    The Locally Self-consistent Multiple Scattering (LSMS) code solves the first principles Density Functional theory Kohn-Sham equation for a wide range of materials with a special focus on metals, alloys and metallic nano-structures. It has traditionally exhibited near perfect scalability on massively parallel high performance computer architectures. We present our efforts to exploit GPUs to accelerate the LSMS code to enable first principles calculations of O(100,000) atoms and statistical physics sampling of finite temperature properties. Using the Cray XK7 system Titan at the Oak Ridge Leadership Computing Facility we achieve a sustained performance of 14.5PFlop/s and a speedup of 8.6 compared to the CPU only code. This work has been sponsored by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Material Sciences and Engineering Division and by the Office of Advanced Scientific Computing. This work used resources of the Oak Ridge Leadership Computing Facility, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.

  2. Quantum coding theorems

    Science.gov (United States)

    Holevo, A. S.

    1998-12-01

    Contents: I. Introduction. II. General considerations: §1. Quantum communication channel; §2. Entropy bound and channel capacity; §3. Formulation of the quantum coding theorem, weak converse. III. Proof of the direct statement of the coding theorem: §1. Channels with pure signal states; §2. Reliability function; §3. Quantum binary channel; §4. Case of arbitrary states with bounded entropy. IV. c-q channels with input constraints: §1. Coding theorem; §2. Gauss channel with one degree of freedom; §3. Classical signal on quantum background noise. Bibliography.

  3. [Effects of litterfall and root input on soil physical and chemical properties in Pinus massoniana plantations in Three Gorges Reservoir Area, China].

    Science.gov (United States)

    Ge, Xiao-Gai; Huang, Zhi-Lin; Cheng, Rui-Mei; Zeng, Li-Xiong; Xiao, Wen-Fa; Tan, Ben-Wang

    2012-12-01

    An investigation was made of the soil physical and chemical properties in different-aged Pinus massoniana plantations in the Three Gorges Reservoir Area under the effects of litterfall and roots. The annual litter production in the mature stand was 19.4% and 65.7% higher than that in the nearly mature and middle-aged stands, respectively. The litter standing amount was in the sequence of mature stand > middle-aged stand > nearly mature stand, while the litter turnover coefficient was in the order of nearly mature stand (0.51) > mature stand (0.40) > middle-aged stand (0.36). The total root biomass, live root biomass, and dead root biomass were the highest in the middle-aged stand, and the lowest in the nearly mature stand. In the middle-aged stand, soil total porosity was the highest, and soil bulk density was the lowest. Soil organic matter and total nitrogen contents were in the order of mature stand > middle-aged stand > nearly mature stand; soil nitrate nitrogen occupied a larger proportion of soil mineral N in the nearly mature stand, while ammonium nitrogen accounted for more in the middle-aged and mature stands. In the nearly mature stand, litter production was moderate but the turnover coefficient was the highest, and soil nutrient contents were the lowest. In the middle-aged stand, root biomass and soil total porosity were the highest, and soil bulk density was the lowest. In the mature stand, root biomass was lower, while soil nutrient contents were the highest. The increase of root biomass could improve soil physical properties.

  4. High-burn up 10 x 10 100%MOX ABWR core physics analysis with APOLLO2.8 and TRIPOLI-4.5 codes

    Energy Technology Data Exchange (ETDEWEB)

    Blaise, Patrick, E-mail: patrick.blaise@cea.f [Centre de Cadarache, DEN-CAD/DER/SPRC - building 230, F-13108 Saint Paul-Lez-Durance (France); Huot, Nicolas [Centre de Saclay, DEN-DANS/DM2S/SERMA - building 470, F-91191 Gif-sur-Yvette (France); Thiollay, Nicolas [Centre de Cadarache, DEN-CAD/DER/SPEX - building 238, F-13108 Saint Paul-Lez-Durance (France); Fougeras, Philippe; Santamarina, Alain [Centre de Cadarache, DEN-CAD/DER/SPRC - building 230, F-13108 Saint Paul-Lez-Durance (France)

    2010-07-15

    Within the frame of several extensive experimental core physics programs conducted between 1996 and 2008 by CEA and the Japan Nuclear Energy Safety Organization (JNES), the FUBILA experiment was carried out in the French EOLE facility between 2005 and 2006 to obtain valuable data for the validation of core analysis methods for full-MOX advanced BWR and high-burnup BWR cores. During this experimental campaign, a particular FUBILA 10 x 10 Advanced BWR configuration devoted to the validation of high-burnup 100% MOX BWR bundles was built. It is characterized by an assembly-average total Pu enrichment of 10.6 wt.% and an in-channel void fraction of 40%, representative of hot full power conditions at core mid-plane and of an average discharge burnup of 65 GWd/t. This paper details the validation work on the TRIPOLI-4.5 continuous-energy Monte Carlo code and the APOLLO2.8/CEA2005V4 deterministic code package for the interpretation of this 10 x 10 high-burnup configuration. The APOLLO2.8/CEA2005V4 package relies on the deterministic lattice transport code APOLLO2.8, based on the Method of Characteristics (MOC), and its new CEA2005V4 multigroup library based on the latest JEFF-3.1.1 nuclear data file, processed also for the TRIPOLI-4.5 code. The results obtained for critical mass and radial pin-by-pin power distributions are presented. For critical mass, the calculation-to-experiment difference (C-E) on k_eff spreads from 300 pcm for TRIPOLI to 600 pcm for APOLLO2.8 in its Optimized BWR Scheme (OBS) in 26 groups. For pin-by-pin radial power distributions, all codes give acceptable results, with maximum discrepancies on C/E - 1 of the order of 3-4% for very heterogeneous bundles where P_max/P_min reaches 4.2. These values are within 2 standard deviations of the experimental uncertainty. These results demonstrate the capability of both codes and schemes to accurately predict advanced high-burnup 100% MOX BWR key neutron parameters.

  5. New versions of VENTURE/PC, a multigroup, multidimensional diffusion-depletion code system

    Energy Technology Data Exchange (ETDEWEB)

    Shapiro, A.; Huria, H.C. (Univ. of Cincinnati, OH (United States))

    1990-01-01

    VENTURE/PC is a microcomputer version of the BOLD VENTURE code system developed over a period of years at Oak Ridge National Laboratory (ORNL). It is a very complete and flexible multigroup, multidimensional, diffusion-depletion code system, which was developed for the Idaho National Engineering Laboratory personal computer (PC) based reactor physics and radiation shielding analysis package. The major characteristics of the code system were reported previously. Since that time, new versions of the code system have been developed. These new versions were designed to speed the convergence process, simplify the input stream, and extend the code to the state-of-the-art 32-bit microcomputers. The 16-bit version of the code is distributed by the Radiation Shielding Information Center (RSIC) at ORNL. The code has received widespread usage.

  6. Serial Input Output

    Energy Technology Data Exchange (ETDEWEB)

    Waite, Anthony; /SLAC

    2011-09-07

    Serial Input/Output (SIO) is designed to be a long term storage format of a sophistication somewhere between simple ASCII files and the techniques provided by inter alia Objectivity and Root. The former tend to be low density, information lossy (floating point numbers lose precision) and inflexible. The latter require abstract descriptions of the data with all that that implies in terms of extra complexity. The basic building blocks of SIO are streams, records and blocks. Streams provide the connections between the program and files. The user can define an arbitrary list of streams as required. A given stream must be opened for either reading or writing. SIO does not support read/write streams. If a stream is closed during the execution of a program, it can be reopened in either read or write mode to the same or a different file. Records represent a coherent grouping of data. Records consist of a collection of blocks (see next paragraph). The user can define a variety of records (headers, events, error logs, etc.) and request that any of them be written to any stream. When SIO reads a file, it first decodes the record name and if that record has been defined and unpacking has been requested for it, SIO proceeds to unpack the blocks. Blocks are user provided objects which do the real work of reading/writing the data. The user is responsible for writing the code for these blocks and for identifying these blocks to SIO at run time. To write a collection of blocks, the user must first connect them to a record. The record can then be written to a stream as described above. Note that the same block can be connected to many different records. When SIO reads a record, it scans through the blocks written and calls the corresponding block object (if it has been defined) to decode it. Undefined blocks are skipped. Each of these categories (streams, records and blocks) have some characteristics in common. Every stream, record and block has a name with the condition that each
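
    The stream/record/block layering described above can be mimicked in a few lines of Python; this is only a structural sketch (with JSON standing in for SIO's binary encoding), not the SIO implementation itself.

        # Structural sketch: a block serializes its own payload, a record groups
        # blocks, a stream connects records to a file. Illustrative only.
        import json

        class EventBlock:
            name = "event_data"
            def __init__(self, data):
                self.data = data
            def write(self):                 # a block writes its own data
                return self.data

        class Record:
            def __init__(self, name, blocks):
                self.name, self.blocks = name, blocks
            def write(self, stream):         # a record groups blocks onto a stream
                stream.write(json.dumps({
                    "record": self.name,
                    "blocks": {b.name: b.write() for b in self.blocks},
                }) + "\n")

        with open("run.sio.txt", "w") as stream:          # a write-only "stream"
            Record("header", [EventBlock({"run": 1})]).write(stream)
            Record("event", [EventBlock({"hits": [1, 2, 3]})]).write(stream)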

  7. MDS MIC Catalog Inputs

    Science.gov (United States)

    Johnson-Throop, Kathy A.; Vowell, C. W.; Smith, Byron; Darcy, Jeannette

    2006-01-01

    This viewgraph presentation reviews the inputs to the MDS Medical Information Communique (MIC) catalog. The purpose of the group is to provide input for updating the MDS MIC Catalog and to request that MMOP assign Action Items to other working groups and FSs to support the MITWG Process for developing MIC-DDs.

  8. Rate-Compatible LDPC Codes with Linear Minimum Distance

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel

    2009-01-01

    A recently developed method of constructing protograph-based low-density parity-check (LDPC) codes provides for low iterative decoding thresholds and minimum distances proportional to block sizes, and can be used for various code rates. A code constructed by this method can have either a fixed input block size or a fixed output block size and, in either case, provides rate compatibility. The method comprises two submethods: one for fixed input block size and one for fixed output block size. The first submethod is useful for applications that require rate-compatible codes with fixed input block sizes; these are codes in which only the number of parity bits is allowed to vary. The fixed-output-block-size submethod is useful for applications in which framing constraints are imposed on the physical layers of the affected communication systems. An example of such a system is one that conforms to one of many new wireless-communication standards that involve the use of orthogonal frequency-division modulation
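
    To make the fixed-input-block-size idea concrete, here is a hedged toy in Python: an illustrative sparse-parity systematic encoder, not the protograph LDPC construction of the paper. The k information bits are fixed and only the number of parity bits (and hence the rate) varies; because the check generator is re-seeded identically, each lower-rate codeword is an extension of the higher-rate ones.

        import random

        def sparse_parity_encoder(info_bits, n_parity, seed=7):
            """Toy systematic encoder with a fixed input block size.

            Illustrates rate compatibility in the fixed-input-block-size sense:
            the k information bits never change, only the number of parity bits
            (and hence the rate k / (k + n_parity)) does. This is a sparse-parity
            toy, not the protograph LDPC construction of the paper.
            """
            rng = random.Random(seed)               # same seed -> nested codewords
            k = len(info_bits)
            parity = []
            for _ in range(n_parity):
                i, j, l = rng.sample(range(k), 3)   # each check taps 3 info bits
                parity.append(info_bits[i] ^ info_bits[j] ^ info_bits[l])
            return info_bits + parity               # systematic codeword

        info = [1, 0, 1, 1, 0, 0, 1, 0]             # fixed k = 8 input block
        for n_parity in (4, 8, 16):                 # three compatible rates
            word = sparse_parity_encoder(info, n_parity)
            print(f"rate {len(info)}/{len(word)}:", word)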

  9. The Procions' code; Le code Procions

    Energy Technology Data Exchange (ETDEWEB)

    Deck, D.; Samba, G.

    1994-12-19

    This paper presents a new code to simulate plasmas generated by inertial confinement. This multi-species kinetic code uses no angular approximation for the ions and works in planar and spherical geometry. First, the physical model, based on the Fokker-Planck equation, is presented. Then, the numerical model used to solve the Fokker-Planck operator in its Rosenbluth form is introduced. Finally, several numerical tests are proposed. (TEC). 17 refs., 27 figs.

  10. BETA. A code for {beta}{sub eff} measurement and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Yuichi; Okajima, Shigeaki; Sakurai, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1999-03-01

    The code BETA has been developed to calculate the following reactor physics parameters used in {beta}{sub eff} measurement and analysis: the Diven factor, the spatial correction factor (g-factor) for neutron correlation experiments, the adjoint-weighted g-factor, the fission rate integrated over the whole reactor, and the adjoint-weighted fission rate integrated over the whole reactor. The code also calculates the effective delayed neutron fraction with different evaluated delayed neutron data sets. These parameters are calculated using the forward and adjoint fluxes given by SLAROM, POPLARS or TWOTRAN-II. This report describes the input data and job control statement instructions, file requirements, and sample input/output data. (author)
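
    As a hedged aside (these are the standard textbook forms; the report itself fixes the exact conventions and delayed neutron data used), the Diven factor and the adjoint-weighted effective delayed neutron fraction are conventionally written as

        D_\nu = \frac{\langle \nu(\nu - 1) \rangle}{\langle \nu \rangle^{2}},
        \qquad
        \beta_{\mathrm{eff}} = \frac{\int \phi^{\dagger}\, \chi_{d}\, \nu_{d} \Sigma_{f}\, \phi}{\int \phi^{\dagger}\, \chi\, \nu \Sigma_{f}\, \phi},

    where the integrals run over space, angle and energy, phi and phi† are the forward and adjoint fluxes (here supplied by SLAROM, POPLARS or TWOTRAN-II), nu and nu_d are the total and delayed neutron yields, and chi and chi_d the corresponding fission spectra.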

  11. Group-Orthogonal Code-Division Multiplex: A Physical-Layer Enhancement for IEEE 802.11n Networks

    Directory of Open Access Journals (Sweden)

    Felip Riera-Palou

    2010-01-01

    The new standard for wireless local area networks (WLANs), IEEE 802.11n, has recently been released. This new norm builds upon and remains compatible with the previous WLAN standards IEEE 802.11a/g while achieving transmission rates of up to 600 Mbps. These increased data rates are mainly a consequence of two important new features: (1) multiple-antenna technology at transmission and reception, and (2) optional doubling of the system bandwidth thanks to the availability of an additional 20 MHz band. This paper proposes the use of Group-Orthogonal Code Division Multiplex (GO-CDM) as a means to improve the performance of the 802.11n standard by further exploiting the inherent frequency diversity. It is explained why GO-CDM synergistically matches the two aforementioned new features, and the performance gains it can offer under different configurations are illustrated. Furthermore, the effects that group-orthogonal multiplexing has on key implementation issues such as channel estimation, carrier frequency offset, and peak-to-average power ratio (PAPR) are also considered.

  12. PLEXOS Input Data Generator

    Energy Technology Data Exchange (ETDEWEB)

    2017-02-01

    The PLEXOS Input Data Generator (PIDG) is a tool that enables PLEXOS users to better version their data, automate data processing, collaborate in developing inputs, and transfer data between different production cost modeling and other power systems analysis software. PIDG can process data that is in a generalized format from multiple input sources, including CSV files, PostgreSQL databases, and PSS/E .raw files and write it to an Excel file that can be imported into PLEXOS with only limited manual intervention.
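
    As an illustration of the kind of processing PIDG automates (a hedged sketch with hypothetical file and column names, not the PIDG source), the step from generalized CSV inputs to an importable Excel workbook is a few lines of Python with pandas:

        import pandas as pd  # assumes pandas and an Excel writer such as openpyxl

        # Hypothetical inputs: per-generator data plus a fuel price table.
        generators = pd.read_csv("generators.csv")   # name, fuel, max_mw, heat_rate
        fuels = pd.read_csv("fuel_prices.csv")       # fuel, price_per_mmbtu

        # Light processing of the generalized input format before export.
        table = generators.merge(fuels, on="fuel", how="left")
        table["marginal_cost"] = table["heat_rate"] * table["price_per_mmbtu"]

        # Write a workbook that a production cost model could import.
        with pd.ExcelWriter("plexos_input.xlsx") as writer:
            table.to_excel(writer, sheet_name="Generators", index=False)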

  13. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  14. Maestro and Castro: Simulation Codes for Astrophysical Flows

    Science.gov (United States)

    Zingale, Michael; Almgren, Ann; Beckner, Vince; Bell, John; Friesen, Brian; Jacobs, Adam; Katz, Maximilian P.; Malone, Christopher; Nonaka, Andrew; Zhang, Weiqun

    2017-01-01

    Stellar explosions are multiphysics problems—modeling them requires the coordinated input of gravity solvers, reaction networks, radiation transport, and hydrodynamics together with microphysics recipes to describe the physics of matter under extreme conditions. Furthermore, these models involve following a wide range of spatial and temporal scales, which puts tough demands on simulation codes. We developed the codes Maestro and Castro to meet the computational challenges of these problems. Maestro uses a low Mach number formulation of the hydrodynamics to efficiently model convection. Castro solves the fully compressible radiation hydrodynamics equations to capture the explosive phases of stellar phenomena. Both codes are built upon the BoxLib adaptive mesh refinement library, which prepares them for next-generation exascale computers. Common microphysics shared between the codes allows us to transfer a problem from the low Mach number regime in Maestro to the explosive regime in Castro. Importantly, both codes are freely available (https://github.com/BoxLib-Codes). We will describe the design of the codes and some of their science applications, as well as future development directions. Support for development was provided by NSF award AST-1211563 and DOE/Office of Nuclear Physics grant DE-FG02-87ER40317 to Stony Brook and by the Applied Mathematics Program of the DOE Office of Advanced Scientific Computing Research under US DOE contract DE-AC02-05CH11231 to LBNL.

  15. Measuring input synchrony in the Ornstein-Uhlenbeck neuronal model through input parameter estimation.

    Science.gov (United States)

    Koutsou, Achilleas; Kanev, Jacob; Christodoulou, Chris

    2013-11-06

    We present a method of estimating the input parameters and through them, the input synchrony, of a stochastic leaky integrate-and-fire neuronal model based on the Ornstein-Uhlenbeck process when it is driven by time-dependent sinusoidal input signal and noise. By driving the neuron using sinusoidal inputs, we simulate the effects of periodic synchrony on the membrane voltage and the firing of the neuron, where the peaks of the sine wave represent volleys of synchronised input spikes. Our estimation methods allow us to measure the degree of synchrony driving the neuron in terms of the input sine wave parameters, using the output spikes of the model and the membrane potential. In particular, by estimating the frequency of the synchronous input volleys and averaging the estimates of the level of input activity at corresponding intervals of the input signal, we obtain fairly accurate estimates of the baseline and peak activity of the input, which in turn define the degrees of synchrony. The same procedure is also successfully applied in estimating the baseline and peak activity of the noise. This article is part of a Special Issue entitled Neural Coding 2012. Copyright © 2013 Elsevier B.V. All rights reserved.
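
    A minimal Euler-Maruyama sketch of the kind of model described above, with illustrative parameter values rather than those of the paper: a leaky integrate-and-fire membrane driven by a sinusoidal mean input, whose peaks stand in for synchronised volleys, plus noise.

        import numpy as np

        rng = np.random.default_rng(0)
        dt, T = 1e-4, 2.0                      # time step (s), duration (s)
        tau, v_reset, v_thresh = 0.02, 0.0, 1.0
        base, peak, freq = 30.0, 80.0, 5.0     # illustrative input levels, Hz drive
        sigma = 0.5                            # noise amplitude

        n = int(T / dt)
        t = np.arange(n) * dt
        # Sinusoidal mean drive: peaks stand in for volleys of synchronised spikes.
        mu = base + 0.5 * (peak - base) * (1.0 + np.sin(2.0 * np.pi * freq * t))

        v, spikes = 0.0, []
        for i in range(n):
            dv = (-v / tau + mu[i]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            v += dv
            if v >= v_thresh:                  # threshold crossing: spike and reset
                spikes.append(t[i])
                v = v_reset

        print(f"{len(spikes)} spikes; mean rate {len(spikes) / T:.1f} Hz")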

  16. Physics study of microbeam radiation therapy with PSI-version of Monte Carlo code GEANT as a new computational tool

    CERN Document Server

    Stepanek, J; Laissue, J A; Lyubimova, N; Di Michiel, F; Slatkin, D N

    2000-01-01

    Microbeam radiation therapy (MRT) is a currently experimental method of radiotherapy which is mediated by an array of parallel microbeams of synchrotron-wiggler-generated X-rays. Suitably selected, nominally supralethal doses of X-rays delivered to parallel microslices of tumor-bearing tissues in rats can be either palliative or curative while causing little or no serious damage to contiguous normal tissues. Although the pathogenesis of MRT-mediated tumor regression is not understood, as in all radiotherapy such understanding will be based ultimately on our understanding of the relationships among the following three factors: (1) microdosimetry, (2) damage to normal tissues, and (3) therapeutic efficacy. Although physical microdosimetry is feasible, published information on MRT microdosimetry to date is computational. This report describes Monte Carlo-based computational MRT microdosimetry using photon and/or electron scattering and photoionization cross-section data in the 1 eV through 100 GeV range distrib...

  17. INTERLIS Language for Modelling Legal 3D Spaces and Physical 3D Objects by Including Formalized Implementable Constraints and Meaningful Code Lists

    Directory of Open Access Journals (Sweden)

    Eftychia Kalogianni

    2017-10-01

    The Land Administration Domain Model (LADM) is one of the first ISO spatial domain standards, and has been proven one of the best candidates for unambiguously representing 3D Rights, Restrictions and Responsibilities. Consequently, multiple LADM-based country profile implementations have been developed since the approval of LADM as an ISO standard; however, there is still a gap for technical implementations. This paper summarizes LADM implementation approaches distilled from relevant publications available to date. Models based on land administration standards do focus on the legal aspects of urban structures; however, the juridical boundaries in 3D are sometimes (partly) bound by the corresponding physical objects, leading to ambiguous situations. To that end, more integrated approaches are being developed at a conceptual level, and it is evident that the evaluation and validation of 3D legal and physical models—both separately and together in the form of an integrated model—is vital. This paper briefly presents the different approaches to legal and physical integration that have been developed in the last decade, while the need for more explicit relationships between legal and physical notions is highlighted. In this regard, recent experience gained from implementing INTERLIS, the Swiss standard that enables land information system communications, in LADM-based country profiles, suggests the possibility of an integrated LADM/INTERLIS approach. Considering semantic interoperability within integrated models, the need for more formal semantics is underlined by introducing formalization of code lists and explicit definition of constraints. Last but not least, the first results of case studies based on the generic LADM/INTERLIS approach are presented.

  18. Cyclone Codes

    OpenAIRE

    Schindelhauer, Christian; Jakoby, Andreas; Köhler, Sven

    2016-01-01

    We introduce Cyclone codes which are rateless erasure resilient codes. They combine Pair codes with Luby Transform (LT) codes by computing a code symbol from a random set of data symbols using bitwise XOR and cyclic shift operations. The number of data symbols is chosen according to the Robust Soliton distribution. XOR and cyclic shift operations establish a unitary commutative ring if data symbols have a length of $p-1$ bits, for some prime number $p$. We consider the graph given by code sym...
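
    The encoding step described above can be sketched in a few lines of Python. This is a hedged illustration under stated assumptions (illustrative c, delta and p; the decoder and the ring-theoretic details are in the paper): draw a degree from the Robust Soliton distribution, pick that many data symbols of p-1 bits, and combine them with random cyclic shifts and XOR.

        import math
        import random

        def robust_soliton(k, c=0.1, delta=0.5):
            """Robust Soliton degree distribution over 1..k (standard LT form)."""
            R = c * math.log(k / delta) * math.sqrt(k)
            rho = [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
            tau = [0.0] * (k + 1)
            spike = max(1, min(k, int(round(k / R))))
            for d in range(1, spike):
                tau[d] = R / (d * k)
            tau[spike] += R * math.log(R / delta) / k
            z = sum(rho) + sum(tau)
            return [(rho[d] + tau[d]) / z for d in range(k + 1)]

        def rot(x, s, bits):
            """Cyclic left shift of integer x within a word of `bits` bits."""
            s %= bits
            mask = (1 << bits) - 1
            return ((x << s) | (x >> (bits - s))) & mask

        def encode_symbol(data, p, rng):
            """One Cyclone-style code symbol: XOR of cyclically shifted data symbols.

            Data symbols are p-1 bits long for a prime p, the word length under
            which XOR plus cyclic shift forms the commutative ring of the paper.
            """
            k, bits = len(data), p - 1
            dist = robust_soliton(k)
            degree = rng.choices(range(k + 1), weights=dist)[0]
            picked = rng.sample(range(k), degree)
            shifts = [rng.randrange(bits) for _ in picked]
            symbol = 0
            for i, s in zip(picked, shifts):
                symbol ^= rot(data[i], s, bits)
            return picked, shifts, symbol   # (neighbours, shifts) are the metadata

        rng = random.Random(42)
        p = 13                              # prime: data symbols are p-1 = 12 bits
        data = [rng.randrange(1 << (p - 1)) for _ in range(20)]
        print(encode_symbol(data, p, rng))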

  19. Some calculator programs for particle physics. [LEGENDRE, ASSOCIATED LEGENDRE, CONFIDENCE, TWO BODY, ELLIPSE, DALITZ RECTANGULAR, and DALITZ TRIANGULAR codes

    Energy Technology Data Exchange (ETDEWEB)

    Wohl, C.G.

    1982-01-01

    Seven calculator programs that do simple chores that arise in elementary particle physics are given. LEGENDRE evaluates the Legendre polynomial series sum a_n P_n(x) at a series of values of x. ASSOCIATED LEGENDRE evaluates the first associated Legendre polynomial series sum b_n P_n^1(x) at a series of values of x. CONFIDENCE calculates confidence levels for chi^2, Gaussian, or Poisson probability distributions. TWO BODY calculates the c.m. energy, the initial- and final-state c.m. momenta, and the extreme values of t and u for a 2-body reaction. ELLIPSE calculates coordinates of points for drawing an ellipse plot showing the kinematics of a 2-body reaction or decay. DALITZ RECTANGULAR calculates coordinates of points on the boundary of a rectangular Dalitz plot. DALITZ TRIANGULAR calculates coordinates of points on the boundary of a triangular Dalitz plot. There are short versions of CONFIDENCE (EVEN N and POISSON) that calculate confidence levels for the even-degree-of-freedom chi^2 and Poisson cases, and there is a short version of TWO BODY (CM) that calculates just the c.m. energy and initial-state momentum. The programs are written for the HP-97 calculator. (WHK)
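
    The LEGENDRE and CONFIDENCE chores translate directly to modern library calls; a brief sketch, assuming numpy and scipy are available:

        import numpy as np
        from numpy.polynomial import legendre
        from scipy import stats

        # LEGENDRE: evaluate sum_n a_n P_n(x) at several x values.
        a = [1.0, 0.5, 0.25]                 # coefficients a_0, a_1, a_2
        x = np.linspace(-1.0, 1.0, 5)
        print(legendre.legval(x, a))

        # CONFIDENCE: confidence level for chi^2 = 12.3 with 10 degrees of freedom.
        print(stats.chi2.sf(12.3, df=10))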

  20. Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes

    OpenAIRE

    I. P. Pakhotin; A. Y. Drozdov; Yuri Shprits; R. J. Boynton; D. A. Subbotin; M. A. Balikhin

    2014-01-01

    This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system ...

  1. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition; this, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
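
    As a related, standard illustration (the classical Sardinas-Patterson test for the unique decipherability that coding partitions weaken, not the paper's canonical-partition algorithm), unique decipherability of a finite code can be decided in a few lines of Python:

        def dangling(A, B, skip_equal=False):
            """Suffixes w with u = v + w for u in A, v in B, or vice versa."""
            out = set()
            for u in A:
                for v in B:
                    if skip_equal and u == v:
                        continue
                    if u.startswith(v):
                        out.add(u[len(v):])
                    if v.startswith(u):
                        out.add(v[len(u):])
            return out

        def uniquely_decipherable(code):
            """Sardinas-Patterson: UD iff the empty suffix never appears."""
            code = set(code)
            s = dangling(code, code, skip_equal=True)  # S1: distinct codeword pairs
            seen = set()
            while s and frozenset(s) not in seen:
                if "" in s:          # a dangling suffix equals a codeword: ambiguity
                    return False
                seen.add(frozenset(s))
                s = dangling(code, s)
            return True              # suffix sets cycled without producing ""

        print(uniquely_decipherable({"0", "01", "11"}))                       # True
        print(uniquely_decipherable({"1", "011", "01110", "1110", "10011"}))  # False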

  2. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    Summary of the most important points from the main report: Documentation and Evaluation of Coding Class.

  3. Nuclear reaction inputs based on effective interactions

    Energy Technology Data Exchange (ETDEWEB)

    Hilaire, S.; Peru, S.; Dubray, N.; Dupuis, M.; Bauge, E. [CEA, DAM, DIF, Arpajon (France); Goriely, S. [Universite Libre de Bruxelles, Institut d' Astronomie et d' Astrophysique, CP-226, Brussels (Belgium)

    2016-11-15

    Extensive nuclear structure studies have been performed for decades using effective interactions as the sole input. They have shown a remarkable ability to describe rather accurately many types of nuclear properties. In the early 2000s, a major effort was launched to produce nuclear reaction input data from the Gogny interaction, in order to challenge its quality with respect to nuclear reaction observables as well. The status of this project, well advanced today thanks to the use of modern computers as well as modern nuclear reaction codes, is reviewed and future developments are discussed. (orig.)

  4. code {poems}

    Directory of Open Access Journals (Sweden)

    Ishac Bertran

    2012-08-01

    "Exploring the potential of code to communicate at the level of poetry," the code {poems} project solicited submissions from code-writers in response to the notion of a poem, written in a software language which is semantically valid. These selections reveal the inner workings, constitutive elements, and styles of both a particular software and its authors.

  5. Rich input i engelskundervisningen

    DEFF Research Database (Denmark)

    Melgaard, Bente; Guttesen, Maria Josephine; Jacobsen, Susanne Karen

    2017-01-01

    There are many good reasons to use authentic texts in English teaching at all levels. In English class, pupils are to become acquainted with varied input in the foreign language, and reading authentic texts is an encounter with language as it is aimed at target-language users, and gives...

  6. Access to Research Inputs

    DEFF Research Database (Denmark)

    Czarnitzki, Dirk; Grimpe, Christoph; Pellens, Maikel

    2015-01-01

    sciences, natural sciences, engineering, and social sciences, that scientists who receive industry funding are twice as likely to deny requests for research inputs as those who do not. Receiving external funding in general does not affect denying others access. Scientists who receive external funding...

  8. ColloInputGenerator

    DEFF Research Database (Denmark)

    2013-01-01

    This is a very simple program to help you put together input files for use in Gries' (2007) R-based collostruction analysis program. It basically puts together a text file with a frequency list of lexemes in the construction and inserts a column where you can add the corpus frequencies. It requir...

  9. The FLUKA code: an overview

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F [University of Pavia and INFN (Italy); Battistoni, G [University of Milan and INFN (Italy); Campanella, M; Carboni, M; Cerutti, F [University of Milan and INFN (Italy); Empl, A [University of Houston, Houston (United States); Fasso, A [SLAC, Stanford (United States); Ferrari, A [CERN, CH-1211 Geneva (Switzerland); Gadioli, E [University of Milan and INFN (Italy); Garzelli, M V [University of Milan and INFN (Italy); Lantz, M [University of Milan and INFN (Italy); Liotta, M [University of Pavia and INFN (Italy); Mairani, A [University of Pavia and INFN (Italy); Mostacci, A [Laboratori Nazionali di Frascati, INFN (Italy); Muraro, S [University of Milan and INFN (Italy); Ottolenghi, A [University of Pavia and INFN (Italy); Pelliccioni, M [Laboratori Nazionali di Frascati, INFN (Italy); Pinsky, L [University of Houston, Houston (United States); Ranft, J [Siegen University, Siegen (Germany); Roesler, S [CERN, CH-1211 Geneva (Switzerland); Sala, P R [University of Milan and INFN (Italy); Scannicchio, D [University of Pavia and INFN (Italy); Trovati, S [University of Pavia and INFN (Italy); Villari, R; Wilson, T [Johnson Space Center, NASA (United States); Zapp, N [Johnson Space Center, NASA (United States); Vlachoudis, V [CERN, CH-1211 Geneva (Switzerland)

    2006-05-15

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  10. The FLUKA Code: an Overview

    Energy Technology Data Exchange (ETDEWEB)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M.V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P.R.; /Milan U. /INFN, Milan /Pavia U. /INFN, Pavia /CERN /Siegen U.

    2005-11-09

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  11. The FLUKA code: an overview

    Science.gov (United States)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fassò, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.; Zapp, N.; Vlachoudis, V.

    2006-05-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models with a particular emphasis on the hadronic and nuclear sector.

  12. Automation of Geometry Input for Building Code Compliance Check

    DEFF Research Database (Denmark)

    Petrova, Ekaterina Aleksandrova; Johansen, Peter Lind; Jensen, Rasmus Lund

    2017-01-01

    That has left the industry in constant pursuit of possibilities for integration of the tool within the Building Information Modelling environment, so that the potential provided by the latter can be harvested and the processes can be optimized. This paper presents a solution for automated data extraction...

  13. A notion of sufficient input

    OpenAIRE

    Bertrand Crettez; Philippe Michel

    2003-01-01

    In this paper, we study a notion of sufficient input, i.e., an input that allows the production of at least one unit of output when the other inputs are fixed at any positive level. We show that such an input allows the production of any positive amount of output. The main property of sufficient inputs is as follows: an input is sufficient if and only if the unit cost goes to zero when its price goes to zero.

  14. Towards an affordable alternative educational video game input device

    CSIR Research Space (South Africa)

    Smith, Adrew C

    2008-05-01

    The authors present the prototype design results of an alternative physical educational video gaming input device. The device elicits increased physical activity from the players as compared to the compact gaming controller. Complicated...

  15. Overview of the ArbiTER edge plasma eigenvalue code

    Science.gov (United States)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.
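
    The equation-parser idea, a sequential instruction list that assembles a matrix from profile functions and elementary differential operators, can be caricatured in a few lines of Python. Everything below (the instruction format, operator names, profiles) is hypothetical and far simpler than ArbiTER's actual input language:

        import numpy as np

        n = 64
        x = np.linspace(0.0, 1.0, n)
        dx = x[1] - x[0]

        def identity():
            return np.eye(n)

        def d_dx():
            # centred first derivative, one-sided at the ends
            m = np.zeros((n, n))
            for i in range(1, n - 1):
                m[i, i - 1], m[i, i + 1] = -0.5 / dx, 0.5 / dx
            m[0, 0], m[0, 1] = -1.0 / dx, 1.0 / dx
            m[-1, -2], m[-1, -1] = -1.0 / dx, 1.0 / dx
            return m

        def profile(name):
            # profile functions become diagonal matrices
            profiles = {"density": 1.0 + 0.5 * np.cos(np.pi * x)}
            return np.diag(profiles[name])

        OPS = {"identity": identity, "d_dx": d_dx}

        def build_matrix(instructions):
            """Execute matrix-building instructions sequentially, as an equation
            parser might: each line multiplies another factor onto the operator."""
            m = np.eye(n)
            for line in instructions:
                kind, arg = line.split()
                factor = OPS[arg]() if kind == "op" else profile(arg)
                m = factor @ m
            return m

        # A hypothetical input deck for the operator  density * d/dx :
        deck = ["op d_dx", "profile density"]
        A = build_matrix(deck)
        print(np.linalg.eigvals(A)[:3])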

  16. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    Science.gov (United States)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  17. Sharing code.

    Science.gov (United States)

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  18. Analog Coding.

    Science.gov (United States)

    (CODING, ANALOG SYSTEMS), INFORMATION THEORY, DATA TRANSMISSION SYSTEMS, TRANSMITTER RECEIVERS, WHITE NOISE, PROBABILITY, ERRORS, PROBABILITY DENSITY FUNCTIONS, DIFFERENTIAL EQUATIONS, SET THEORY, COMPUTER PROGRAMS

  19. Surface code implementation of block code state distillation

    Science.gov (United States)

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A〉 state given 15 input copies. New block code state distillation methods can produce k improved |A〉 states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868
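
    A quick, hedged sanity check on the counts quoted above (the arithmetic here is ours, not the paper's): producing k improved |A〉 states costs 15k input copies with the older 15-to-1 protocol but 3k + 8 with the block codes, e.g. 60 versus 20 copies for k = 4. The ratio (3k + 8)/15k tends to 1/5 for large k, so the raw input-count saving is at most a factor of five per distillation round, consistent with the full surface-code accounting finding a typical overhead reduction below a factor of three.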

  20. Users manual for CAFE-3D : a computational fluid dynamics fire code.

    Energy Technology Data Exchange (ETDEWEB)

    Khalil, Imane; Lopez, Carlos; Suo-Anttila, Ahti Jorma (Alion Science and Technology, Albuquerque, NM)

    2005-03-01

    The Container Analysis Fire Environment (CAFE) computer code has been developed to model all relevant fire physics for predicting the thermal response of massive objects engulfed in large fires. It provides realistic fire thermal boundary conditions for use in design of radioactive material packages and in risk-based transportation studies. The CAFE code can be coupled to commercial finite-element codes such as MSC PATRAN/THERMAL and ANSYS. This coupled system of codes can be used to determine the internal thermal response of finite element models of packages to a range of fire environments. This document is a user manual describing how to use the three-dimensional version of CAFE, as well as a description of CAFE input and output parameters. Since this is a user manual, only a brief theoretical description of the equations and physical models is included.

  1. Divergence coding for convolutional codes

    Directory of Open Access Journals (Sweden)

    Valery Zolotarev

    2017-01-01

    In this paper we propose a new coding/decoding scheme based on the divergence principle. A new divergent multithreshold decoder (MTD) for convolutional self-orthogonal codes contains two threshold elements. The second threshold element decodes the code with a code distance one greater than that of the first threshold element. The error-correcting capability of the new MTD modification is higher than that of the traditional MTD. Simulation results show that the performance of the divergent schemes brings their region of effective operation approximately 0.5 dB closer to channel capacity. Note that if a sufficiently effective Viterbi decoder is included instead of the first threshold element, the divergence principle can achieve even more. Index Terms — error-correcting coding, convolutional code, decoder, multithreshold decoder, Viterbi algorithm.

  2. Comprehensible input and learning outcomes

    OpenAIRE

    Salazar Campillo, Patricia

    1996-01-01

    Second Conference for the Promotion of Research of the FCHS (academic year 1996-1997). In Krashen’s terms, optimal input has to be comprehensible to the learner if we want acquisition to take place. An overview of the literature on input indicates two ways of making input comprehensible: the first one is to premodify input before it is offered to the learner (premodified input), and the second one is to negotiate the input through interaction (interactionally modified input). The aim of the...

  3. PHYSICS

    CERN Multimedia

    P. Sphicas

    There have been three physics meetings since the last CMS week: “physics days” on March 27-29, the Physics/ Trigger week on April 23-27 and the most recent physics days on May 22-24. The main purpose of the March physics days was to finalize the list of “2007 analyses”, i.e. the few topics that the physics groups will concentrate on for the rest of this calendar year. The idea is to carry out a full physics exercise, with CMSSW, for select physics channels which test key features of the physics objects, or represent potential “day 1” physics topics that need to be addressed in advance. The list of these analyses was indeed completed and presented in the plenary meetings. As always, a significant amount of time was also spent in reviewing the status of the physics objects (reconstruction) as well as their usage in the High-Level Trigger (HLT). The major event of the past three months was the first “Physics/Trigger week” in Apri...

  4. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software development, Speaking Code unfolds an argument to undermine the distinctions between criticism and practice, and to emphasize the aesthetic and political aspects of software studies. Not reducible to its functional aspects, program code mirrors the instability inherent in the relationship of speech …; alternatives to mainstream development, from performances of the live-coding scene to the organizational forms of commons-based peer production; the democratic promise of social media and their paradoxical role in suppressing political expression; and the market’s emptying out of possibilities for free...

  5. Input or intimacy

    Directory of Open Access Journals (Sweden)

    Judit Navracsics

    2014-01-01

    According to the critical period hypothesis, the earlier the acquisition of a second language starts, the better. Owing to the plasticity of the brain, up until a certain age a second language can be acquired successfully according to this view. Early second language learners are commonly said to have an advantage over later ones, especially in phonetic/phonological acquisition. Native-like pronunciation is said to be most likely to be achieved by young learners. However, there is evidence of accent-free speech in second languages learnt after puberty as well. Occasionally, on the other hand, a non-native accent may appear even in early second (or third) language acquisition. Cross-linguistic influences are natural in multilingual development, and we would expect the dominant language to have an impact on the weaker one(s). The dominant language is usually the one that provides the largest amount of input for the child. But is it always the amount that counts? Perhaps sometimes other factors, such as emotions, come into play? In this paper, data obtained from an English-Persian-Hungarian trilingual pair of siblings (under ages 4 and 3, respectively) is analyzed, with a special focus on cross-linguistic influences at the phonetic/phonological levels. It will be shown that beyond the amount of input there are more important factors that trigger interference in multilingual development.

  6. "Gtool5": a Fortran90 library of input/output interfaces for self-descriptive multi-dimensional numerical data

    Directory of Open Access Journals (Sweden)

    M. Ishiwatari

    2012-04-01

    A Fortran90 input/output library, "gtool5", is developed for use with numerical simulation models in the fields of Earth and planetary sciences. The use of this library will simplify implementation of input/output operations into program code in a consolidated form independent of the size and complexity of the software and data. The library also enables simple specification of the metadata needed for post-processing and visualization of the data. These aspects improve the readability of simulation code, which facilitates the simultaneous performance of multiple numerical experiments with different software and efficiency in examining and comparing the numerical results. The library is expected to provide a common software platform to reinforce research on, for instance, the atmosphere and ocean, where a close combination of multiple simulation models with a wide variety of complexity of physics implementations from massive climate models to simple geophysical fluid dynamics models is required.

  7. PHYSICS

    CERN Multimedia

    D. Acosta

    2010-01-01

    A remarkable amount of progress has been made in Physics since the last CMS Week in June given the exponential growth in the delivered LHC luminosity. The first major milestone was the delivery of a variety of results to the ICHEP international conference held in Paris this July. For this conference, CMS prepared 15 Physics Analysis Summaries on physics objects and 22 Summaries on new and interesting physics measurements that exploited the luminosity recorded by the CMS detector. The challenge was incorporating the largest batch of luminosity that was delivered only days before the conference (300 nb-1 total). The physics covered from this initial running period spanned hadron production measurements, jet production and properties, electroweak vector boson production, and even glimpses of the top quark. Since then, the accumulated integrated luminosity has increased by a factor of more than 100, and all groups have been working tremendously hard on analysing this dataset. The September Physics Week was held ...

  8. PHYSICS

    CERN Multimedia

    J. Incandela

    There have been numerous developments in the physics area since the September CMS week. The biggest single event was the Physics/Trigger week at the end of October, whereas in terms of ongoing activities the “2007 analyses” went into high gear. This was in parallel with participation in CSA07 by the physics groups. On the organizational side, the new conveners of the physics groups have been selected, and a new database for managing physics analyses has been deployed. Physics/Trigger week The second Physics-Trigger week of 2007 took place during the week of October 22-26. The first half of the week was dedicated to working group meetings. The plenary Joint Physics-Trigger meeting took place on Wednesday afternoon and focused on the activities of the new Trigger Studies Group (TSG) and trigger monitoring. Both the Physics and Trigger organizations are now focused on readiness for early data-taking. Thus, early trigger tables and preparations for calibr...

  9. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  10. Coding labour

    National Research Council Canada - National Science Library

    McCosker, Anthony; Milne, Esther

    2014-01-01

    ... software. Code encompasses the laws that regulate human affairs and the operation of capital, behavioural mores and accepted ways of acting, but it also defines the building blocks of life as DNA...

  11. PHYSICS

    CERN Multimedia

    Submitted by

    Physics Week: plenary meeting on physics groups plans for startup (14–15 May 2008) The Physics Objects (POG) and Physics Analysis (PAG) Groups presented their latest developments at the plenary meeting during the Physics Week. In the presentations particular attention was given to startup plans and readiness for data-taking. Many results based on the recent cosmic run were shown. A special Workshop on SUSY, described in a separate section, took place the day before the plenary. At the meeting, we had also two special DPG presentations on “Tracker and Muon alignment with CRAFT” (Ernesto Migliore) and “Calorimeter studies with CRAFT” (Chiara Rovelli). We had also a report from Offline (Andrea Rizzi) and Computing (Markus Klute) on the San Diego Workshop, described elsewhere in this bulletin. Tracking group (Boris Mangano). The level of sophistication of the tracking software increased significantly over the last few months: V0 (K0 and Λ) reconstr...

  12. Transionospheric Propagation Code (TIPC)

    Science.gov (United States)

    Roussel-Dupre, Robert; Kelley, Thomas A.

    1990-10-01

    The Transionospheric Propagation Code is a computer program developed at Los Alamos National Lab to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in FORTRAN 77, runs interactively and was designed to be as machine independent as possible. A menu format in which the user is prompted to supply appropriate parameters for a given task has been adopted for the input, while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, delta times of arrival (DTOA) study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise of specified spectral density to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is also available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of DTOAs vs TECs for a specified pair of receivers.
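
    A hedged numpy sketch of the first two tasks, using a toy pulse, a toy channel impulse response and a toy narrowband Gaussian filter (none of them TIPC's actual models), to show the convolution structure described above:

        import numpy as np

        fs = 200e6                                  # sample rate (Hz), illustrative
        t = np.arange(0, 5e-6, 1.0 / fs)

        # Task 1: an analytic input pulse convolved with a channel impulse response.
        pulse = np.exp(-((t - 1e-6) ** 2) / (2 * (50e-9) ** 2))  # Gaussian pulse
        h = np.exp(-t / 0.5e-6)                                  # toy dispersion
        h /= h.sum()
        received = np.convolve(pulse, h)[: t.size]

        # Optionally add Gaussian-distributed white noise to the signal.
        rng = np.random.default_rng(1)
        received += 0.01 * rng.standard_normal(received.size)

        # Task 2: simulate detection with a narrowband Gaussian filter near 60 MHz.
        f = np.fft.rfftfreq(t.size, 1.0 / fs)
        gauss = np.exp(-((f - 60e6) ** 2) / (2 * (5e6) ** 2))
        detected = np.fft.irfft(np.fft.rfft(received) * gauss, n=t.size)
        print(detected.max())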

  13. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the

  14. The FLUKA Code: Description And Benchmarking

    Energy Technology Data Exchange (ETDEWEB)

    Battistoni, Giuseppe; Muraro, S.; Sala, Paola R.; /INFN, Milan; Cerutti, Fabio; Ferrari, A.; Roesler, Stefan; /CERN; Fasso, A.; /SLAC; Ranft, J.; /Siegen U.

    2007-09-18

    The physics models implemented in the FLUKA code are briefly described, with emphasis on hadronic interactions. Examples of the capabilities of the code are presented, including basic (thin target) and complex benchmarks.

  15. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
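
    The three container failure models quoted above map onto simple cumulative failure fractions. A hedged Python sketch with illustrative parameters, not DUST-MS input conventions (assumes numpy and scipy):

        import numpy as np
        from scipy import stats

        def failed_fraction(t, model, t_fail=100.0, t_start=50.0, t_end=150.0,
                            mean=100.0, std=20.0):
            """Cumulative fraction of failed containers under the three models."""
            t = np.asarray(t, dtype=float)
            if model == "instantaneous":    # all containers fail at once
                return (t >= t_fail).astype(float)
            if model == "uniform":          # linear failure between start and end
                return np.clip((t - t_start) / (t_end - t_start), 0.0, 1.0)
            if model == "gaussian":         # failures follow a normal distribution
                return stats.norm.cdf(t, loc=mean, scale=std)
            raise ValueError(model)

        years = np.linspace(0.0, 200.0, 5)
        for m in ("instantaneous", "uniform", "gaussian"):
            print(m, failed_fraction(years, m).round(3))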

  16. Linear vection in virtual environments can be strengthened by discordant inertial input.

    Science.gov (United States)

    Wright, W Geoffrey

    2009-01-01

    Visual and gravitoinertial sensory inputs are integrated by the central nervous system to provide a compelling and veridical sense of spatial orientation and motion. Although it's known that visual input alone can drive this perception, questions remain as to how vestibular/proprioceptive (i.e. inertial) inputs integrate with visual input to affect this process. This was investigated further by combining sinusoidal vertical linear oscillation (5 amplitudes between 0 m and +/-0.8 m) with two different virtual visual inputs. Visual scenes were viewed in a large field-of-view head-mounted display (HMD), which depicted an enriched, hi-res, dynamic image of the actual test chamber from the perspective of a subject seated in the linear motion device. The scene either depicted horizontal (+/-0.7 m) or vertical (+/-0.8 m) linear 0.2 Hz sinusoidal translation. Horizontal visual motion with vertical inertial motion represents a 90 degrees spatial shift. Vertical visual motion with vertical inertial motion whereby the highest physical point matches the lowest visual point and vice versa represents a 180 degrees temporal shift, i.e. the opposite of what one experiences in reality. Inertial-only stimulation without visual input was identified as vertical linear oscillation with accurate reports of acceleration peaks and troughs, but a slight tendency to underestimate amplitude. Visual-only (stationary) stimulation was less compelling than combined visual+inertial conditions. In visual+inertial conditions, visual input dominated the direction of perceived self-motion; however, increasing the inertial amplitude increased how compelling this non-veridical perception was. That is, perceived vertical self-motion was most compelling when inertial stimulation was maximal, despite perceiving "up" when physically "down" and vice versa. Similarly, perceived horizontal self-motion was most compelling when vertical inertial motion was at maximum amplitude. "Cross-talk" between visual and

  17. PHYSICS

    CERN Multimedia

    D. Futyan

    A lot has transpired on the “Physics” front since the last CMS Bulletin. The summer was filled with preparations of new Monte Carlo samples based on CMSSW_3, the finalization of all the 10 TeV physics analyses [in total 50 analyses were approved] and the preparations for the Physics Week in Bologna. A couple weeks later, the “October Exercise” commenced and ran through an intense two-week period. The Physics Days in October were packed with a number of topics that are relevant to data taking, in a number of “mini-workshops”: the luminosity measurement, the determination of the beam spot and the measurement of the missing transverse energy (MET) were the three main topics.  Physics Week in Bologna The second physics week in 2009 took place in Bologna, Italy, on the week of Sep 7-11. The aim of the week was to review and establish how ready we are to do physics with the early collisions at the LHC. The agenda of the week was thus pac...

  18. PHYSICS

    CERN Multimedia

    D. Futyan

    A lot has transpired on the “Physics” front since the last CMS Bulletin. The summer was filled with preparations of new Monte Carlo samples based on CMSSW_3, the finalization of all the 10 TeV physics analyses [in total 50 analyses were approved] and the preparations for the Physics Week in Bologna. A couple weeks later, the “October Exercise” commenced and ran through an intense two-week period. The Physics Days in October were packed with a number of topics that are relevant to data taking, in a number of “mini-workshops”: the luminosity measurement, the determination of the beam spot and the measurement of the missing transverse energy (MET) were the three main topics.   Physics Week in Bologna The second physics week in 2009 took place in Bologna, Italy, on the week of Sep 7-11. The aim of the week was to review and establish (we hoped) the readiness of CMS to do physics with the early collisions at the LHC. The agenda of the...

  19. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  20. Input data to run Landis-II

    Science.gov (United States)

    DeJager, Nathan R.

    2017-01-01

    The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location of each mapcode, b) Cohort_ages, which includes the ages for each tree species-cohort within each mapcode, c) Ecoregions, which consist of different regions of soils and climate, d) Ecoregion_codes, which define the ecoregions, and e) Species_Params, which link the potential establishment and growth rates for each species with each ecoregion.

  1. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  2. PHYSICS

    CERN Multimedia

    Joe Incandela

    There have been two plenary physics meetings since the December CMS week. The year started with two workshops, one on the measurements of the Standard Model necessary for “discovery physics” as well as one on the Physics Analysis Toolkit (PAT). Meanwhile the tail of the “2007 analyses” is going through the last steps of approval. It is expected that by the end of January all analyses will have converted to using the data from CSA07 – which include the effects of miscalibration and misalignment. January Physics Days The first Physics Days of 2008 took place on January 22-24. The first two days were devoted to comprehensive reports from the Detector Performance Groups (DPG) and Physics Objects Groups (POG) on their planning and readiness for early data-taking followed by approvals of several recent studies. Highlights of POG presentations are included below while the activities of the DPGs are covered elsewhere in this bulletin. January 24th was devo...

  3. PHYSICS

    CERN Multimedia

    J. Incandela

    The all-plenary format of the CMS week in Cyprus gave the opportunity to the conveners of the physics groups to present the plans of each physics analysis group for tackling early physics analyses. The presentations were complete, so all are encouraged to browse through them on the Web. There is a wealth of information on what is going on, by whom and on what basis and priority. The CMS week was followed by two CMS “physics events”, the ICHEP08 days and the physics days in July. These were two weeks dedicated either to the approval of all the results that would be presented at ICHEP08, or to the review of all the other Monte-Carlo based analyses that were carried out in the context of our preparations for analysis with the early LHC data (the so-called “2008 analyses”). All this was planned in the context of the beginning of a ramp down of these Monte Carlo efforts, in anticipation of data.  The ICHEP days are described below (agenda and talks at: http://indic...

  4. ITS version 5.0 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2004-06-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, and (3) parallel implementations of all ITS codes. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
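
    The record's description of "order-independent descriptive keywords" with "defaults and internal error checking" can be made concrete with a small sketch; the keywords and deck below are invented for illustration and are not actual ITS keywords:

      # Toy parser for an order-independent keyword input deck.
      DEFAULTS = {"energy-cutoff": 0.001, "histories": 10000, "geometry": "slab"}

      def parse_deck(text):
          params = dict(DEFAULTS)                # unspecified keywords keep defaults
          for line in text.splitlines():
              line = line.split("!")[0].strip()  # '!' starts a comment
              if not line:
                  continue
              key, value = line.split(None, 1)
              if key not in DEFAULTS:            # internal error checking
                  raise ValueError("unknown keyword: " + key)
              params[key] = type(DEFAULTS[key])(value)
          return params

      deck = "histories 50000\nenergy-cutoff 0.01"  # any order is accepted
      print(parse_deck(deck))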

  5. Network Coding

    Indian Academy of Sciences (India)

    Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming...
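
    The key observation can be seen in the classic two-receiver example, sketched below in Python (payloads invented): a relay forwards the XOR of two packets, and each receiver combines the coded packet with the packet it already holds.

      import os

      a = os.urandom(8)   # packet already held by receiver 1, wanted by receiver 2
      b = os.urandom(8)   # packet already held by receiver 2, wanted by receiver 1

      # The relay forwards one coded packet instead of two plain ones.
      coded = bytes(x ^ y for x, y in zip(a, b))

      # Each receiver XORs the coded packet with the packet it already has.
      recovered_b = bytes(x ^ y for x, y in zip(coded, a))
      recovered_a = bytes(x ^ y for x, y in zip(coded, b))
      assert recovered_a == a and recovered_b == b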

  6. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS) at the Institute for School and Learning, Professionshøjskolen Metropol; and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design, design thinking and design pedagogy, from Forskningslab: It og Læringsdesign (ILD-LAB) at the Department of Communication and Psychology, Aalborg University in Copenhagen. We followed the Coding Class project and carried out its evaluation and documentation between November 2016 and May 2017...

  7. Network Coding

    Indian Academy of Sciences (India)

    K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp. 604-621. Permanent link: http://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  8. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes – The Sipser–Spielman Construction. Priti Shankar, General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  9. Physics

    CERN Document Server

    Cullen, Katherine

    2005-01-01

    Defined as the scientific study of matter and energy, physics explains how all matter behaves. Separated into modern and classical physics, the study attracts both experimental and theoretical physicists. From the discovery of the process of nuclear fission to an explanation of the nature of light, from the theory of special relativity to advancements made in particle physics, this volume profiles 10 pioneers who overcame tremendous odds to make significant breakthroughs in this heavily studied branch of science. Each chapter contains relevant information on the scientist's childhood, research, discoveries, and lasting contributions to the field and concludes with a chronology and a list of print and Internet references specific to that individual.

  10. SDR Input Power Estimation Algorithms

    Science.gov (United States)

    Nappier, Jennifer M.; Briones, Janette C.

    2013-01-01

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.
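
    A minimal sketch of the first of these, a straight-line estimator fitted by least squares, is given below in Python; the calibration numbers are invented placeholders, not SCAN Testbed data:

      import numpy as np

      # Hypothetical characterization data: digital AGC reading, temperature
      # (deg C), and the known SDR input power (dBm) applied during testing.
      agc   = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
      temp  = np.array([20.0, 22.0, 21.0, 23.0, 22.0])
      power = np.array([-80.0, -75.0, -70.0, -65.0, -60.0])

      # Fit power = c0 + c1*agc + c2*temp by linear least squares.
      design = np.column_stack([np.ones_like(agc), agc, temp])
      coeffs, *_ = np.linalg.lstsq(design, power, rcond=None)

      def estimate_input_power(agc_reading, temperature):
          """Straight-line estimate of SDR input power from AGC and temperature."""
          return coeffs @ np.array([1.0, agc_reading, temperature])

      print(estimate_input_power(15.0, 21.5))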

  11. SDR input power estimation algorithms

    Science.gov (United States)

    Briones, J. C.; Nappier, J. M.

    The General Dynamics (GD) S-Band software defined radio (SDR) in the Space Communications and Navigation (SCAN) Testbed on the International Space Station (ISS) provides experimenters an opportunity to develop and demonstrate experimental waveforms in space. The SDR has an analog and a digital automatic gain control (AGC), and the response of the AGCs to changes in SDR input power and temperature was characterized prior to the launch and installation of the SCAN Testbed on the ISS. The AGCs were used to estimate the SDR input power and SNR of the received signal, and the characterization results showed a nonlinear response to SDR input power and temperature. In order to estimate the SDR input power from the AGCs, three algorithms were developed and implemented on the ground software of the SCAN Testbed: a linear straight-line estimator, which uses the digital AGC and the temperature to estimate the SDR input power over a narrower section of the input power range; a linear adaptive filter algorithm, which uses both AGCs and the temperature to estimate the SDR input power over a wide input power range; and an algorithm based on neural networks, designed to estimate the input power over a wide range. This paper describes the algorithms in detail and their associated performance in estimating the SDR input power.

  12. 77 FR 34020 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2012-06-08

    ... NFPA 820 Standard for Fire Protection in Wastewater Treatment and Collection Facilities (7/8/2013). NFPA 850--2010 Recommended... the Storage and Handling of Cellulose Nitrate Film (6/22/2012). NFPA 45--2011 Standard on Fire...

  13. 77 FR 67628 - National Fire Codes: Request for Public Input for Revision of Codes and Standards

    Science.gov (United States)

    2012-11-13

    ... NFPA 820--2012 Standard for Fire Protection in Wastewater Treatment and Collection Facilities (7/8/2013)... Storage and Handling of Cellulose Nitrate Film. NFPA 45--2011 Standard on Fire Protection for... (1/4/2013)

  14. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  15. RIPL - Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    Science.gov (United States)

    Capote, R.; Herman, M.; Obložinský, P.; Young, P. G.; Goriely, S.; Belgya, T.; Ignatyuk, A. V.; Koning, A. J.; Hilaire, S.; Plujko, V. A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M. B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V. M.; Reffo, G.; Sin, M.; Soukhovitskii, E. Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains

  16. RIPL-Reference Input Parameter Library for Calculation of Nuclear Reactions and Nuclear Data Evaluations

    Energy Technology Data Exchange (ETDEWEB)

    Capote, R.; Herman, M.; Oblozinsky, P.; Young, P.G.; Goriely, S.; Belgya, T.; Ignatyuk, A.V.; Koning, A.J.; Hilaire, S.; Plujko, V.A.; Avrigeanu, M.; Bersillon, O.; Chadwick, M.B.; Fukahori, T.; Ge, Zhigang; Han, Yinlu; Kailas, S.; Kopecky, J.; Maslov, V.M.; Reffo, G.; Sin, M.; Soukhovitskii, E.Sh.; Talou, P.

    2009-12-01

    We describe the physics and data included in the Reference Input Parameter Library, which is devoted to input parameters needed in calculations of nuclear reactions and nuclear data evaluations. Advanced modelling codes require substantial numerical input, therefore the International Atomic Energy Agency (IAEA) has worked extensively since 1993 on a library of validated nuclear-model input parameters, referred to as the Reference Input Parameter Library (RIPL). A final RIPL coordinated research project (RIPL-3) was brought to a successful conclusion in December 2008, after 15 years of challenging work carried out through three consecutive IAEA projects. The RIPL-3 library was released in January 2009, and is available on the Web through http://www-nds.iaea.org/RIPL-3/. This work and the resulting database are extremely important to theoreticians involved in the development and use of nuclear reaction modelling (ALICE, EMPIRE, GNASH, UNF, TALYS) both for theoretical research and nuclear data evaluations. The numerical data and computer codes included in RIPL-3 are arranged in seven segments: MASSES contains ground-state properties of nuclei for about 9000 nuclei, including three theoretical predictions of masses and the evaluated experimental masses of Audi et al. (2003). DISCRETE LEVELS contains 117 datasets (one for each element) with all known level schemes, electromagnetic and γ-ray decay probabilities available from ENSDF in October 2007. NEUTRON RESONANCES contains average resonance parameters prepared on the basis of the evaluations performed by Ignatyuk and Mughabghab. OPTICAL MODEL contains 495 sets of phenomenological optical model parameters defined in a wide energy range. When there are insufficient experimental data, the evaluator has to resort to either global parameterizations or microscopic approaches. Radial density distributions to be used as input for microscopic calculations are stored in the MASSES segment. LEVEL DENSITIES contains

  17. PHYSICS

    CERN Multimedia

    Guenther Dissertori

    The time period between the last CMS week and this June was one of intense activity with numerous get-togethers targeted at addressing specific issues on the road to data-taking. The two series of workshops, namely the “En route to discoveries” series and the “Vertical Integration” meetings continued.   The first meeting of the “En route to discoveries” sequence (end 2007) had covered the measurements of the Standard Model signals as a necessary prerequisite to any claim of signals beyond the Standard Model. The second meeting took place during the Feb CMS week and concentrated on the commissioning of the Physics Objects, whereas the third occurred during the April Physics Week – and this time the theme was the strategy for key new physics signatures. Both of these workshops are summarized below. The vertical integration meetings also continued, with two DPG-physics get-togethers on jets and missing ET and on electrons and photons. ...

  18. PHYSICS

    CERN Multimedia

    Chris Hill

    2012-01-01

    The months that have passed since the last CMS Bulletin have been a very busy and exciting time for CMS physics. We have gone from observing the very first 8TeV collisions produced by the LHC to collecting a dataset of the collisions that already exceeds that recorded in all of 2011. All in just a few months! Meanwhile, the analysis of the 2011 dataset and publication of the subsequent results has continued. These results come from all the PAGs in CMS, including searches for the Higgs boson and other new phenomena, that have set the most stringent limits on an ever increasing number of models of physics beyond the Standard Model including dark matter, Supersymmetry, and TeV-scale gravity scenarios, top-quark physics where CMS has overtaken the Tevatron in the precision of some measurements, and bottom-quark physics where CMS made its first discovery of a new particle, the Ξ*0b baryon (candidate event pictured below). Image 2:  A Ξ*0b candidate event At the same time POGs and PAGs...

  19. PHYSICS

    CERN Multimedia

    D. Acosta

    2011-01-01

    Since the last CMS Week, all physics groups have been extremely active on analyses based on the full 2010 dataset, with most aiming for a preliminary measurement in time for the winter conferences. Nearly 50 analyses were approved in a “marathon” of approval meetings during the first two weeks of March, and the total number of approved analyses reached 90. The diversity of topics is very broad, including precision QCD, Top, and electroweak measurements, the first observation of single Top production at the LHC, the first limits on Higgs production at the LHC including the di-tau final state, and comprehensive searches for new physics in a wide range of topologies (so far all with null results unfortunately). Most of the results are based on the full 2010 pp data sample, which corresponds to 36 pb-1 at √s = 7 TeV. This report can only give a few of the highlights of a very rich physics program, which is listed below by physics group...

  20. Some introductory formalizations on the affine Hilbert spaces model of the origin of life. I. On quantum mechanical measurement and the origin of the genetic code: a general physical framework theory.

    Science.gov (United States)

    Balázs, András

    2006-08-01

    A physical (affine Hilbert spaces) frame is developed for the discussion of the interdependence of the problem of the origin (symbolic assignment) of the genetic code and a possible endophysical (a kind of "internal") quantum measurement in an explicit way, following the general considerations of Balázs (Balázs, A., 2003. BioSystems 70, 43-54; Balázs, A., 2004a. BioSystems 73, 1-11). Using the Everett (a dynamic) interpretation of quantum mechanics, both the individual code assignment and the concatenated linear symbolism is discussed. It is concluded that there arises a skewed quantal probability field, with a natural dynamic non-linearity in codon assignment within the physical model adopted (essentially corresponding to a much discussed biochemical frame of self-catalyzed binding (charging) of tRNA-like proto-RNAs (ribozymes) with amino acids). This dynamic specific molecular complex assumption of individual code assignment, and the divergence of the code in relation to symbol concatenation, are discussed: our frame supports the former and interprets the latter as single-type codon (triplet), also unambiguous and extended assignment, selection in molecular evolution, corresponding to converging towards the fixed point of the internal dynamics of measurement, either in a protein- or RNA-world. In this respect, the general physical consequence is the introduction of a fourth-rank semidiagonal energy tensor (see also Part II) ruling the internal dynamics as a second-order, in principle non-linear, one. It is inferred, as a summary, that if the problem under discussion could be expressed by the concepts of the Copenhagen interpretation of quantum mechanics in some yet not quite specified way, the matter would be particularly interesting with respect to both the origin of life and quantum mechanics, as a dynamically supported natural measurement-theoretical split between matter ("hardware") and (internal) symbolism ("software") aspects of living matter.

  1. Hierarchical Parallel Evaluation of a Hamming Code

    Directory of Open Access Journals (Sweden)

    Shmuel T. Klein

    2017-04-01

    The Hamming code is a well-known error correction code and can correct a single error in an input vector of size n bits by adding log n parity checks. A new parallel implementation of the code is presented, using a hierarchical structure of n processors in log n layers. All the processors perform similar simple tasks and need only a few bytes of internal memory.
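
    The single-error correction described above can be sketched for the smallest interesting case, a (7,4) Hamming code, in a few lines of Python (the paper's hierarchical parallel structure is not reproduced here):

      # Hamming(7,4): 4 data bits plus 3 parity checks; corrects one flipped bit.
      def encode(d):
          p1 = d[0] ^ d[1] ^ d[3]
          p2 = d[0] ^ d[2] ^ d[3]
          p3 = d[1] ^ d[2] ^ d[3]
          return [p1, p2, d[0], p3, d[1], d[2], d[3]]  # positions 1..7

      def correct(c):
          c = list(c)
          # The three re-computed parity checks spell out the 1-based
          # position of a single flipped bit (0 means no error detected).
          s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
          s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
          s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
          pos = s1 + 2 * s2 + 4 * s4
          if pos:
              c[pos - 1] ^= 1
          return c

      word = encode([1, 0, 1, 1])
      word[4] ^= 1                       # inject a single-bit error
      assert correct(word) == encode([1, 0, 1, 1])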

  2. An Exploration of Input Conditions for Virtual Teleportation

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Ruder, Kevin Vignola; Nilsson, Niels Chr.

    2017-01-01

    This poster describes a within-groups study (n=17) comparing participants' experience of three different input conditions for instigating virtual teleportation (button clicking, physical jumping, and fist clenching). The results indicated that teleportation by clicking a button generally required...

  3. Regional Hospital Input Price Indexes

    OpenAIRE

    Freeland, Mark S.; Schendler, Carol Ellen; Anderson, Gerard

    1981-01-01

    This paper describes the development of regional hospital input price indexes that is consistent with the general methodology used for the National Hospital Input Price Index. The feasibility of developing regional indexes was investigated because individuals inquired whether different regions experienced different rates of increase in hospital input prices. The regional indexes incorporate variations in cost-share weights (the amount an expense category contributes to total spending) associa...

  4. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin (Fermilab)

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  5. Generomak: Fusion physics, engineering and costing model

    Energy Technology Data Exchange (ETDEWEB)

    Delene, J.G.; Krakowski, R.A.; Sheffield, J.; Dory, R.A.

    1988-06-01

    A generic fusion physics, engineering and economics model (Generomak) was developed as a means of performing consistent analysis of the economic viability of alternative magnetic fusion reactors. The original Generomak model developed at Oak Ridge by Sheffield was expanded for the analyses of the Senior Committee on Environmental Safety and Economics of Magnetic Fusion Energy (ESECOM). This report describes the Generomak code as used by ESECOM. The input data used for each of the ten ESECOM fusion plants and the Generomak code output for each case is given. 14 refs., 3 figs., 17 tabs.

  6. [Prosody, speech input and language acquisition].

    Science.gov (United States)

    Jungheim, M; Miller, S; Kühn, D; Ptok, M

    2014-04-01

    In order to acquire language, children require speech input. The prosody of the speech input plays an important role. In most cultures adults modify their code when communicating with children. Compared to normal speech this code differs especially with regard to prosody. For this review a selective literature search in PubMed and Scopus was performed. Prosodic characteristics are a key feature of spoken language. By analysing prosodic features, children gain knowledge about underlying grammatical structures. Child-directed speech (CDS) is modified in a way that meaningful sequences are highlighted acoustically so that important information can be extracted from the continuous speech flow more easily. CDS is said to enhance the representation of linguistic signs. Taking into consideration what has previously been described in the literature regarding the perception of suprasegmentals, CDS seems to be able to support language acquisition due to the correspondence of prosodic and syntactic units. However, no findings have been reported, stating that the linguistically reduced CDS could hinder first language acquisition.

  7. Polar Codes

    Science.gov (United States)

    2014-12-01

    ... added by the decoder is K/ρ + Td. By the last assumption, Td and Te are both ≤ K/ρ, so the total latency added is between 2K/ρ and 4K/ρ. For example... better resolution near the decision point. Reference [12] showed that in decoding a (1024, 512) polar code, using 6-bit LLRs resulted in performance...

  8. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral-type pressurized water reactor. It can be applied to the analysis of design basis accidents of the SMART plant, including non-LOCA events and LOCAs (loss-of-coolant accidents). The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary system. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady-state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for numerical analysis, and the search method for state relationships are described. Then, the core power model, heat transfer models, physical models for various components, and control and trip models are explained.
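
    As a loose, hypothetical illustration of what a discretized balance equation looks like (TASS/SMR's actual models are far more elaborate), here is an explicit finite-difference step for 1D heat conduction in Python:

      # One explicit finite-difference step of dT/dt = alpha * d2T/dx2.
      def step(T, alpha, dx, dt):
          new = T[:]
          for i in range(1, len(T) - 1):
              new[i] = T[i] + alpha * dt / dx**2 * (T[i+1] - 2*T[i] + T[i-1])
          return new

      T = [300.0] * 10
      T[0], T[-1] = 400.0, 300.0         # fixed boundary temperatures
      for _ in range(100):
          T = step(T, alpha=1e-5, dx=0.01, dt=1.0)
      print(T)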

  9. Hypoxia in the St. Lawrence Estuary: How a Coding Error Led to the Belief that “Physics Controls Spatial Patterns”

    NARCIS (Netherlands)

    Bourgault, D.; Cyr, F.

    2015-01-01

    Two fundamental sign errors were found in a computer code used for studying the oxygen minimum zone (OMZ) and hypoxia in the Estuary and Gulf of St. Lawrence. These errors invalidate the conclusions drawn from the model, and call into question a proposed mechanism for generating the OMZ that challenges...

  10. Status of MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, and histogramming, as well as the MAD-MARS Beam Line Builder and the Graphical User Interface.

  11. PHYSICS

    CERN Multimedia

    C. Hill

    2012-01-01

      2012 has started off as a very busy year for the CMS Physics Groups. Planning for the upcoming higher luminosity/higher energy (8 TeV) operation of the LHC and relatively early Rencontres de Moriond are the high-priority activities for the group at the moment. To be ready for the coming 8-TeV data, CMS has made a concerted effort to perform and publish analyses on the 5 fb−1 dataset recorded in 2011. This has resulted in the submission of 16 papers already, including nine on the search for the Higgs boson. In addition, a number of preliminary results on the 2011 dataset have been released to the public. The Exotica and SUSY groups approved several searches for new physics in January, such as searches for W′ and exotic highly ionising particles. These were highlighted at a CERN seminar given on 24th  January. Many more analyses, from all the PAGs, including the newly formed SMP (Standard Model Physics) and FSQ (Forward and Small-x QCD), were approved in February. The ...

  12. PHYSICS

    CERN Multimedia

    C. Hill

    2012-01-01

      The period since the last CMS Bulletin has been historic for CMS Physics. The pinnacle of our physics programme was an observation of a new particle – a strong candidate for a Higgs boson – which has captured worldwide interest and made a profound impact on the very field of particle physics. At the time of the discovery announcement on 4 July, 2012, prominent signals were observed in the high-resolution H→γγ and H→ZZ(4l) modes. Corroborating excess was observed in the H→W+W– mode as well. The fermionic channel analyses (H→bb, H→ττ), however, yielded less than the Standard Model (SM) expectation. Collectively, the five channels established the signal with a significance of five standard deviations. With the exception of the diphoton channel, these analyses have all been updated in the last months and several new channels have been added. With improved analyses and more than twice the i...

  13. PHYSICS

    CERN Multimedia

    Darin Acosta

    2010-01-01

    The collisions last year at 900 GeV and 2.36 TeV provided the long-anticipated collider data to the CMS physics groups. Quite a lot has been accomplished in a very short time. Although the delivered luminosity was small, CMS was able to publish its first physics paper (with several more in preparation), and commence the commissioning of physics objects for future analyses. Many new performance results have been approved in advance of this CMS Week. One remarkable outcome has been the amazing agreement between out-of-the-box data and simulation at these low energies so early in the commissioning of the experiment. All of this is testament to the hard work and preparation conducted beforehand by many people in CMS. These analyses could not have happened without the dedicated work of the full collaboration on building and commissioning the detector, computing, and software systems combined with the tireless work of many to collect, calibrate and understand the data and our detector. To facilitate the efficien...

  14. PHYSICS

    CERN Multimedia

    the PAG conveners

    2011-01-01

    The delivered LHC integrated luminosity of more than 1 inverse femtobarn by summer and more than 5 by the end of 2011 has been a gold mine for the physics groups. With 2011 data, we have submitted or published 14 papers, 7 others are in collaboration-wide review, and 75 Physics Analysis Summaries have been approved already. They add to the 73 papers already published based on the 2010 and 2009 datasets. Highlights from each physics analysis group are described below. Heavy ions Many important results have been obtained from the first lead-ion collision run in 2010. The published measurements include the first ever indications of Υ excited state suppression (PRL synopsis), long-range correlation in PbPb, and track multiplicity over a wide η range. Preliminary results include the first ever measurement of isolated photons (showing no modification), J/ψ suppression including the separation of the non-prompt component, further study of jet fragmentation, nuclear modification factor...

  15. PHYSICS

    CERN Multimedia

    L. Demortier

    Physics-wise, the CMS week in December was dominated by discussions of the analyses that will be carried out in the “next six months”, i.e. while waiting for the first LHC collisions.  As presented in December, analysis approvals based on Monte Carlo simulation were re-opened, with the caveat that for this work to be helpful to the goals of CMS, it should be carried out using the new software (CMSSW_2_X) and associated samples.  By the end of the week, the goal for the physics groups was set to be the porting of our physics commissioning methods and plans, as well as the early analyses (based on an integrated luminosity in the range 10-100 pb-1), into this new software. Since December, the large data samples from CMSSW_2_1 were completed. A big effort by the production group gave a significant number of events over the end-of-year break – but also gave out the first samples with the fast simulation. Meanwhile, as mentioned in December, the arrival of 2_2 meant that ...

  16. PHYSICS

    CERN Multimedia

    D. Acosta

    2010-01-01

    The Physics Groups are actively engaged on analyses of the first data from the LHC at 7 TeV, targeting many results for the ICHEP conference taking place in Paris this summer. The first large batch of physics approvals is scheduled for this CMS Week, to be followed by four more weeks of approvals and analysis updates leading to the start of the conference in July. Several high priority analysis areas were organized into task forces to ensure sufficient coverage from the relevant detector, object, and analysis groups in the preparation of these analyses. Already some results on charged particle correlations and multiplicities in 7 TeV minimum bias collisions have been approved. Only one small detail remains before ICHEP: further integrated luminosity delivered by the LHC! Beyond the Standard Model measurements that can be done with these data, the focus changes to the search for new physics at the TeV scale and for the Higgs boson in the period after ICHEP. Particle Flow The PFT group is focusing on the ...

  17. Benchmarking Tokamak edge modelling codes

    Science.gov (United States)

    Contributors To The Efda-Jet Work Programme; Coster, D. P.; Bonnin, X.; Corrigan, G.; Kirnev, G. S.; Matthews, G.; Spence, J.; Contributors to the EFDA-JET work programme

    2005-03-01

    Tokamak edge modelling codes are in widespread use to interpret and understand existing experiments, and to make predictions for future machines. Little direct benchmarking has been done between the codes, and the users of the codes have tended to concentrate on different experimental machines. An important validation step is to compare the codes for identical scenarios. In this paper, two of the major edge codes, SOLPS (B2.5-Eirene) and EDGE2D-NIMBUS are benchmarked against each other. A set of boundary conditions, transport coefficients, etc. for a JET plasma were chosen, and the two codes were run on the same grid. Initially, large differences were seen in the resulting plasmas. These differences were traced to differing physics assumptions with respect to the parallel heat flux limits. Once these were switched off in SOLPS, or implemented and switched on in EDGE2D-NIMBUS, the remaining differences were small.

  18. Error coding simulations in C

    Science.gov (United States)

    Noble, Viveca K.

    1994-10-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
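
    The burst-randomizing role of the interleaver mentioned above is easy to see in a sketch; the block interleaver below (in Python, with toy sizes rather than the depth-5 RS frame sizes above) writes symbols row-wise and reads them column-wise, so a contiguous burst in the channel lands in different rows, i.e. different codewords:

      def interleave(data, depth, width):
          rows = [data[i * width:(i + 1) * width] for i in range(depth)]
          return [rows[r][c] for c in range(width) for r in range(depth)]

      def deinterleave(data, depth, width):
          cols = [data[i * depth:(i + 1) * depth] for i in range(width)]
          return [cols[c][r] for r in range(depth) for c in range(width)]

      stream = list(range(20))
      sent = interleave(stream, depth=5, width=4)
      # ... a burst of channel errors here hits at most one symbol per row ...
      assert deinterleave(sent, depth=5, width=4) == stream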

  19. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  20. Assessment on the characteristics of the analysis code for KALIMER PSDRS

    Energy Technology Data Exchange (ETDEWEB)

    Eoh, Jae Hyuk; Sim, Yoon Sub; Kim, Seong O.; Kim, Yeon Sik; Kim, Eui Kwang; Wi, Myung Hwan [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-04-01

    The PARS2 code was developed to analyze the RHR (Residual Heat Removal) system, especially the PSDRS (Passive Safety Decay Heat Removal System), of KALIMER. In this report, preliminary verification and sensitivity analyses for the PARS2 code were performed. From the results of the analyses, the PARS2 code shows good agreement with the experimental data of CRIEPI in the range of turbulent air-side flow, and the radiation heat transfer mode was also well predicted. In this verification work, it was found that the code calculation stopped at very low air flowrates, and the numerical scheme related to the convergence of the PARS2 code was adjusted to solve this problem. Through the sensitivity analysis of the PARS2 calculation results under changes of the input parameters, the pool-mixing coefficient related to the heat capacity of the structure in the system was improved so that the physical phenomenon can be well predicted. Also, the initial conditions for the code calculation, such as the hot and cold pool temperatures at the PSDRS commencing time, were set up by using a transient analysis with the COMMIX code, and the surface emissivity of the PSDRS was investigated and its permitted variation range was set up. From this study, the overall sensitivity characteristics of the PARS2 code were investigated, and the results of the sensitivity analyses can be used in the design of the RHR system of KALIMER. 14 refs., 28 figs., 2 tabs. (Author)

  1. Convolutional-Code-Specific CRC Code Design

    OpenAIRE

    Lou, Chung-Yu; Daneshrad, Babak; Wesel, Richard D.

    2015-01-01

    Cyclic redundancy check (CRC) codes check if a codeword is correctly received. This paper presents an algorithm to design CRC codes that are optimized for the code-specific error behavior of a specified feedforward convolutional code. The algorithm utilizes two distinct approaches to computing undetected error probability of a CRC code used with a specific convolutional code. The first approach enumerates the error patterns of the convolutional code and tests if each of them is detectable. Th...
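
    The detectability test at the heart of such an enumeration reduces to polynomial division over GF(2): an error pattern goes undetected exactly when it is divisible by the CRC generator. A sketch in Python, using a toy 3-bit generator rather than one from the paper:

      def remainder(bits, poly):
          """Remainder of bits divided by poly, both as GF(2) polynomials."""
          bits = list(bits)
          for i in range(len(bits) - len(poly) + 1):
              if bits[i]:
                  for j, p in enumerate(poly):
                      bits[i + j] ^= p
          return bits[len(bits) - (len(poly) - 1):]

      poly = [1, 0, 1, 1]                    # toy generator x^3 + x + 1
      pattern = [1, 0, 0, 0, 0, 0, 0, 1]     # error pattern x^7 + 1
      undetected = not any(remainder(pattern, poly))
      print(undetected)                      # True: x^3+x+1 divides x^7+1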

  2. PHYSICS

    CERN Multimedia

    Christopher Hill

    2013-01-01

Since the last CMS Bulletin, the CMS Physics Analysis Groups have completed more than 70 new analyses, many of which are based on the complete Run 1 dataset. In parallel the Snowmass whitepaper on the projected discovery potential of CMS for the HL-LHC has been completed, while the ECFA HL-LHC future physics studies have been summarised in a report and nine published benchmark analyses. Run 1 summary studies on b-tag and jet identification, quark-gluon discrimination and boosted topologies have been documented in BTV-13-001 and JME-13-002/005/006, respectively. The new tracking alignment and performance papers are being prepared for submission as well. The Higgs analysis group produced several new results including the search for ttH with H decaying to ZZ, WW, ττ+bb (HIG-13-019/020), where an excess of ~2.5σ is observed in the like-sign di-muon channel, and new searches for high-mass Higgs bosons (HIG-13-022). Searches for invisible Higgs decays have also been performed, both using the associ...

  3. PHYSICS

    CERN Multimedia

    C. Hill

    2013-01-01

    In the period since the last CMS Bulletin, the LHC – and CMS – have entered LS1. During this time, CMS Physics Analysis Groups have performed more than 40 new analyses, many of which are based on the complete 8 TeV dataset delivered by the LHC in 2012 (and in some cases on the full Run 1 dataset). These results were shown at, and well received by, several high-profile conferences in the spring of 2013, including the inaugural meeting of the Large Hadron Collider Physics Conference (LHCP) in Barcelona, and the 26th International Symposium on Lepton Photon Interactions at High Energies (LP) in San Francisco. In parallel, there have been significant developments in preparations for Run 2 of the LHC and on “future physics” studies for both Phase 1 and Phase 2 upgrades of the CMS detector. The Higgs analysis group produced five new results for LHCP including a new H-to-bb search in VBF production (HIG-13-011), ttH with H to γγ...

  4. PHYSICS

    CERN Multimedia

    J. D'Hondt

    The Electroweak and Top Quark Workshop (16-17th of July) A Workshop on Electroweak and Top Quark Physics, dedicated to early measurements, took place on 16th-17th July. We had more than 40 presentations at the Workshop, which was an important milestone for 2007 physics analyses in the EWK and TOP areas. The Standard Model has been tested empirically by many previous experiments. Observables which are nowadays known with high precision will play a major role for data-based CMS calibrations. A typical example is the use of the Z to monitor electron and muon reconstruction in di-lepton inclusive samples. Another example is the use of the W mass as a constraint for di-jets in the kinematic fitting of top-quark events, providing information on the jet energy scale. The predictions of the Standard Model, as far as proton collisions at the LHC are concerned, are accurate to such a level that the production of W/Z and top-quark events can be used as a powerful tool to commission our experiment. On the other hand the measure...

  5. PHYSICS

    CERN Multimedia

    C. Hill

    2013-01-01

    The period since the last CMS bulletin has seen the end of proton collisions at a centre-of-mass energy 8 TeV, a successful proton-lead collision run at 5 TeV/nucleon, as well as a “reference” proton run at 2.76 TeV. With these final LHC Run 1 datasets in hand, CMS Physics Analysis Groups have been busy analysing these data in preparation for the winter conferences. Moreover, despite the fact that the pp run only concluded in mid-December (and there was consequently less time to complete data analyses), CMS again made a strong showing at the Rencontres de Moriond in La Thuile (EW and QCD) where nearly 40 new results were presented. The highlight of these preliminary results was the eagerly anticipated updated studies of the properties of the Higgs boson discovered in July of last year. Meanwhile, preparations for Run 2 and physics performance studies for Phase 1 and Phase 2 upgrade scenarios are ongoing. The Higgs analysis group produced updated analyses on the full Run 1 dataset (~25 f...

  6. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  7. Concatenated codes with convolutional inner codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Thommesen, Christian; Zyablov, Viktor

    1988-01-01

    The minimum distance of concatenated codes with Reed-Solomon outer codes and convolutional inner codes is studied. For suitable combinations of parameters the minimum distance can be lower-bounded by the product of the minimum distances of the inner and outer codes. For a randomized ensemble of concatenated codes a lower bound of the Gilbert-Varshamov type is proved...
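
    In symbols (a standard statement of this product bound, in our notation rather than the paper's), for a concatenated code C with outer code C_out and inner code C_in:

      d_{\min}(C) \ \ge\ d_{\min}(C_{\mathrm{out}}) \cdot d_{\min}(C_{\mathrm{in}})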

  8. Integrated Fuel-Coolant Interaction (IFCI 6.0) code. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Davis, F.J.; Young, M.F. [Sandia National Labs., Albuquerque, NM (United States)

    1994-04-01

    The Integrated Fuel-Coolant Interaction (IFCI) computer code is being developed at Sandia National Laboratories to investigate the fuel-coolant interaction (FCI) problem at large scale using a two-dimensional, four-field hydrodynamic framework and physically based models. IFCI will be capable of treating all major FCI processes in an integrated manner. This document is a product of the effort to generate a stand-alone version of IFCI, IFCI 6.0. The User's Manual describes in detail the hydrodynamic method and physical models used in IFCI 6.0. Appendix A is an input manual, provided for the creation of working decks.

  9. Integrated Fuel-Coolant Interaction (IFCI 7.0) Code User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Michael F.

    1999-05-01

    The integrated fuel-coolant interaction (IFCI) computer code is being developed at Sandia National Laboratories to investigate the fuel-coolant interaction (FCI) problem at large scale using a two-dimensional, three-field hydrodynamic framework and physically based models. IFCI will be capable of treating all major FCI processes in an integrated manner. This document is a description of IFCI 7.0. The user's manual describes the hydrodynamic method and physical models used in IFCI 7.0. Appendix A is an input manual provided for the creation of working decks.

  10. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    Science.gov (United States)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
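
    A toy version of the idea, assuming a made-up input line and the +/- syntax quoted above, can be sketched in Python: each Monte Carlo realization replaces every "value +/- tol" field with a random draw before the input file is handed to the simulation code.

      import random
      import re

      line = "wall_temperature = 5.25 +/- 0.01"   # hypothetical input field

      def sample_line(text, rng):
          """Replace each 'value +/- tol' with a draw from [value-tol, value+tol]."""
          def draw(m):
              value, tol = float(m.group(1)), float(m.group(2))
              return repr(rng.uniform(value - tol, value + tol))
          return re.sub(r"(\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)", draw, text)

      rng = random.Random(0)
      for _ in range(3):                          # three Monte Carlo realizations
          print(sample_line(line, rng))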

  11. MIG version 0.0 model interface guidelines: Rules to accelerate installation of numerical models into any compliant parent code

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-08-01

    A set of model interface guidelines, called MIG, is presented as a means by which any compliant numerical material model can be rapidly installed into any parent code without having to modify the model subroutines. Here, "model" usually means a material model such as one that computes stress as a function of strain, though the term may be extended to any numerical operation. "Parent code" means a hydrocode, finite element code, etc. which uses the model and enforces, say, the fundamental laws of motion and thermodynamics. MIG requires the model developer (who creates the model package) to specify model needs in a standardized but flexible way. MIG includes a dictionary of technical terms that allows developers and parent code architects to share a common vocabulary when specifying field variables. For portability, database management is the responsibility of the parent code. Input/output occurs via structured calling arguments. As much model information as possible (such as the lists of required inputs, as well as lists of precharacterized material data and special needs) is supplied by the model developer in an ASCII text file. Every MIG-compliant model also has three required subroutines to check data, to request extra field variables, and to perform model physics. To date, the MIG scheme has proven flexible in beta installations of a simple yield model, plus a more complicated viscodamage yield model, three electromechanical models, and a complicated anisotropic microcrack constitutive model. The MIG yield model has been successfully installed using identical subroutines in three vectorized parent codes and one parallel C++ code, all predicting comparable results. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, thereby reducing the cost of installing and sharing models in diverse new codes.
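
    A loose Python analogue of the scheme (names and the material law below are invented for illustration, not MIG's actual subroutine names) makes the division of labour concrete: the parent code owns the database and calls three required entry points on the model package.

      class ElasticModel:
          """Sketch of a MIG-style model package with three required routines."""

          def check_data(self, params):
              # Validate user material data before the run starts.
              if params["youngs_modulus"] <= 0:
                  raise ValueError("Young's modulus must be positive")

          def request_extra_variables(self):
              # Tell the parent code which extra field variables to allocate.
              return ["plastic_strain"]

          def run_physics(self, strain, params, state):
              # Compute stress from strain; 'state' is owned by the parent code.
              return params["youngs_modulus"] * strain

      model = ElasticModel()
      model.check_data({"youngs_modulus": 200e9})
      print(model.run_physics(1e-3, {"youngs_modulus": 200e9}, state={}))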

  12. M input radix p optical logic operations in a photorefractive BaTiO(3) crystal.

    Science.gov (United States)

    Park, H K; Kwon, W H; Lee, K Y; Eom, S Y

    1990-09-10

    A generalized coding algorithm for multiple-input, multiple-valued logic operations is proposed. This algorithm can reduce the unused pixels generated in the encoding process for an odd number of inputs. Using circle-type coded patterns and the correlation property of a photorefractive BaTiO(3) crystal, m-input radix-p logic operations are made possible. Detailed descriptions and advantages of the proposed method are presented, and its effectiveness is demonstrated in the case of a three-input binary system. Uses of the proposed method, including a binary half-adder/subtractor and a binary multiplier, are given.

  13. PHYSICS

    CERN Multimedia

    V.Ciulli

    2011-01-01

    The main programme of the Physics Week held between 16th and 20th May was a series of topology-oriented workshops on di-leptons, di-photons, inclusive W, and all-hadronic final states. The goal of these workshops was to reach a common understanding for the set of objects (ID, cleaning...), the handling of pile-up, calibration, efficiency and purity determination, as well as to revisit critical common issues such as the trigger. Di-lepton workshop Most analysis groups use a di-lepton trigger or a combination of single and di-lepton triggers in 2011. Some groups need to collect leptons with as low PT as possible with strong isolation and identification requirements as for Higgs into WW at low mass, others with intermediate PT values as in Drell-Yan studies, or high PT as in the Exotica group. Electron and muon reconstruction, identification and isolation, was extensively described in the workshop. For electrons, VBTF selection cuts for low PT and HEEP cuts for high PT were discussed, as well as more complex d...

  14. MINET (momentum integral network) code documentation

    Energy Technology Data Exchange (ETDEWEB)

    Van Tuyle, G J; Nepsee, T C; Guppy, J G [Brookhaven National Lab., Upton, NY (USA)

    1989-12-01

    The MINET computer code, developed for the transient analysis of fluid flow and heat transfer, is documented in this four-part reference. In Part 1, the MINET models, which are based on a momentum integral network method, are described. The various aspects of utilizing the MINET code are discussed in Part 2, The User's Manual. The third part is a code description, detailing the basic code structure and the various subroutines and functions that make up MINET. In Part 4, example input decks, as well as recent validation studies and applications of MINET are summarized. 32 refs., 36 figs., 47 tabs.

  15. Tandem Mirror Reactor Systems Code (Version I)

    Energy Technology Data Exchange (ETDEWEB)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.; Barrett, R.J.; Gorker, G.E.; Spampinaton, P.T.; Bulmer, R.H.; Dorn, D.W.; Perkins, L.J.; Ghose, S.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  16. Development and Verification of Smoothed Particle Hydrodynamics Code for Analysis of Tsunami near NPP

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Young Beom; Kim, Eung Soo [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    It becomes more complicated when considering the shape and phase of the ground below the seawater. Therefore, different approaches are required to precisely analyze the behavior of tsunami. This paper introduces on-going code development activities at SNU based on an unconventional mesh-free fluid analysis method called Smoothed Particle Hydrodynamics (SPH) and its verification work with some practice simulations. This paper summarizes the on-going development and verification activities on the Lagrangian mesh-free SPH code in SNU. The newly developed code covers the equations of motion and the heat conduction equation so far, and verification of each model is completed. In addition, parallel computation using GPU is now possible, and a GUI is also prepared. If users change the input geometry or input values, they can simulate various conditions and geometries. The SPH method has large advantages and potential in modeling of free surfaces, highly deformable geometries, and multi-phase problems that traditional grid-based codes have difficulty analyzing. Therefore, by incorporating more complex physical models such as turbulent flow, phase change, two-phase flow, and even solid mechanics, the application of the current SPH code is expected to be much more extended, including molten fuel behavior in severe accidents.
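
    As an illustration of the method's basic ingredient (a sketch under stated assumptions, not the SNU code itself), SPH estimates the density at each particle by summing neighbour masses weighted by a smoothing kernel; the cubic spline below is the standard 1-D choice:

      import numpy as np

      def cubic_spline_kernel(r, h):
          """Standard 1-D cubic spline SPH kernel with smoothing length h."""
          q = np.abs(r) / h
          w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
              np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
          return (2.0 / (3.0 * h)) * w          # 1-D normalization

      def density(x, m, h):
          """rho_i = sum_j m_j W(x_i - x_j, h): the basic SPH density estimate."""
          dx = x[:, None] - x[None, :]
          return (m[None, :] * cubic_spline_kernel(dx, h)).sum(axis=1)

      x = np.linspace(0.0, 1.0, 101)            # particle positions
      m = np.full_like(x, 0.01)                 # equal particle masses
      print(density(x, m, h=0.02)[50])          # ~1.0 away from the boundaries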

  17. Securing mobile code.

    Energy Technology Data Exchange (ETDEWEB)

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik

    2004-10-01

    If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called ...
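
    For flavour only, and not one of the report's specific schemes, the toy below shows the weakest form of data-segment obfuscation, a reversible XOR mask; it illustrates why obfuscation can at best slow down reverse engineering:

      import os

      def obfuscate(data: bytes, key: bytes) -> bytes:
          """XOR a data segment with a repeating key (toy, trivially reversible)."""
          return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

      key = os.urandom(8)
      hidden = obfuscate(b"proprietary constants", key)
      assert obfuscate(hidden, key) == b"proprietary constants"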

  18. Introduction to the simulation with MCNP Monte Carlo code and its applications in Medical Physics; Introduccion a la simulacion con el codigo de Monte Carlo MCNP y sus aplicaciones en Fisica Medica

    Energy Technology Data Exchange (ETDEWEB)

    Parreno Z, F.; Paucar J, R.; Picon C, C. [Instituto Peruano de Energia Nuclear, Av. Canada 1470, San Borja, Lima 41 (Peru)

    1998-12-31

    Monte Carlo simulation is a tool that Medical Physics relies on for the development of its research; interest in this tool is growing, as may be observed in the main scientific journals for the years 1995-1997, where more than 27% of the papers deal with Monte Carlo and/or its applications in radiation transport. In the Peruvian Institute of Nuclear Energy we are implementing and making use of the MCNP4 and EGS4 codes. In this work the general features of the Monte Carlo method and its most useful applications in Medical Physics are presented. Likewise, a simulation is made of the calculation of isodose curves in an interstitial treatment with Ir-192 wires in a mammary gland carcinoma. (Author)
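
    Not MCNP itself, but a minimal sketch of the underlying idea: sample exponentially distributed photon free paths and score transmission through a slab, then compare with the analytic answer (the attenuation coefficient and slab thickness are purely illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      mu = 0.096                     # assumed linear attenuation coefficient, 1/cm
      thickness = 5.0                # slab thickness, cm
      paths = rng.exponential(1.0 / mu, size=100_000)   # photon free path lengths
      mc = (paths > thickness).mean()                   # fraction transmitted
      print(mc, np.exp(-mu * thickness))                # MC estimate vs analytic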

  19. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including the simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for Space Weather Program) is discussed.
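
    Electromagnetic particle codes of this kind advance particle velocities with a rotation-based leapfrog step; the sketch below shows the standard Boris algorithm, offered as an illustration rather than a claim about TRISTAN's exact implementation:

      import numpy as np

      def boris_push(v, E, B, q_over_m, dt):
          """One Boris step: half electric kick, magnetic rotation, half kick."""
          v_minus = v + 0.5 * q_over_m * dt * E
          t = 0.5 * q_over_m * dt * B
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)
          return v_plus + 0.5 * q_over_m * dt * E

      v = np.array([1.0e5, 0.0, 0.0])                  # m/s
      E = np.zeros(3)                                  # V/m
      B = np.array([0.0, 0.0, 1.0e-8])                 # T
      for _ in range(100):
          v = boris_push(v, E, B, q_over_m=-1.76e11, dt=1.0e-3)
      print(np.linalg.norm(v))   # |v| preserved by the rotation (E = 0 here)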

  20. A surface definition code for turbine blade surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S L [Michigan Technological Univ., Houghton, MI (United States); Oryang, D; Ho, M J [Tuskegee Univ., AL (United States)

    1992-05-01

    A numerical interpolation scheme has been developed for generating the three-dimensional geometry of wind turbine blades. The numerical scheme consists of (1) creating the frame of the blade through the input of two or more airfoils at some specific spanwise stations and then scaling and twisting them according to the prescribed distributions of chord, thickness, and twist along the span of the blade; (2) transforming the physical coordinates of the blade frame into a computational domain that complies with the interpolation requirements; and finally (3) applying the bi-tension spline interpolation method, in the computational domain, to determine the coordinates of any point on the blade surface. Detailed descriptions of the overall approach and philosophy of the code development are given, along with the operation of the code. To show the usefulness of the bi-tension spline interpolation code developed, two examples are given, namely CARTER and MICON blade surface generation. Numerical results are presented in both graphic and data forms. The solutions obtained in this work show that the computer code developed can be a powerful tool for generating the surface coordinates for any three-dimensional blade.
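
    A minimal sketch of step (1) above, scaling and twisting an airfoil section according to prescribed spanwise distributions, with plain linear interpolation standing in for the bi-tension splines of the actual code (the airfoil coordinates and schedules are made up):

      import numpy as np

      def place_section(airfoil_xy, stations, chords, twists_deg, s):
          """Scale and twist a unit-chord airfoil at spanwise position s, using
          the prescribed chord and twist distributions (linear interpolation
          here; the actual code uses bi-tension splines)."""
          c = np.interp(s, stations, chords)
          t = np.radians(np.interp(s, stations, twists_deg))
          rot = np.array([[np.cos(t), -np.sin(t)],
                          [np.sin(t),  np.cos(t)]])
          return (c * np.asarray(airfoil_xy)) @ rot.T

      airfoil = [[1.0, 0.0], [0.5, 0.06], [0.0, 0.0], [0.5, -0.04]]   # made up
      print(place_section(airfoil, [0.0, 0.5, 1.0], [1.2, 0.9, 0.4],
                          [20.0, 8.0, 2.0], s=0.3))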

  1. Physical Map Location of the Multicopy Genes Coding for Ammonia Monooxygenase and Hydroxylamine Oxidoreductase in the Ammonia-Oxidizing Bacterium Nitrosomonas sp. Strain ENI-11

    Science.gov (United States)

    Hirota, Ryuichi; Yamagata, Akira; Kato, Junichi; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao

    2000-01-01

    Pulsed-field gel electrophoresis of PmeI digests of the Nitrosomonas sp. strain ENI-11 chromosome produced four bands ranging from 1,200 to 480 kb in size. Southern hybridizations suggested that a 487-kb PmeI fragment contained two copies of the amoCAB genes, coding for ammonia monooxygenase (designated amoCAB1 and amoCAB2), and three copies of the hao gene, coding for hydroxylamine oxidoreductase (hao1, hao2, and hao3). In this DNA fragment, amoCAB1 and amoCAB2 were about 390 kb apart, while hao1, hao2, and hao3 were separated by at least about 100 kb from each other. Interestingly, hao1 and hao2 were located relatively close to amoCAB1 and amoCAB2, respectively. DNA sequence analysis revealed that hao1 and hao2 shared 160 identical nucleotides immediately upstream of each translation initiation codon. However, hao3 showed only 30% nucleotide identity in the 160-bp corresponding region. PMID:10633121

  2. Sensorimotor transformation via sparse coding

    Science.gov (United States)

    Takiyama, Ken

    2015-01-01

    Sensorimotor transformation is indispensable to the accurate motion of the human body in daily life. For instance, when we grasp an object, the distance from our hands to an object needs to be calculated by integrating multisensory inputs, and our motor system needs to appropriately activate the arm and hand muscles to minimize the distance. The sensorimotor transformation is implemented in our neural systems, and recent advances in measurement techniques have revealed an important property of neural systems: a small percentage of neurons exhibits extensive activity while a large percentage shows little activity, i.e., sparse coding. However, we do not yet know the functional role of sparse coding in sensorimotor transformation. In this paper, I show that sparse coding enables complete and robust learning in sensorimotor transformation. In general, if a neural network is trained to maximize the performance on training data, the network shows poor performance on test data. Nevertheless, sparse coding renders compatible the performance of the network on both training and test data. Furthermore, sparse coding can reproduce reported neural activities. Thus, I conclude that sparse coding is necessary and a biologically plausible factor in sensorimotor transformation. PMID:25923980
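
    The sparsity claim can be made concrete with a toy sparse-coding solver (illustrative only, not the paper's network model): iterative soft-thresholding recovers a code in which only a few units are active:

      import numpy as np

      def ista(D, x, lam=0.1, steps=200):
          """Iterative soft-thresholding: find a sparse code a with x ~= D @ a."""
          L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
          a = np.zeros(D.shape[1])
          for _ in range(steps):
              z = a - D.T @ (D @ a - x) / L
              a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
          return a

      rng = np.random.default_rng(1)
      D = rng.normal(size=(20, 50))            # overcomplete dictionary
      x = 2.0 * D[:, 3] - 1.0 * D[:, 17]       # signal built from two atoms
      a = ista(D, x)
      print(np.count_nonzero(np.abs(a) > 1e-3))  # only a few active units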

  3. A cortical sparse distributed coding model linking mini- and macrocolumn-scale functionality

    Directory of Open Access Journals (Sweden)

    Gerard J Rinkus

    2010-06-01

    Full Text Available No generic function for the minicolumn—i.e., one that would apply equally well to all cortical areas and species—has yet been proposed. I propose that the minicolumn does have a generic functionality, which only becomes clear when seen in the context of the function of the higher-level, subsuming unit, the macrocolumn. I propose that: (a) a macrocolumn’s function is to store sparse distributed representations of its inputs and to be a recognizer of those inputs; and (b) the generic function of the minicolumn is to enforce macrocolumnar code sparseness. The minicolumn, defined here as a physically localized pool of ~20 L2/3 pyramidals, does this by acting as a winner-take-all (WTA) competitive module, implying that macrocolumnar codes consist of ~70 active L2/3 cells, assuming ~70 minicolumns per macrocolumn. I describe an algorithm for activating these codes during both learning and retrievals, which causes more similar inputs to map to more highly intersecting codes, a property which yields ultra-fast (immediate, first-shot) storage and retrieval. The algorithm achieves this by adding an amount of randomness (noise) into the code selection process, which is inversely proportional to an input’s familiarity. I propose a possible mapping of the algorithm onto cortical circuitry, and adduce evidence for a neuromodulatory implementation of this familiarity-contingent noise mechanism. The model is distinguished from other recent columnar cortical circuit models in proposing a generic minicolumnar function in which a group of cells within the minicolumn, the L2/3 pyramidals, compete (WTA) to be part of the macrocolumnar code.
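
    A toy rendering of the described selection rule (hypothetical sizes and signals, not the paper's implementation): one winner per minicolumn, with noise scaled by unfamiliarity:

      import numpy as np

      rng = np.random.default_rng(0)

      def select_code(activations, familiarity, n_minicols=70, cells_per_col=20):
          """One winner per minicolumn; noise grows as familiarity falls."""
          noise = (1.0 - familiarity) * rng.normal(size=(n_minicols, cells_per_col))
          return (activations + noise).argmax(axis=1)   # WTA within each minicolumn

      acts = rng.normal(size=(70, 20))
      print(select_code(acts, familiarity=0.95)[:10])   # familiar: near-deterministic
      print(select_code(acts, familiarity=0.05)[:10])   # novel: heavily randomized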

  4. Code-Mixing and Code Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing those forms of code switching and code mixing. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School Ponorogo. Based on the analysis and discussion, the forms of code mixing and code switching in learning activities in Al Mawaddah Boarding School lie in the use of Javanese, Arabic, English and Indonesian, in the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah Boarding School, with respect to the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  5. Rapid installation of numerical models in multiple parent codes

    Energy Technology Data Exchange (ETDEWEB)

    Brannon, R.M.; Wong, M.K.

    1996-10-01

    A set of "model interface guidelines", called MIG, is offered as a means to more rapidly install numerical models (such as stress-strain laws) into any parent code (hydrocode, finite element code, etc.) without having to modify the model subroutines. The model developer (who creates the model package in compliance with the guidelines) specifies the model's input and storage requirements in a standardized way. For portability, database management (such as saving user inputs and field variables) is handled by the parent code. To date, MIG has proved viable in beta installations of several diverse models in vectorized and parallel codes written in different computer languages. A MIG-compliant model can be installed in different codes without modifying the model's subroutines. By maintaining one model for many codes, MIG facilitates code-to-code comparisons and reduces duplication of effort, potentially reducing the cost of installing and sharing models.

  6. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
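
    To make the basic object concrete, here is a sketch of the textbook rate-1/2 convolutional encoder with octal generators (7, 5) and constraint length 3 (a standard example, not code from the book):

      def conv_encode(bits, g1=0b111, g2=0b101, k=3):
          """Rate-1/2 convolutional encoder, octal generators (7, 5),
          constraint length 3; two output bits per input bit."""
          state, out = 0, []
          for b in bits + [0] * (k - 1):        # zero tail flushes the encoder
              state = ((state << 1) | b) & ((1 << k) - 1)
              out += [bin(state & g1).count("1") % 2,
                      bin(state & g2).count("1") % 2]
          return out

      print(conv_encode([1, 0, 1, 1]))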

  7. Development of an anthropomorphic model and a Monte Carlo code dedicated to physical reconstruction of radiological accident; Developpement d'un modele anthropomorphe et d'un code de calcul Monte Carlo dedies a la reconstitution physique d'accident radiologique

    Energy Technology Data Exchange (ETDEWEB)

    Roux, A

    2000-09-01

    The diversity of radiological accidents is such that medical prognoses and decisions regarding the choice of treatment are hard to make on the basis of clinical observations alone. To supplement this information, it is important to know the overall dose received by the organism and the dose distributions. The dose can be estimated by physically reconstructing the accident using experimental techniques or calculations. It must be possible to adapt these to various accident situations, which means that a large quantity of parameters, both geometrical (morphology and posture of the victim, environment etc.) and physical (energy spectrum, source geometry etc.), have to be available. The software used to construct the geometry, MGED, used in conjunction with MORSE, a Monte Carlo transport code, meets these requirements. The first has led to the development of an anthropomorphic model known as MANDRAC, which can be adapted to the size and position of the victim. The second generates and transports source particles, and also reproduces physical interaction phenomena. This work describes the development and characterisation of these two tools, in order to define methodologies adapted and optimised to the type of accident (overall or localised irradiation). To begin with (Section III), the computer code is validated and the uncertainties associated with its use assessed. The study which follows focuses on the model and the uncertainties arising from geometry (Section IV). It also makes it possible to assess the influence of morphological parameters on dose calculation. One important result of the study is that it determines the parameters which have to be known depending on the type of accident and the degree of accuracy. All the tools are then compared to those used in the field of medical dosimetry (Section V). Several applications to accidents are used to assess this technique in actual situations (Section VI). (author)

  8. Imagined Physics

    DEFF Research Database (Denmark)

    Merritt, Timothy; Nørgaard, Mie; Laursen, Christian

    2015-01-01

    The contribution to this book focuses on the human responses to objects that change shape in response to input from users, environment, or other circumstances. In this chapter we discuss the term "imagined physics", meaning how actuated devices are in one sense tied to their physical form, yet through the use of actuators...

  9. Determination of the physical parameters of the nuclear subcritical assembly Chicago 9000 of the IPN using the Serpent code; Determinacion de los parametros fisicos del conjunto subcritico nuclear Chicago 9000 del IPN usando el codigo SERPENT

    Energy Technology Data Exchange (ETDEWEB)

    Arriaga R, L.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. Instituto Politecnico Nacional s/n, U.P. Adolfo Lopez Mateos, Col. San Pedro Zacatenco, 07738 Mexico D. F. (Mexico); Gomez T, A. M., E-mail: guten_tag_04@hotmail.com [ININ, Departamento de Sistemas Nucleares, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2013-10-15

    A three-dimensional model of the nuclear subcritical assembly (SA) Chicago 9000 of the Escuela Superior de Fisica y Matematicas del Instituto Politecnico Nacional (ESFM-IPN) was developed for the Serpent code. The model includes: a) the core, formed by 312 aluminum pipes, each containing 5 nuclear fuel rods (natural uranium in metallic form); b) the multi-perforated plates into which the lower part of each pipe is inserted so that it remains vertical; c) the water, acting as moderator and reflector; and d) the vessel housing the core. The pipe arrangement is hexagonal, although the cross section of the vessel housing the core is circular. The input file for the Serpent code was generated from the data provided in the SA user manual on the composition and density of the fuel rods, together with data obtained directly from the rods, such as the inner and outer diameter, mass, and height. Of the physical parameters obtained, those closest to the values reported in the manual of the subcritical assembly are the effective multiplication factor and the reproduction factor η. The differences may arise because the description of the fuel rods provided in the SA user manual does not correspond to the rods that are physically in the SA core. This difference consists of the presence of a central circular channel, 1.245 cm in diameter, in each fuel rod; the fuel rods reported in the manual do not have that channel. Although the obtained results are encouraging, we want to continue improving the model by incorporating detectors, as defined by the Serpent code, which could determine the neutron flux at various points of interest, such as axially or radially aligned points, and compare these with measurements obtained experimentally when a neutron-generating source (Pu-Be) is introduced. In addition, the cross sections for each unit cell will be determined, so that ...

  10. EMPIRE: A code for nuclear astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Palumbo, A. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2013-12-11

    The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the nuclear astrophysics energy range of interest are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.

  11. Transmutation Fuel Performance Code Thermal Model Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gregory K. Miller; Pavel G. Medvedev

    2007-09-01

    The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
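
    In the same spirit as the report's verification exercise, though entirely illustrative (made-up properties, and a small finite-difference solver standing in for FRAPCON and ABAQUS), a thermal model can be checked against an exact solution:

      import numpy as np

      # Steady 1-D radial conduction in a heated rod: solve
      # (1/r) d/dr (r dT/dr) = -q/k with T(R) = Ts and symmetry at r = 0,
      # then compare with the exact parabolic profile.
      k, q, R, Ts = 3.0, 3.0e8, 0.005, 600.0    # W/m-K, W/m^3, m, K
      n = 201
      r = np.linspace(0.0, R, n)
      dr = r[1] - r[0]

      A = np.zeros((n, n))
      b = np.full(n, -q * dr**2 / k)
      A[0, 0], A[0, 1], b[0] = -2.0, 2.0, -q * dr**2 / (2.0 * k)  # r = 0 symmetry
      for i in range(1, n - 1):
          A[i, i - 1] = 1.0 - 0.5 * dr / r[i]
          A[i, i] = -2.0
          A[i, i + 1] = 1.0 + 0.5 * dr / r[i]
      A[-1, :], A[-1, -1], b[-1] = 0.0, 1.0, Ts                   # fixed surface T
      T = np.linalg.solve(A, b)

      T_exact = Ts + q * (R**2 - r**2) / (4.0 * k)
      print(np.abs(T - T_exact).max())          # small discretization error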

  12. User's Manual for LINER: FORTRAN Code for the Numerical Simulation of Plane Wave Propagation in a Lined Two-Dimensional Channel

    Science.gov (United States)

    Reichert, R, S.; Biringen, S.; Howard, J. E.

    1999-01-01

    LINER is a system of Fortran 77 codes which performs a 2D analysis of acoustic wave propagation and noise suppression in a rectangular channel with a continuous liner at the top wall. This new implementation is designed to streamline the usage of the several codes making up LINER, resulting in a useful design tool. Major input parameters are placed in two main data files, input.inc and nurn.prm. Output data appear in the form of ASCII files as well as a choice of GNUPLOT graphs. Section 2 briefly describes the physical model. Section 3 discusses the numerical methods; Section 4 gives a detailed account of program usage, including input formats and graphical options. A sample run is also provided. Finally, Section 5 briefly describes the individual program files.

  13. World Input-Output Network.

    Directory of Open Access Journals (Sweden)

    Federica Cerina

    Full Text Available Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries.
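
    A miniature version of the network-based measures mentioned above (a toy three-industry flow matrix, not WIOD data): PageRank computed by power iteration on the row-normalized flows:

      import numpy as np

      # Toy inter-industry flow matrix: entry [i, j] is the monetary flow
      # from industry i to industry j (three industries, invented numbers).
      F = np.array([[0.0, 5.0, 1.0],
                    [2.0, 0.0, 4.0],
                    [3.0, 1.0, 0.0]])
      P = F / F.sum(axis=1, keepdims=True)     # row-stochastic transitions
      d, n = 0.85, F.shape[0]
      rank = np.full(n, 1.0 / n)
      for _ in range(100):                     # power iteration
          rank = (1.0 - d) / n + d * P.T @ rank
      print(rank)                              # higher rank = more central industry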

  14. Analog Input Data Acquisition Software

    Science.gov (United States)

    Arens, Ellen

    2009-01-01

    DAQ Master Software allows users to easily set up a system to monitor up to five analog input channels and save the data after acquisition. This program was written in LabVIEW 8.0, and requires the LabVIEW runtime engine 8.0 to run the executable.

  15. Lab Inputs for Common Micros.

    Science.gov (United States)

    Tinker, Robert

    1984-01-01

    The game paddle inputs of Apple microcomputers provide a simple way to get laboratory measurements into the computer. Discusses these game paddles and the necessary interface software. Includes schematics for Apple built-in paddle electronics, TRS-80 game paddle I/O, Commodore circuit for user port, and bus interface for Sinclair/Timex, Commodore,…

  16. World Input-Output Network.

    Science.gov (United States)

    Cerina, Federica; Zhu, Zhen; Chessa, Alessandro; Riccaboni, Massimo

    2015-01-01

    Production systems, traditionally analyzed as almost independent national systems, are increasingly connected on a global scale. Only recently becoming available, the World Input-Output Database (WIOD) is one of the first efforts to construct the global multi-regional input-output (GMRIO) tables. By viewing the world input-output system as an interdependent network where the nodes are the individual industries in different economies and the edges are the monetary goods flows between industries, we analyze respectively the global, regional, and local network properties of the so-called world input-output network (WION) and document its evolution over time. At global level, we find that the industries are highly but asymmetrically connected, which implies that micro shocks can lead to macro fluctuations. At regional level, we find that the world production is still operated nationally or at most regionally as the communities detected are either individual economies or geographically well defined regions. Finally, at local level, for each industry we compare the network-based measures with the traditional methods of backward linkages. We find that the network-based measures such as PageRank centrality and community coreness measure can give valuable insights into identifying the key industries.

  17. Remote input/output station

    CERN Multimedia

    1972-01-01

    A general view of the remote input/output station installed in building 112 (ISR) and used for submitting jobs to the CDC 6500 and 6600. The card reader on the left and the line printer on the right are operated by programmers on a self-service basis.

  18. Index coding via linear programming

    CERN Document Server

    Blasiak, Anna; Lubetzky, Eyal

    2010-01-01

    Index Coding has received considerable attention recently motivated in part by applications such as fast video-on-demand and efficient communication in wireless networks and in part by its connection to Network Coding. The basic setting of Index Coding encodes the side-information relation, the problem input, as an undirected graph and the fundamental parameter is the broadcast rate $\\beta$, the average communication cost per bit for sufficiently long messages (i.e. the non-linear vector capacity). Recent nontrivial bounds on $\\beta$ were derived from the study of other Index Coding capacities (e.g. the scalar capacity $\\beta_1$) by Bar-Yossef et al (FOCS'06), Lubetzky and Stav (FOCS'07) and Alon et al (FOCS'08). However, these indirect bounds shed little light on the behavior of $\\beta$ and its exact value remained unknown for \\emph{any graph} where Index Coding is nontrivial. Our main contribution is a hierarchy of linear programs whose solutions trap $\\beta$ between them. This enables a direct information-...

  19. PROSA-1: a probabilistic response-surface analysis code. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J. K.; Mueller, C.

    1978-06-01

    Techniques for probabilistic response-surface analysis have been developed to obtain the probability distributions of the consequences of postulated nuclear-reactor accidents. The uncertainties of the consequences are caused by the variability of the system and model input parameters used in the accident analysis. Probability distributions are assigned to the input parameters, and parameter values are systematically chosen from these distributions. These input parameters are then used in deterministic consequence analyses performed by mechanistic accident-analysis codes. The results of these deterministic consequence analyses are used to generate the coefficients for analytical functions that approximate the consequences in terms of the selected input parameters. These approximating functions are used to generate the probability distributions of the consequences with random sampling being used to obtain values for the accident parameters from their distributions. A computer code PROSA has been developed for implementing the probabilistic response-surface technique. Special features of the code generate or treat sensitivities, statistical moments of the input and output variables, regionwise response surfaces, correlated input parameters, and conditional distributions. The code can also be used for calculating important distributions of the input parameters. The use of the code is illustrated in conjunction with the fast-running accident-analysis code SACO to provide probability studies of LMFBR hypothetical core-disruptive accidents. However, the methods and the programming are general and not limited to such applications.
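
    A compact sketch of the response-surface workflow the abstract describes, with a cheap stand-in for the deterministic accident-analysis code (all functions and numbers are illustrative):

      import numpy as np

      rng = np.random.default_rng(2)

      def accident_code(p1, p2):
          """Stand-in for a deterministic, expensive consequence calculation."""
          return 100.0 + 8.0 * p1 - 3.0 * p2 + 1.5 * p1 * p2

      # 1) Run the deterministic code at systematically chosen parameter values.
      P1, P2 = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
      y = accident_code(P1.ravel(), P2.ravel())

      # 2) Fit an approximating (response-surface) function to those results.
      X = np.column_stack([np.ones(y.size), P1.ravel(), P2.ravel(),
                           P1.ravel() * P2.ravel()])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)

      # 3) Random-sample the cheap surface to get the consequence distribution.
      p1 = rng.normal(0.0, 0.3, 100_000)
      p2 = rng.uniform(-1.0, 1.0, 100_000)
      samples = coef @ np.stack([np.ones_like(p1), p1, p2, p1 * p2])
      print(samples.mean(), np.percentile(samples, 95))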

  20. Measurement of antiproton production in p-He collisions and prospects for other inputs to cosmic rays physics from the fixed target program of the LHCb experiment

    CERN Document Server

    Graziani, Giacomo

    2018-01-01

    The LHCb experiment has the unique possibility, among the LHC experiments, to be operated in fixed target mode, using its internal gas target SMOG. The energy scale achievable at the LHC and the excellent detector capabilities for vertexing, tracking and particle identification allow a wealth of measurements of great interest for cosmic ray physics. We present the first measurement of antiproton production in proton-helium collisions at $\sqrt{s_{NN}} = 110$ GeV, which makes it possible to improve the accuracy of the prediction for secondary antiproton production in cosmic rays. Prospects for other measurements achievable in the fixed target program are also discussed.

  1. Systems and methods for reconfiguring input devices

    Science.gov (United States)

    Lancaster, Jeff (Inventor); De Mers, Robert E. (Inventor)

    2012-01-01

    A system includes an input device having first and second input members configured to be activated by a user. The input device is configured to generate activation signals associated with activation of the first and second input members, and each of the first and second input members is associated with an input function. A processor is coupled to the input device and configured to receive the activation signals. A memory is coupled to the processor and includes a reconfiguration module configured to store the input functions assigned to the first and second input members and, upon execution by the processor, to reconfigure the input functions assigned to the input members when the first input member is inoperable.

  2. Development of an anthropomorphic model and a Monte Carlo calculation code devoted to the physical reconstruction of a radiological accident; Developpement d'un modele anthropomorphe et d'un code de calcul Monte Carlo dedies a la reconstitution physique d'accident radiologique

    Energy Technology Data Exchange (ETDEWEB)

    Roux, A. [Universite Paul Sabatier, 31 - Toulouse (France)

    2001-03-01

    The diversity of radiological accidents makes medical prognosis and the choice of therapy difficult on the basis of clinical observations alone. To complete this information, it is important to know the global dose received by the organism and the dose distributions at depth in tissues. The dose estimation can be made by a physical reconstruction of the accident with the help of tools based on experimental techniques or on calculation. The geometry-construction software (M.G.E.D.), associated with the Monte Carlo photon and neutron transport code (M.O.R.S.E.), meets these constraints. An important result of this work is the determination of the principal parameters that must be known as a function of the accident type, as well as the precision level required for these parameters. (N.C.)

  3. Breaking the Code: The Creative Use of QR Codes to Market Extension Events

    Science.gov (United States)

    Hill, Paul; Mills, Rebecca; Peterson, GaeLynn; Smith, Janet

    2013-01-01

    The use of smartphones has drastically increased in recent years, heralding an explosion in the use of QR codes. The black and white square barcodes that link the physical and digital world are everywhere. These simple codes can provide many opportunities to connect people in the physical world with many of Extension online resources. The…

  4. SCALE: A modular code system for performing standardized computer analyses for licensing evaluation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    This Manual represents Revision 5 of the user documentation for the modular code system referred to as SCALE. The history of the SCALE code system dates back to 1969 when the current Computational Physics and Engineering Division at Oak Ridge National Laboratory (ORNL) began providing the transportation package certification staff at the U.S. Atomic Energy Commission with computational support in the use of the new KENO code for performing criticality safety assessments with the statistical Monte Carlo method. From 1969 to 1976 the certification staff relied on the ORNL staff to assist them in the correct use of codes and data for criticality, shielding, and heat transfer analyses of transportation packages. However, the certification staff learned that, with only occasional use of the codes, it was difficult to become proficient in performing the calculations often needed for an independent safety review. Thus, shortly after the move of the certification staff to the U.S. Nuclear Regulatory Commission (NRC), the NRC staff proposed the development of an easy-to-use analysis system that provided the technical capabilities of the individual modules with which they were familiar. With this proposal, the concept of the Standardized Computer Analyses for Licensing Evaluation (SCALE) code system was born. This manual covers an array of modules written for the SCALE package, consisting of drivers, system libraries, cross section and materials properties libraries, input/output routines, storage modules, and help files.

  5. Adaptive Channel‐Matched Extended Alamouti Space‐Time Code Exploiting Partial Feedback

    National Research Council Canada - National Science Library

    Badic, Biljana; Rupp, Markus; Weinrichter, Hans

    2004-01-01

    Since the publication of Alamouti's famous space‐time block code, various quasi‐orthogonal space‐time block codes (QSTBC) for multi‐input multi‐output (MIMO) fading channels for more than two transmit antennas have been proposed...
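
    The starting point for all QSTBC constructions is Alamouti's 2x2 block; a minimal sketch of the encoding matrix, with rows as time slots and columns as transmit antennas:

      import numpy as np

      def alamouti_encode(s1: complex, s2: complex) -> np.ndarray:
          """Alamouti 2x2 space-time block: rows are time slots, columns antennas."""
          return np.array([[s1, s2],
                           [-np.conj(s2), np.conj(s1)]])

      print(alamouti_encode(1 + 1j, 1 - 1j))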

  6. Affine Grassmann codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant

    2010-01-01

    We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes.We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show...

  7. Detecting Malicious Code by Binary File Checking

    Directory of Open Access Journals (Sweden)

    Marius POPA

    2014-01-01

    Full Text Available The object, library, and executable code is stored in binary files. The functionality of a binary file is altered when its content or program source code is changed, causing undesired effects. A direct content change is possible when the intruder knows the structural information of the binary file. The paper describes the structural properties of binary object files, how the content can be controlled by a possible intruder, and ways to identify malicious code in such files. Because object files are inputs to linking processes, early detection of malicious content is crucial to avoid infection of the binary executable files.
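
    One simple defence motivated by the described threat model, though not necessarily the paper's method, is to record a cryptographic digest of each object file and re-check it before linking (the trusted-digest table below is hypothetical):

      import hashlib

      def file_digest(path: str) -> str:
          """SHA-256 of a binary file; any content change alters the digest."""
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(65536), b""):
                  h.update(chunk)
          return h.hexdigest()

      # TRUSTED_DIGESTS is a hypothetical table recorded when the file was built:
      # assert file_digest("module.o") == TRUSTED_DIGESTS["module.o"]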

  8. Laser physics

    CERN Document Server

    Milonni, Peter W

    2010-01-01

    Create physically realistic 3D Graphics environments with this introduction to the ideas and techniques behind the process. Author David H. Eberly includes simulations to introduce the key problems involved and then gradually reveals the mathematical and physical concepts needed to solve them. He then describes all the algorithmic foundations and uses code examples and working source code to show how they are implemented, culminating in a large collection of physical simulations. The book tackles the complex, challenging issues that other books avoid, including Lagrangian dynamics, rigid body

  9. Computer codes for evaluation of control room habitability (HABIT)

    Energy Technology Data Exchange (ETDEWEB)

    Stage, S.A. [Pacific Northwest Lab., Richland, WA (United States)

    1996-06-01

    This report describes the Computer Codes for Evaluation of Control Room Habitability (HABIT). HABIT is a package of computer codes designed to be used for the evaluation of control room habitability in the event of an accidental release of toxic chemicals or radioactive materials. Given information about the design of a nuclear power plant, a scenario for the release of toxic chemicals or radionuclides, and information about the air flows and protection systems of the control room, HABIT can be used to estimate the chemical exposure or radiological dose to control room personnel. HABIT is an integrated package of several programs that previously needed to be run separately and required considerable user intervention. This report discusses the theoretical basis and physical assumptions made by each of the modules in HABIT and gives detailed information about the data entry windows. Sample runs are given for each of the modules. A brief section of programming notes is included. A set of computer disks will accompany this report if the report is ordered from the Energy Science and Technology Software Center. The disks contain the files needed to run HABIT on a personal computer running DOS. Source codes for the various HABIT routines are on the disks. Also included are input and output files for three demonstration runs.

  10. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
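
    The structure such encoders exploit can be shown in a few lines (illustrative block size and shifts, not the patented designs): a parity-check matrix tiled from circulant permutation blocks:

      import numpy as np

      def circulant(size: int, shift: int) -> np.ndarray:
          """size x size circulant permutation matrix: identity cyclically shifted."""
          return np.roll(np.eye(size, dtype=int), shift, axis=1)

      b = 5                                     # illustrative block size
      H = np.block([[circulant(b, 0), circulant(b, 1), circulant(b, 3)],
                    [circulant(b, 2), circulant(b, 4), circulant(b, 0)]])
      print(H.shape)                            # (10, 15): a toy block-circulant H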

  11. Turbo Codes Extended with Outer BCH Code

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1996-01-01

    The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is propose...... including an outer BCH code correcting a few bit errors.......The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...

  12. Distribution Development for STORM Ingestion Input Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Fulton, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-07-01

    The Sandia-developed Transport of Radioactive Materials (STORM) code suite is used as part of the Radioisotope Power System Launch Safety (RPSLS) program to perform statistical modeling of the consequences due to release of radioactive material given a launch accident. As part of this modeling, STORM samples input parameters from probability distributions, with some parameters treated as constants. This report describes the work done to convert four of these constant inputs (Consumption Rate, Average Crop Yield, Cropland to Landuse Database Ratio, and Crop Uptake Factor) to sampled values. Consumption Rate changed from a constant value of 557.68 kg/yr to a normal distribution with a mean of 102.96 kg/yr and a standard deviation of 2.65 kg/yr. Meanwhile, Average Crop Yield changed from a constant value of 3.783 kg edible/m^2 to a normal distribution with a mean of 3.23 kg edible/m^2 and a standard deviation of 0.442 kg edible/m^2. The Cropland to Landuse Database Ratio changed from a constant value of 0.0996 (9.96%) to a normal distribution with a mean value of 0.0312 (3.12%) and a standard deviation of 0.00292 (0.29%). Finally, the Crop Uptake Factor changed from a constant value of 6.37e-4 (Bq_crop/kg)/(Bq_soil/kg) to a lognormal distribution with a geometric mean value of 3.38e-4 (Bq_crop/kg)/(Bq_soil/kg) and a standard deviation value of 3.33 (Bq_crop/kg)/(Bq_soil/kg).
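
    In outline, the conversion from constants to sampled values looks like the sketch below; note that numpy's lognormal takes the mean and sigma of the underlying normal, so the geometric mean and spread are log-transformed (treating the reported 3.33 as a geometric standard deviation is our assumption):

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000
      consumption = rng.normal(102.96, 2.65, n)        # kg/yr
      crop_yield = rng.normal(3.23, 0.442, n)          # kg edible/m^2
      land_ratio = rng.normal(0.0312, 0.00292, n)      # dimensionless fraction
      # Assumption: 3.33 is a geometric standard deviation, hence the logs.
      uptake = rng.lognormal(np.log(3.38e-4), np.log(3.33), n)  # (Bq_crop/kg)/(Bq_soil/kg)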

  13. Parameter Estimation of Turbo Code Encoder

    Directory of Open Access Journals (Sweden)

    Mehdi Teimouri

    2014-01-01

    Full Text Available The problem of reconstruction of a channel code consists of finding out its design parameters solely based on its output. This paper investigates the problem of reconstruction of parallel turbo codes. Reconstruction of a turbo code has been addressed in the literature assuming that some of the parameters of the turbo encoder, such as the number of input and output bits of the constituent encoders and puncturing pattern, are known. However in practical noncooperative situations, these parameters are unknown and should be estimated before applying reconstruction process. Considering such practical situations, this paper proposes a novel method to estimate the above-mentioned code parameters. The proposed algorithm increases the efficiency of the reconstruction process significantly by judiciously reducing the size of search space based on an analysis of the observed channel code output. Moreover, simulation results show that the proposed algorithm is highly robust against channel errors when it is fed with noisy observations.

  14. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  15. Generalized concatenated quantum codes

    Science.gov (United States)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng, Bei

    2009-05-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  16. Number-Crunching Software and the Input Problem: Guidelines and a Case Study

    Directory of Open Access Journals (Sweden)

    Stefan Gerber

    1994-01-01

    Full Text Available Most of the number-crunching software running on supercomputers lacks a concept for error-free input processing. The programs often terminate after several hours due to user input errors. We present an analysis of this input problem. First, we separate the input parsing system from the number-crunching code. Second, we define a general grammar suitable for automatic generation of a parsing system using the LEX/YACC tools. This concept leads to an input system that is easy to use, and the generated input data for the number-crunching software are as error-free as possible. We discuss the implementation of such an input system for an ab initio quantum chemical package.
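
    A far smaller stand-in for a LEX/YACC-generated front end, but the same idea: validate the whole input file up front and report every error before the expensive run starts (the key = value grammar is hypothetical):

      import re

      GRAMMAR = re.compile(r"^\s*(\w+)\s*=\s*([-+.\deE]+)\s*$")  # key = number

      def parse_input(text: str) -> dict:
          """Check every line before the long run starts, collecting all errors."""
          params, errors = {}, []
          for lineno, line in enumerate(text.splitlines(), 1):
              if not line.strip() or line.lstrip().startswith("#"):
                  continue                       # skip blanks and comments
              m = GRAMMAR.match(line)
              if m is None:
                  errors.append(f"line {lineno}: cannot parse {line!r}")
              else:
                  params[m.group(1)] = float(m.group(2))
          if errors:
              raise ValueError("input errors:\n" + "\n".join(errors))
          return params

      print(parse_input("tolerance = 1e-8\nmax_iter = 500"))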

  17. Predictive coding in Agency Detection

    DEFF Research Database (Denmark)

    Andersen, Marc Malmdorf

    2017-01-01

    Agency detection is a central concept in the cognitive science of religion (CSR). Experimental studies, however, have so far failed to lend support to some of the most common predictions that follow from current theories on agency detection. In this article, I argue that predictive coding, a highly influential framework according to which the brain, unbeknownst to consciousness, engages in sophisticated Bayesian statistics in an effort to constantly predict the hidden causes of sensory input, offers a better model of agency detection. My fundamental argument is that most false positives in agency detection can be seen as the result of top-down interference in a Bayesian system generating high prior probabilities in the face of unreliable stimuli, and that such a system can better account for the experimental evidence than previous accounts of a dedicated agency detection system. Finally, I argue that adopting predictive coding as a theoretical framework has radical implications...

  18. Introduction of the ASGARD Code

    Science.gov (United States)

    Bethge, Christian; Winebarger, Amy; Tiwari, Sanjiv; Fayock, Brian

    2017-01-01

    ASGARD stands for 'Automated Selection and Grouping of events in AIA Regional Data'. The code is a refinement of the event detection method in Ugarte-Urra & Warren (2014). It is intended to automatically detect and group brightenings ('events') in the AIA EUV channels, to record event parameters, and to find related events over multiple channels. Ultimately, the goal is to automatically determine heating and cooling timescales in the corona and to significantly increase statistics in this respect. The code is written in IDL and requires the SolarSoft library. It is parallelized and can run with multiple CPUs. Input files are regions of interest (ROIs) in time series of AIA images from the JSOC cutout service (http://jsoc.stanford.edu/ajax/exportdata.html). The ROIs need to be tracked, co-registered, and limited in time (typically 12 hours).

  19. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AVS2.

  20. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skills.

  1. Code comparison of transport simulations of heavy ion collisions at low and intermediate energies

    Science.gov (United States)

    Wolter, H. H.

    2017-11-01

    Transport simulations are an important and successful tool to extract information on the equation of state of nuclear matter away from saturation conditions from heavy ion collisions. However, at times, such calculations with seemingly similar physical input have yielded different conclusions. Therefore it is deemed important to compare transport simulations under controlled conditions, which is the objective of the Transport Simulation Code Comparison project, on which we report here. We obtain for the first time a quantitative systematic error of transport simulations. We discuss possible reasons for these deviations and further comparisons to improve the situation.

  2. 7 CFR 3430.607 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR 3430.607 (2010-01-01 edition) - Stakeholder input. § 3430.607 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of forums (e.g., public meetings, request for input and/or via Web site), as well as through a notice in the...

  3. 7 CFR 3430.907 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    7 CFR 3430.907 (2010-01-01 edition) - Stakeholder input. § 3430.907 Stakeholder input. CSREES shall seek and obtain stakeholder input through a variety of forums (e.g., public meetings, requests for input and/or Web site), as well as through a notice in the...

  4. Inserção da Educação Física na área de Linguagens, Códigos e suas Tecnologias Inserting Physical Education in the area of Languages, codes and their technologies

    Directory of Open Access Journals (Sweden)

    Marlene de Fátima dos Santos

    2012-09-01

    Full Text Available The High School National Curriculum Parameters introduced Physical Education in the area of Languages, Codes and their Technologies, along with other subjects such as Portuguese Language, Arts, Information Technology, Literature, and Modern Foreign Language. Thus, this study aimed to investigate how teachers of these subjects interpret the insertion of Physical Education in this area. A qualitative case study was carried out with 16 teachers from three public state schools with over 1,500 students. The results, obtained through a questionnaire, demonstrate the teachers' difficulty in working in an interdisciplinary way, in identifying the possible relations among the subjects, and even in recognizing Physical Education as part of the area of Languages, Codes and their Technologies. These are, therefore, important issues to be considered in the challenge of implementing the legal propositions in the schools of Basic Education.

  5. Standardized Definitions for Code Verification Test Problems

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-14

    This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack, www.github.com/lanl/exactpack.

  6. Reusable State Machine Code Generator

    Science.gov (United States)

    Hoffstadt, A. A.; Reyes, C.; Sommer, H.; Andolfato, L.

    2010-12-01

    The State Machine model is frequently used to represent the behaviour of a system, allowing one to express and execute this behaviour in a deterministic way. A graphical representation such as a UML State Chart diagram tames the complexity of the system, thus facilitating changes to the model and communication between developers and domain experts. We present a reusable state machine code generator, developed by the Universidad Técnica Federico Santa María and the European Southern Observatory. The generator itself is based on the open source project architecture, and uses UML State Chart models as input. This allows for a modular design and a clean separation between generator and generated code. The generated state machine code has well-defined interfaces that are independent of the implementation artefacts such as the middle-ware. This allows using the generator in the substantially different observatory software of the Atacama Large Millimeter Array and the ESO Very Large Telescope. A project-specific mapping layer for event and transition notification connects the state machine code to its environment, which can be the Common Software of these projects, or any other project. This approach even makes it possible to automatically create tests for a generated state machine, using techniques from software testing, such as path coverage.
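
    What such generated code boils down to can be sketched in a few lines (hypothetical states and events; a real generator emits far richer artefacts with middleware-independent interfaces):

      class StateMachine:
          """Skeleton of generated state-machine code: the transition table would
          come from the UML State Chart model; notify is the mapping-layer hook."""

          def __init__(self, transitions, initial, notify=print):
              self.transitions = transitions    # {(state, event): next_state}
              self.state = initial
              self.notify = notify

          def handle(self, event):
              key = (self.state, event)
              if key in self.transitions:
                  old, self.state = self.state, self.transitions[key]
                  self.notify(f"{old} --{event}--> {self.state}")
              else:
                  self.notify(f"event {event!r} ignored in state {self.state!r}")

      sm = StateMachine({("Idle", "start"): "Running",
                         ("Running", "stop"): "Idle"}, initial="Idle")
      sm.handle("start")
      sm.handle("stop")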

  7. Validation issues for SSI codes

    Energy Technology Data Exchange (ETDEWEB)

    Philippacopoulos, A.J.

    1995-02-01

    The paper describes the results of recent work performed to verify computer code predictions in the SSI area. The first part of the paper is concerned with analytic solutions of the system response. The mathematical derivations are reasonably reduced by the use of relatively simple models which capture fundamental ingredients of the physics of the system motion while allowing the response to be obtained analytically. Having established explicit forms of the system response, we present numerical solutions from three computer codes in comparative format.

  8. [Neural codes for perception].

    Science.gov (United States)

    Romo, R; Salinas, E; Hernández, A; Zainos, A; Lemus, L; de Lafuente, V; Luna, R

    This article describes experiments designed to reveal the neural codes associated with the perception and processing of tactile information. The results of these experiments have shown the neural activity correlated with tactile perception. The neurones of the primary somatosensory cortex (S1) represent the physical attributes of tactile stimuli, and we found that these representations correlate with tactile perception. By means of intracortical microstimulation we demonstrated a causal relationship between S1 activity and tactile perception. In the motor areas of the frontal lobe, the connection between sensory and motor representations is found while decisions are being taken. S1 generates neural representations of the somatosensory stimuli which seem to be sufficient for tactile perception. These neural representations are subsequently processed by areas central to S1 and seem useful in perception, memory and decision making.

  9. Development of the DCHAIN-SP code for analyzing decay and build-up characteristics of spallation products

    Energy Technology Data Exchange (ETDEWEB)

    Takada, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kosako, Kazuaki

    1999-03-01

    For analyzing the decay and build-up characteristics of spallation products, the DCHAIN-SP code has been developed on the basis of the DCHAIN-2 code by revising the decay data and implementing neutron cross section data. The decay data are newly processed from the EAF 3.1, FENDL/D-1 and ENSDF data libraries. Neutron cross section data taken from the FENDL/A-2 data library are also prepared to take account of the transmutation of nuclides by the neutron field at the position where they are produced. The DCHAIN-SP code solves the time evolution of the decay and build-up of nuclides in every decay chain by the Bateman method. The code can estimate the following physical quantities of the produced nuclides: inventory, activity, decay heat from the emission of {alpha}, {beta} and {gamma}-rays, and the {gamma}-ray energy spectrum, where the nuclide production rate estimated by a nucleon-meson transport code such as NMTC/JAERI97 is used as input data. This paper describes the function, the solution model and the database adopted in the code, and explains how to use the code. (author)
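    A minimal sketch of the Bateman solution for a linear decay chain, the chain-by-chain calculation attributed to DCHAIN-SP above, is given below. It omits the neutron-capture terms the code adds from its cross-section libraries and assumes all decay constants are distinct.

```python
# Sketch of the Bateman solution for a linear decay chain N1 -> N2 -> ...,
# the kind of calculation DCHAIN-SP performs chain by chain (here without
# the neutron-capture terms the code adds from its cross-section library).
import numpy as np

def bateman(lambdas, n1_0, t):
    """Number of atoms of each chain member at time t, starting from pure N1.

    lambdas : decay constants [1/s] of the chain members (all distinct)
    n1_0    : initial number of atoms of the first member
    """
    lambdas = np.asarray(lambdas, dtype=float)
    n = len(lambdas)
    result = np.zeros(n)
    for i in range(n):                 # member i of the chain
        coeff = np.prod(lambdas[:i])   # product of preceding decay constants
        s = 0.0
        for j in range(i + 1):
            denom = np.prod([lambdas[k] - lambdas[j]
                             for k in range(i + 1) if k != j])
            s += np.exp(-lambdas[j] * t) / denom
        result[i] = n1_0 * coeff * s
    return result

# Example: a two-member chain with half-lives of 1 h and 10 h, after 2 h.
lam = np.log(2) / np.array([3600.0, 36000.0])
print(bateman(lam, 1.0e20, t=7200.0))
```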

  10. The Theoretical Basis of the Code 50 Nuclear Exchange Model

    Science.gov (United States)

    1971-09-01

    allocation. The various components of this part of Code 50 are Code 50 itself, the main executive program; GENX and DATAX, the data input subroutines, which read in information and also compute some parameters such as the number of weapon and target...; and STRIKE, a secondary routine. [The remaining fragment of this record is flowchart residue from the original report, listing the input step (GENX, DATAX), the single-shot kill probability calculation (SSKCALC), and the executive set-up and interim processing steps (EXEC).]

  11. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts showing the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart comparing the performance of several frame synchronizer algorithms with that of some good codes, and LDPC decoder tests at ESTL. Also reviewed are a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the pseudo-randomizer with LDPC decoder and CRC is also reviewed, and a chart summarizing the three proposed coding systems is presented.

  12. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is the code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  13. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    ... of the theoretical backgrounds of each model and the method of analysis or assessment; (2) General... input and output files from a sample computer run; and reports on code verification, benchmarking, validation, and quality assurance procedures; (3) Detailed descriptions of the structure of computer codes...

  14. Algebraic geometric codes

    Science.gov (United States)

    Shahshahani, M.

    1991-01-01

    The performance characteristics of certain algebraic geometric codes are discussed. Algebraic geometric codes have good minimum distance properties. On many channels they outperform other comparable block codes; therefore, one would expect them eventually to replace some of the block codes used in communications systems. It is suggested that it is unlikely that they will become useful substitutes for the Reed-Solomon codes used by the Deep Space Network in the near future. However, they may be applicable to systems where the signal-to-noise ratio is sufficiently high that block codes would be more suitable than convolutional or concatenated codes.

  15. GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data

    Energy Technology Data Exchange (ETDEWEB)

    Collin, Blaise Paul [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accident transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation examination (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, hereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check whether these numerical effects persist. The first two steps involve the benchmark participants in a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read

  16. Pressurizer Heater and Safety Valve Test for SPACE code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Hyuk; Youn, Bum Soo; Jun, Hwang Yong; Kim, Se Yun; Ha, Sang Jun [KEPCO Research Institute, Daejeon (Korea, Republic of)

    2011-05-15

    The Korean nuclear industry has been developing a thermal-hydraulic analysis code for the safety analysis of PWRs (pressurized water reactors). The new code is named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). In this paper, the pressurizer heater and safety valve models of the SPACE code are tested. A SPACE code input model for the pressurizer is developed and simulations are performed. Calculations were performed with and without the heater and safety valve models, and the results are compared to confirm the effectiveness of the heater and safety valve models.

  17. Monomial-like codes

    CERN Document Server

    Martinez-Moro, Edgar; Ozbudak, Ferruh; Szabo, Steve

    2010-01-01

    As a generalization of cyclic codes of length p^s over F_{p^a}, we study n-dimensional cyclic codes of length p^{s_1} × ... × p^{s_n} over F_{p^a} generated by a single "monomial". Namely, we study multi-variable cyclic codes of the form <x_1^{i_1} ... x_n^{i_n}> in F_{p^a}[x_1, ..., x_n] / <x_1^{p^{s_1}} - 1, ..., x_n^{p^{s_n}} - 1>. We call such codes monomial-like codes. We show that these codes arise from the product of certain single-variable codes and we determine their minimum Hamming distance. We determine the dual of monomial-like codes, yielding a parity check matrix. We also present an alternative way of constructing a parity check matrix using the Hasse derivative. We study the weight hierarchy of certain monomial-like codes. We simplify an expression that gives us the weight hierarchy of these codes.

  18. Assessment of the computer code COBRA/CFTL

    Energy Technology Data Exchange (ETDEWEB)

    Baxi, C. B.; Burhop, C. J.

    1981-07-01

    The COBRA/CFTL code has been developed by Oak Ridge National Laboratory (ORNL) for thermal-hydraulic analysis of simulated gas-cooled fast breeder reactor (GCFR) core assemblies to be tested in the core flow test loop (CFTL). The COBRA/CFTL code was obtained by modifying the General Atomic code COBRA*GCFR. This report discusses these modifications and compares results from the two codes for three cases representing conditions from fully rough turbulent flow to laminar flow. Case 1 represented fully rough turbulent flow in the bundle. Cases 2 and 3 represented the laminar and transition flow regimes. The required input for the COBRA/CFTL code, a sample problem input/output, and the code listing are included in the Appendices.

  19. Simulation of high-energy radiation belt electron fluxes using NARMAX-VERB coupled codes.

    Science.gov (United States)

    Pakhotin, I P; Drozdov, A Y; Shprits, Y Y; Boynton, R J; Subbotin, D A; Balikhin, M A

    2014-10-01

    This study presents a fusion of data-driven and physics-driven methodologies of energetic electron flux forecasting in the outer radiation belt. Data-driven NARMAX (Nonlinear AutoRegressive Moving Averages with eXogenous inputs) model predictions for geosynchronous orbit fluxes have been used as an outer boundary condition to drive the physics-based Versatile Electron Radiation Belt (VERB) code, to simulate energetic electron fluxes in the outer radiation belt environment. The coupled system has been tested for three extended time periods totalling several weeks of observations. The time periods involved periods of quiet, moderate, and strong geomagnetic activity and captured a range of dynamics typical of the radiation belts. The model has successfully simulated energetic electron fluxes for various magnetospheric conditions. Physical mechanisms that may be responsible for the discrepancies between the model results and observations are discussed.

  20. HIBRA: A computer code for heavy ion binary reaction analysis employing ion track detectors

    Science.gov (United States)

    Jamil, Khalid; Ahmad, Siraj-ul-Islam; Manzoor, Shahid

    2016-01-01

    Collisions of heavy ions often result in the production of only two reaction products. Studying heavy ions with ion track detectors allows experimentalists to observe the track length in the plane of the detector, the depth of the tracks in the volume of the detector, and the angles between the tracks on the detector surface, all known as track parameters. How can these be converted into useful physics parameters such as the masses, energies, and momenta of the reaction products and the Q-value of the reaction? This paper describes (a) the model used to analyze binary reactions in terms of the measured etched track parameters of the reaction products recorded in ion track detectors, and (b) the code developed for computing useful physics parameters for fast and accurate analysis of a large number of binary events. A computer code, HIBRA (Heavy Ion Binary Reaction Analysis), has been developed in both the C++ and FORTRAN programming languages. It has been tested on binary reactions from 12.5 MeV/u 84Kr ions incident upon a natural uranium target deposited on a mica ion track detector. The HIBRA code can be employed with any ion track detector for which a range-velocity relation is available, including the widely used CR-39 ion track detectors. This paper provides the source code of HIBRA in C++ along with input and output data to test the program.
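    The final step of such an analysis can be sketched generically: once a range-velocity (range-energy) relation has turned track lengths into energies, the Q-value and a momentum-balance consistency check follow from non-relativistic two-body kinematics. This is an illustrative sketch, not the actual HIBRA algorithm.

```python
# Illustrative final step of a binary-reaction analysis (not the actual HIBRA
# algorithm): with product energies already obtained from a range-velocity
# relation, compute the Q-value and check transverse momentum balance.
import numpy as np

AMU_MEV = 931.494  # atomic mass unit in MeV/c^2

def q_value(e_beam, e1, e2):
    """Q = total kinetic energy of the products minus beam energy (MeV)."""
    return (e1 + e2) - e_beam

def momentum(mass_u, energy_mev):
    """Non-relativistic momentum in MeV/c for a mass given in amu."""
    return np.sqrt(2.0 * mass_u * AMU_MEV * energy_mev)

def transverse_imbalance(m1, e1, theta1, m2, e2, theta2):
    """Should be ~0 for a true binary event (angles in radians from the beam)."""
    return momentum(m1, e1) * np.sin(theta1) - momentum(m2, e2) * np.sin(theta2)
```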

  1. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status, independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretical analysis of this approach and of the one typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  2. On the Relationship Between Input and Interaction Psycholinguistic, Cognitive, and Ecological Perspectives in SLA

    Directory of Open Access Journals (Sweden)

    Parisa Daftarifard

    2010-09-01

    Full Text Available Input is one of the most important elements in the process of second language acquisition (SLA). As Gass (1997) points out, second language learning simply cannot take place without input of some sort. Since then, specific issues have been actively debated in SLA concerning the nature of input and input processing, such as the amount of input that is necessary for language acquisition, the various attributes of input and how they may facilitate or hinder acquisition, and instructional methods that may enhance input. In this paper, four hypotheses and paradigms of input processing are described. It is argued that although the three paradigms of triggering, the input hypothesis, and the interaction hypothesis have been widely used and accepted, they lack the ability to account for the dynamic nature of language. Affordance, on the other hand, can account for such a nature of language.
    Affordance therefore replaces fixed-eye vision with mobile-eye vision; an active learner establishes relationships with and within the environment. The learner can directly perceive and act on the ambient language without having to route everything through a pre-existing mental apparatus of schemata and representations, whereas this is not true in the fixed-code theory. In the fixed-code theory of communication it is assumed that ready-made messages are coded at one end, transmitted,
    and then decoded in identical form at the other end. We need in its place a constructivist theory of message construction and interpretation.

  3. FLOWTRAN-TF code description

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  4. FLOWTRAN-TF code description

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.P. (ed.)

    1991-09-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  5. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  6. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-07-01

    Description of the theory and user's manual for the MELON-3, CONCON and CONAXI codes, which are part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos, and the transport factors. (Author) 7 refs.

  7. The effect of input perturbations on swimming performance

    Science.gov (United States)

    Lehn, Andrea M.; Thornycroft, Patrick J. M.; Lauder, George V.; Leftwich, Megan C.

    2014-11-01

    The influence of flexibility and fluid characteristics on the hydrodynamics of swimming has been investigated for a range of experimental systems. One investigative method is to use reduced-order physical models--pitching and heaving hydrofoils. Typically, a smooth, periodic, input signal is used to control foil motion in experiments that explore fundamental factors (aspect ratio, shape, etc.) in swimming performance. However, the significance of non-smooth input signals in undulating swimmers is non-trivial. Instead of varying external properties, we study the impact of perturbed input motions on swimming performance. A smooth sinusoid is overlaid with high frequency, low amplitude perturbations as the input signal for a heaving panel in a closed loop flow tank. Specifically, 1 cm heave amplitude base sinusoids are added to 0.1 cm heave perturbations with frequencies ranging from 0.5 to 13 Hz. Two thin foils with different stiffness are flapped with the combined input signals in addition to the individual high heave and low heave signals that were added to create the combined inputs. Results demonstrate that perturbations can increase thrust and that adding the perturbed signal to a base frequency alters wake structure.
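    A sketch of how the combined input signal described above might be constructed is shown below. The sample rate, duration, and 1 Hz base frequency are assumptions; the 1 cm and 0.1 cm amplitudes and the 0.5-13 Hz perturbation range come from the record.

```python
# Sketch of the combined input signal: a 1 cm heave sinusoid overlaid with a
# 0.1 cm perturbation drawn from the 0.5-13 Hz range described above. The
# sample rate, duration, and 1 Hz base frequency are assumptions.
import numpy as np

fs = 1000.0                     # sample rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)

base = 1.0 * np.sin(2.0 * np.pi * 1.0 * t)     # 1 cm amplitude base motion
perturb = 0.1 * np.sin(2.0 * np.pi * 6.5 * t)  # 0.1 cm at, e.g., 6.5 Hz

heave_command = base + perturb  # combined signal driving the heaving panel
```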

  8. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  9. Analysis of MELCOR code structure

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Ha; Park, Sun Hee

    2000-04-01

    MELCOR executes in two parts. The first is the MELGEN program, in which most of the input is specified, processed, and checked. The second part is the MELCOR program itself, which advances the problem through time based on the database generated by MELGEN and any additional MELCOR input. In particular, MELCOR execution involves two steps: (1) a setup mode in MEXSET, during which the database is read from the restart file and any additional input is processed, and (2) a run mode in MEXRUN, which advances the simulation through time, updating the time-dependent portion of the database each cycle. MELGEN and MELCOR share a structured and modular architecture that facilitates the incorporation of additional or alternative phenomenological models. This structure consists of four primary levels: the executive level, the database manager routine level, the package level, and the utility level. MELCOR is composed of 24 different packages, each of which models a different portion of the accident phenomenology or program control. To identify the relation of MELCOR subroutines to packages, the first two or three letters of a package's name are duplicated in the names of its subroutines. The same rule applies to the naming of common blocks. Data flows and the specific subroutines in MELGEN and MELCOR are analyzed by function according to the four-level hierarchy, for model improvement and replacement during the integral code development project.

  10. An Efficient Method for Verifying Gyrokinetic Microstability Codes

    Science.gov (United States)

    Bravenec, R.; Candy, J.; Dorland, W.; Holland, C.

    2009-11-01

    Benchmarks for gyrokinetic microstability codes can be developed through successful "apples-to-apples" comparisons among them. Unlike previous efforts, we perform the comparisons for actual discharges, rendering the verification efforts relevant to existing experiments and future devices (ITER). The process requires i) assembling the experimental analyses at multiple times, radii, discharges, and devices, ii) creating the input files, ensuring that the input parameters are faithfully translated code-to-code, iii) running the codes, and iv) comparing the results, all in an organized fashion. The purpose of this work is to automate this process as much as possible: At present, a Python routine is used to generate and organize GYRO input files from TRANSP or ONETWO analyses. Another routine translates the GYRO input files into GS2 input files. (Translation software for other codes has not yet been written.) Other Python codes submit the multiple GYRO and GS2 jobs, organize the results, and collect them into a table suitable for plotting. (These separate Python routines could easily be consolidated.) An example of the process -- a linear comparison between GYRO and GS2 for a DIII-D discharge at multiple radii -- will be presented.
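    The code-to-code translation step can be sketched as a mapping table between parameter names and normalizations. The entries below are illustrative placeholders, not verified GYRO or GS2 parameter names or conversion factors.

```python
# Hypothetical sketch of the code-to-code translation step: a mapping table
# carries each parameter to its counterpart name and normalization. The
# entries are illustrative placeholders, not verified GYRO/GS2 names.
GYRO_TO_GS2 = {
    # "gyro_name": ("gs2_name", conversion_factor)
    "SAFETY_FACTOR": ("qinp", 1.0),
    "SHEAR":         ("shat", 1.0),
    "ASPECT_RATIO":  ("eps",  1.0),
}

def translate(gyro_params: dict) -> dict:
    """Map a dict of GYRO-style inputs to GS2-style inputs."""
    gs2_params = {}
    for gyro_name, value in gyro_params.items():
        if gyro_name not in GYRO_TO_GS2:
            continue  # parameters without a counterpart need expert review
        gs2_name, factor = GYRO_TO_GS2[gyro_name]
        gs2_params[gs2_name] = value * factor
    return gs2_params
```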

  11. Software Abstractions and Methodologies for HPC Simulation Codes on Future Architectures

    Directory of Open Access Journals (Sweden)

    Anshu Dubey

    2014-07-01

    Full Text Available Simulations with multi-physics modeling have become crucial to many science and engineering fields, and multi-physics-capable scientific software is as important to these fields as instruments and facilities are to the experimental sciences. The current generation of mature multi-physics codes would have sustainably served their target communities with a modest amount of ongoing investment in enhancing capabilities. However, the revolution occurring in hardware architecture has made it necessary to tackle parallelism and performance management in these codes at multiple levels. The requirements of the various levels are often at cross-purposes with one another, and therefore hugely complicate the software design. All of these considerations make it essential to approach this challenge cooperatively as a community. We conducted a series of workshops under an NSF-SI2 conceptualization grant to get input from various stakeholders and to identify broad approaches that might lead to a solution. In this position paper we detail the major concerns articulated by the application code developers, and the emerging trends in utilization of programming abstractions that we found through these workshops.

  12. Physical Layer Network Coding for FSK Systems

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Krigslund, Rasmus; Popovski, Petar

    2009-01-01

    In this work we extend the existing concept of De-Noise and Forward (DNF) for bidirectional relaying to utilise non-coherent modulation schemes. This is done in order to avoid the requirement of phase tracking in coherent detection. As an example BFSK is considered, and through analysis the deci...

  13. Statistical physics, optimization and source coding

    Indian Academy of Sciences (India)

    direction of their bias and perform the resulting simplification of the factor graph. Steps 1 to 5 are repeated until a full assignment is produced, or until convergence is lost or a paramagnetic state is reached (all the local biases vanish), in which case the subproblem that is left is passed to some local search heuristic, like...

  14. The SHIELD11 Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W

    2005-02-02

    SHIELD11 is a computer code for performing shielding analyses around a high-energy electron accelerator. It makes use of simple analytic expressions for the production and attenuation of photons and neutrons resulting from electron beams striking thick targets, such as dumps, stoppers, collimators, and other beam devices. The formulae in SHIELD11 are somewhat unpretentious in that they are based on the extrapolation (scaling) of experimental data using rather simple physics ideas. Because these scaling methods have only been tested over a rather limited set of conditions--namely, 1-15 GeV electrons striking 10-20 radiation lengths of iron--a certain amount of care and judgment must be exercised whenever SHIELD11 is used. Nevertheless, for many years these scaling methods have been applied rather successfully to a large variety of problems at SLAC, as well as at other laboratories throughout the world, and the SHIELD11 code has been found to be a fast and convenient tool. In this paper we present, without extensive theoretical justification or experimental verification, the five-component model on which the SHIELD11 code is based. Our intent is to demonstrate how to use the code by means of a few simple examples. References are provided that are considered to be essential for a full understanding of the model. The code itself contains many comments to provide some guidance for the informed user, who may wish to improve on the model.

  15. Repositioning Recitation Input in College English Teaching

    Science.gov (United States)

    Xu, Qing

    2009-01-01

    This paper discusses, on the basis of second language acquisition theory, how recitation input helps overcome negative influences on learning, and confirms the important role that recitation input plays in improving college students' oral and written English.

  16. Modelling of CTIX with the DMC Code

    Science.gov (United States)

    Horton, R. D.; Hwang, D. Q.; McLean, H. S.; Terry, S. D.; Evans, R. W.

    1997-11-01

    The DMC (Davis Magnetohydrodynamic Code) is an arbitrary Lagrangian-Eulerian (ALE) code designed for the modelling of moving plasmas in axially symmetric geometry. DMC is a further development of the Trac-II code, with modifications made for interactive graphical operation on modern personal computers. Results from the DMC code will be presented comparing experimental data from the CTIX compact-torus accelerator with performance predicted by DMC. Simulated multichannel optical detector data produced by DMC will be used as an input to a maximum-entropy tomographic reconstruction algorithm, allowing the performance of the algorithm to be tested with known, realistic plasma profiles. The finite-element algorithms used in DMC will be described in detail, and planned extensions to the capabilities of DMC will be presented.

  17. Input and output constraints affecting irrigation development

    Science.gov (United States)

    Schramm, G.

    1981-05-01

    In many of the developing countries the expansion of irrigated agriculture is used as a major development tool for bringing about increases in agricultural output, rural economic growth, and income distribution. Apart from constraints imposed by water availability, the major limitations to any acceleration of such programs are usually thought to be costs and financial resources. However, as is shown on the basis of empirical data drawn from Mexico, in reality the feasibility and effectiveness of such development programs are even more constrained by the lack of specialized physical and human factors on the input side and by market limitations on the output side. On the input side, the limited availability of complementary factors such as, for example, truly functioning credit systems for small-scale farmers or effective agricultural extension services imposes long-term constraints on development. On the output side the limited availability, high risk, and relatively slow growth of markets for high-value crops sharply reduce the usually hoped-for and projected profitable crop mix that would warrant the frequently high costs of irrigation investments. Three conclusions are drawn: (1) Factors in limited supply have to be shadow-priced to reflect their high opportunity costs in alternative uses. (2) Re-allocation of financial resources from the immediate construction of projects to a longer-term increase in the supply of scarce, highly-trained manpower resources is necessary in order to optimize development over time. (3) Inclusion of high-value, high-income-producing crops in the benefit-cost analysis of new projects is inappropriate if these crops could potentially be grown in already existing projects.

  18. Nuclear physics inputs needed for geo-neutrino studies

    Energy Technology Data Exchange (ETDEWEB)

    Bellini, G [Dipartimento di Fisica, Universita degli Studi di Milano, I-20100 Milano (Italy); Fiorentini, G [Dipartimento di Fisica, Universita degli Studi di Ferrara, I-44100 Ferrara (Italy); Ianni, A [INFN, Laboratori Nazionali del Gran Sasso, I-67010 Assergi (Italy); Lissia, M [Istituto Nazionale di Fisica Nucleare, Sezione di Cagliari, I-09042 Monserrato (Italy); Mantovani, F [Istituto Nazionale di Fisica Nucleare, Sezione di Ferrara, I-44100 Ferrara (Italy); Smirnov, O [Joint Institute for Nuclear Research, 141980, Dubna, Moscow region (Russian Federation)], E-mail: fabio.mantovani@unisi.it

    2008-07-15

    Geo-neutrino studies are based on theoretical estimates of geo-neutrino spectra. We propose a method for a direct measurement of the energy distribution of antineutrinos from decays of long-lived radioactive isotopes.

  19. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
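    The block-interleaving idea described above can be sketched with the simplest MDS code, a single parity check (SPC) code: rows are codewords, symbols are transmitted column by column, and a burst of up to `depth` consecutive erasures then touches each codeword at most once. A minimal sketch:

```python
# Minimal sketch of block interleaving with the simplest MDS code, a single
# parity check (SPC) code. Rows are codewords; symbols are sent column by
# column, so a burst of up to `depth` erasures hits each row at most once.
import numpy as np

def encode_interleave(data_rows):
    """data_rows: (depth, k) array of bits -> column-major symbol stream."""
    parity = np.bitwise_xor.reduce(data_rows, axis=1, keepdims=True)
    block = np.hstack([data_rows, parity])   # (depth, k+1) SPC codewords
    return block.T.reshape(-1), block.shape

def recover(stream, shape, erased):
    """Rebuild the block, treating `erased` stream indices as lost symbols."""
    depth = shape[0]
    block = stream.reshape(shape[::-1]).T.astype(float)
    for idx in erased:
        block[idx % depth, idx // depth] = np.nan   # column-major position
    for r in range(depth):                  # each row has <= 1 erasure when
        missing = np.flatnonzero(np.isnan(block[r]))  # burst length <= depth
        if missing.size == 1:
            known = np.delete(block[r], missing[0]).astype(int)
            block[r, missing[0]] = np.bitwise_xor.reduce(known)
    return block.astype(int)

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(8, 10))                 # depth 8, k = 10
stream, shape = encode_interleave(data)
decoded = recover(stream, shape, erased=range(20, 28))  # 8-symbol burst
print(np.array_equal(decoded[:, :10], data))            # True
```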

  20. 7 CFR 3430.15 - Stakeholder input.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Stakeholder input. 3430.15 Section 3430.15... Stakeholder input. Section 103(c)(2) of the Agricultural Research, Extension, and Education Reform Act of 1998 (AREERA) (7 U.S.C. 7613(c)(2)) requires the Secretary to solicit and consider input on each program RFA...

  1. Rapid Airplane Parametric Input Design (RAPID)

    Science.gov (United States)

    Smith, Robert E.

    1995-01-01

    RAPID is a methodology and software system to define a class of airplane configurations and directly evaluate surface grids, volume grids, and grid sensitivity on and about the configurations. A distinguishing characteristic which separates RAPID from other airplane surface modellers is that the output grids and grid sensitivity are directly applicable in CFD analysis. A small set of design parameters and grid control parameters govern the process which is incorporated into interactive software for 'real time' visual analysis and into batch software for the application of optimization technology. The computed surface grids and volume grids are suitable for a wide range of Computational Fluid Dynamics (CFD) simulation. The general airplane configuration has wing, fuselage, horizontal tail, and vertical tail components. The double-delta wing and tail components are manifested by solving a fourth order partial differential equation (PDE) subject to Dirichlet and Neumann boundary conditions. The design parameters are incorporated into the boundary conditions and therefore govern the shapes of the surfaces. The PDE solution yields a smooth transition between boundaries. Surface grids suitable for CFD calculation are created by establishing an H-type topology about the configuration and incorporating grid spacing functions in the PDE equation for the lifting components and the fuselage definition equations. User specified grid parameters govern the location and degree of grid concentration. A two-block volume grid about a configuration is calculated using the Control Point Form (CPF) technique. The interactive software, which runs on Silicon Graphics IRIS workstations, allows design parameters to be continuously varied and the resulting surface grid to be observed in real time. The batch software computes both the surface and volume grids and also computes the sensitivity of the output grid with respect to the input design parameters by applying the precompiler tool

  2. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term "code package" is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist-user to solve technical problems in the area for which the material was designed. In general, a "code package" consists of written material--reports, instructions, flow charts, listings of data, and other useful material--and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data), and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  3. Technology Infusion of CodeSonar into the Space Network Ground Segment

    Science.gov (United States)

    Benson, Markland J.

    2009-01-01

    This slide presentation reviews the applicability of CodeSonar to the Space Network software. CodeSonar is a commercial off-the-shelf system that analyzes programs written in C, C++, or Ada for defects in the code. Software engineers use CodeSonar results as an input to the existing source code inspection process. The study focuses on large-scale software developed using formal processes. The systems studied are mission critical in nature, but some use commodity computer systems.

  4. Development of a code clone search tool for open source repositories

    OpenAIRE

    Xia, Pei; Yoshida, Norihiro; Manabe, Yuki; Inoue, Katsuro

    2011-01-01

    Finding code clones in open source systems is one of the important and demanding features for efficient and safe reuse of existing open source software. In this paper, we propose a novel search model, open code clone search, to explore code clones in open source repositories on the Internet. Based on this search model, we have designed and implemented a prototype system named Open CCFinder. This system takes a query code fragment as its input and returns the code fragments containing the cod...

  5. Thermal Calculations Of Input Coupler For Erl Injector

    CERN Document Server

    Sobenin, N P; Bogdanovich, B Yu; Kaminsky, V I; Krasnov, A A; Lalayan, M V; Veshcherevich, V G; Zavadtsev, A A; Zavadtsev, D A

    2004-01-01

    The thermal calculation results for the input coupler for ERL injector cavities are presented [1]. A twin coaxial coupler of TTF-3 type was chosen for 2×75 kW RF power transfer. The TTF-3 coupler was intended for high pulse power but not high average power transmission, so revisions to its design were proposed. The new coupler configuration limits the thermal leakage to no more than 0.2 W at a temperature of 2.0 K, 2 W at 4.0 K, and 50 W at 80 K. Design revisions were made at the "cold" and "warm" bellows; in particular, separating the bellows was proposed in order to install an additional heat sink. A coupler configuration with a "warm" window of choke type was examined. It provides mechanical uncoupling of the input waveguide and the ceramic insulator. Electrodynamics simulations were carried out with the MicroWaveStudio and HFSS codes, and the thermal analysis was made using ANSYS.

  6. TIPONLINE Code Table

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Coded items are entered in the tiponline data entry program. The codes and their explanations are necessary in order to use the data.

  7. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  8. Adaptation to sensory input tunes visual cortex to criticality

    Science.gov (United States)

    Shew, Woodrow L.; Clawson, Wesley P.; Pobst, Jeff; Karimipanah, Yahya; Wright, Nathaniel C.; Wessel, Ralf

    2015-08-01

    A long-standing hypothesis at the interface of physics and neuroscience is that neural networks self-organize to the critical point of a phase transition, thereby optimizing aspects of sensory information processing. This idea is partially supported by strong evidence for critical dynamics observed in the cerebral cortex, but the impact of sensory input on these dynamics is largely unknown. Thus, the foundations of this hypothesis--the self-organization process and how it manifests during strong sensory input--remain unstudied experimentally. Here we show in visual cortex and in a computational model that strong sensory input initially elicits cortical network dynamics that are not critical, but adaptive changes in the network rapidly tune the system to criticality. This conclusion is based on observations of multifaceted scaling laws predicted to occur at criticality. Our findings establish sensory adaptation as a self-organizing mechanism that maintains criticality in visual cortex during sensory information processing.

  9. VOLVOF: An update of the CFD code, SOLA-VOF

    Energy Technology Data Exchange (ETDEWEB)

    Park, J.E.

    1999-12-14

    The SOLA-VOF code developed by the T-3 (Theoretical Physics, Fluids) group at the Los Alamos National Laboratory (LANL) has been extensively modified at the Oak Ridge National Laboratory (ORNL). The modified and improved version has been dubbed "VOLVOF", to acknowledge the state of Tennessee, the Volunteer State and home of ORNL. Modifications include generalization of boundary conditions, additional flexibility in setting up problems, addition of a problem interruption and restart capability, segregation of graphics functions to allow utilization of modern commercial graphics programs, and addition of time and date stamps to output files. Also, the pressure iteration has been restructured to exploit the much greater system memory available on modern workstations and personal computers. A solution monitoring capability has been added to utilize the multi-tasking capability of modern computer operating systems. These changes are documented in the following report. NAMELIST input variables are defined, and input files and the resulting output are given for two test problems. Modification and documentation of a working technical computer program is almost never complete. This is certainly true for the present effort. However, the impending retirement of the writer dictates that the current configuration and capability be reported.

  10. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

    A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile time and computations taking place at run time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack machine, the authors show how to generate code for the run-time computations and still perform the compile-time computations.

  11. Linear Error Correcting Codes with Anytime Reliability

    CERN Document Server

    Sukhavasi, Ravi Teja

    2011-01-01

    We consider rate $R=\frac{k}{n}$ causal linear codes that map a sequence of $k$-dimensional binary vectors $\{b_t\}_{t=0}^\infty$ to a sequence of $n$-dimensional binary vectors $\{c_t\}_{t=0}^\infty$, such that each $c_t$ is a function of $\{b_\tau\}_{\tau=0}^t$. Such a code is called anytime reliable, for a particular binary-input memoryless channel, if at each time instant $t$, and for all delays $d \geq d_o$, the probability of error $P(\hat{b}_{t-d|t}$ ...

  12. Physics Verification Overview

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-12

    The purpose of the verification project is to establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); to evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and to develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.

  13. Physical physics

    CERN Multimedia

    Schulman, Mark

    2006-01-01

    "Protons, electrons, positrons, quarks, gluons, muons, shmuons! I should have paid better attention to my high scholl physics teacher. If I had, maybe I could have understood even a fration of what Israeli particle physicist Giora Mikenberg was talking about when explaining his work on the world's largest science experiment." (2 pages)

  14. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application bring new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial intervention than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version

  15. Development of the SPACE code for nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sang Jun [Korea Electric Power Corporation Research Institute, Daejeon (Korea, Republic of); Park, Chan Eok [KEPCO Engineering and Construction Company Inc, Daejeon (Korea, Republic of); Kim, Kyung Doo [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Ban, Chang Hwan [Korea Nuclear Fuel, Daejeon (Korea, Republic of)

    2011-02-15

    The Korean nuclear industry is developing a thermal-hydraulic analysis code for the safety analysis of pressurized water reactors (PWRs). The new code is called the Safety and Performance Analysis Code for Nuclear Power Plants (SPACE). The SPACE code adopts advanced physical modeling of two-phase flows, mainly two-fluid three-field models comprising gas, continuous liquid, and droplet fields, and has the capability to simulate 3D effects by the use of structured and/or nonstructured meshes. The programming language for the SPACE code is C++, for object-oriented code architecture. The SPACE code will replace outdated vendor-supplied codes and will be used for the safety analysis of operating PWRs and the design of advanced reactors. This paper describes the overall features of the SPACE code and shows the code assessment results for several conceptual and separate effect test problems.

  16. ARC Code TI: ROC Curve Code Augmentation

    Data.gov (United States)

    National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of ROC...

  17. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  18. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  19. Scalable L-infinite coding of meshes.

    Science.gov (United States)

    Munteanu, Adrian; Cernea, Dan C; Alecu, Alin; Cornelis, Jan; Schelkens, Peter

    2010-01-01

    The paper investigates the novel concept of local-error control in mesh geometry encoding. In contrast to traditional mesh-coding systems that use the mean-square error as target distortion metric, this paper proposes a new L-infinite mesh-coding approach, for which the target distortion metric is the L-infinite distortion. In this context, a novel wavelet-based L-infinite-constrained coding approach for meshes is proposed, which ensures that the maximum error between the vertex positions in the original and decoded meshes is lower than a given upper bound. Furthermore, the proposed system achieves scalability in L-infinite sense, that is, any decoding of the input stream will correspond to a perfectly predictable L-infinite distortion upper bound. An instantiation of the proposed L-infinite-coding approach is demonstrated for MESHGRID, which is a scalable 3D object encoding system, part of MPEG-4 AFX. In this context, the advantages of scalable L-infinite coding over L-2-oriented coding are experimentally demonstrated. One concludes that the proposed L-infinite mesh-coding approach guarantees an upper bound on the local error in the decoded mesh, it enables a fast real-time implementation of the rate allocation, and it preserves all the scalability features and animation capabilities of the employed scalable mesh codec.
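    The L-infinite guarantee can be illustrated in miniature (this is not the MESHGRID codec): uniform scalar quantization of vertex coordinates with step delta bounds the maximum per-vertex error by delta/2, which is exactly the kind of predictable upper bound that L2-oriented coding does not provide.

```python
# Miniature illustration of an L-infinite guarantee (not the MESHGRID codec):
# uniform scalar quantization with step `delta` bounds the maximum per-vertex
# error by delta / 2, a predictable upper bound of the kind described above.
import numpy as np

rng = np.random.default_rng(0)
vertices = rng.uniform(-1.0, 1.0, size=(10000, 3))  # synthetic mesh vertices

delta = 1.0 / 256.0                           # quantization step
decoded = np.round(vertices / delta) * delta  # quantize, then dequantize

linf = np.max(np.abs(decoded - vertices))
print(linf <= delta / 2.0)                    # True: guaranteed local error
```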

  20. CBP PHASE I CODE INTEGRATION

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    developed to link GoldSim with external codes (Smith III et al. 2010). The DLL uses a list of code inputs provided by GoldSim to create an input file for the external application, runs the external code, and returns a list of outputs (read from files created by the external application) back to GoldSim. In this way GoldSim provides: (1) a unified user interface to the applications, (2) the capability of coupling selected codes in a synergistic manner, and (3) the capability of performing probabilistic uncertainty analysis with the codes. GoldSim is made available by the GoldSim Technology Group as a free "Player" version that allows running, but not editing, GoldSim models. The Player version makes the software readily available to the wider community of users who wish to use the CBP application but do not have a GoldSim license.
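    The coupling pattern the record describes (write an input file from the driver's inputs, run the external application, read its outputs back) can be sketched as follows; the file names, format, and executable are hypothetical.

```python
# Sketch of the coupling pattern implemented by the DLL: write the external
# code's input file from the driver's inputs, run it, and read the outputs
# back. The file names, format, and executable here are hypothetical.
import subprocess

def run_external_code(inputs: dict) -> dict:
    with open("external.inp", "w") as f:           # hypothetical input file
        for name, value in inputs.items():
            f.write(f"{name} = {value}\n")

    # Hypothetical executable; check=True surfaces failures to the driver.
    subprocess.run(["external_code", "external.inp"], check=True)

    outputs = {}
    with open("external.out") as f:                # hypothetical output file
        for line in f:
            name, value = line.split("=")
            outputs[name.strip()] = float(value)
    return outputs
```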

  1. Input modelling of ASSERT-PV V2R8M1 for RUFIC fuel bundle

    Energy Technology Data Exchange (ETDEWEB)

    Park, Joo Hwan; Suk, Ho Chun

    2001-02-01

    This report describes the input modelling for the subchannel analysis of the CANFLEX-RU (RUFIC) fuel bundle, which has been developed as an advanced fuel bundle for the CANDU-6 reactor, using the ASSERT-PV V2R8M1 code. The execution file of the ASSERT-PV V2R8M1 code was recently transferred from AECL under the JRDC agreement between KAERI and AECL. ASSERT-PV V2R8M1, which is quite different from the COBRA-IV-i code, has been developed for the thermalhydraulic analysis of CANDU-6 fuel channels by the subchannel analysis method, and has been updated so that the 43-element CANDU fuel geometry can be applied. Hence, the ASSERT code can be applied to the subchannel analysis of the RUFIC fuel bundle. The present report was prepared for the ASSERT input modelling of the RUFIC fuel bundle. Since the ASSERT results depend strongly on the user's input modelling, the calculation results may differ considerably among users' input models. The objective of the present report is to give a detailed description of the background information for the input data, which lends credibility to the calculation results.

  2. Expansion coding: Achieving the capacity of an AEN channel

    CERN Document Server

    Koyluoglu, O Ozan; Si, Hongbo; Vishwanath, Sriram

    2012-01-01

    A general method of coding over expansions is proposed, which allows one to reduce the highly non-trivial problem of coding over continuous channels to much simpler discrete ones. More specifically, the focus is on the additive exponential noise (AEN) channel, for which the (binary) expansion of the (exponential) noise random variable is considered. It is shown that the random variables in the expansion correspond to independent Bernoulli random variables. Thus, each of the expansion levels (of the underlying channel) corresponds to a binary symmetric channel (BSC), and the coding problem is reduced to coding over these parallel channels while satisfying the channel input constraint. This optimization formulation is stated as the achievable rate result, for which a specific choice of input distribution is shown to achieve a rate arbitrarily close to the channel capacity in the high-SNR regime. Remarkably, the scheme allows for low-complexity capacity-achieving codes for AEN channels, using...
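    The key fact cited above, that the binary-expansion levels of exponential noise behave as independent Bernoulli variables, can be checked numerically. The sketch below uses the level probabilities P(b_i = 1) = 1/(1 + e^{lam·2^i}) that follow from the memorylessness of the exponential distribution; it is a sanity check, not a proof.

```python
# Numerical sanity check (not a proof) of the fact used above: the bits in
# the binary expansion of an Exp(lam) variable are Bernoulli with
# P(b_i = 1) = 1 / (1 + exp(lam * 2**i)), one BSC-like level per bit.
import numpy as np

lam, n_samples = 1.0, 1_000_000
rng = np.random.default_rng(1)
x = rng.exponential(1.0 / lam, n_samples)

for i in range(-2, 3):  # a few expansion levels around the binary point
    bits = np.floor(x / 2.0**i).astype(np.int64) % 2
    print(f"level {i:+d}: empirical {bits.mean():.4f}  "
          f"predicted {1.0 / (1.0 + np.exp(lam * 2.0**i)):.4f}")
```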

  3. Development of an Input Model to MELCOR 1.8.5 for the Oskarshamn 3 BWR

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Lars [Lentek, Nykoeping (Sweden)

    2006-05-15

    An input model has been prepared for the code MELCOR 1.8.5 for the Swedish Oskarshamn 3 Boiling Water Reactor (O3). This report describes the modelling work and the various files which comprise the input deck. Input data are mainly based on original drawings and system descriptions made available by courtesy of OKG AB. Some primary system data were compared and checked against an O3 input file for the SCDAP/RELAP5 code that was used in the SARA project. Useful information was also obtained from the FSAR (Final Safety Analysis Report) for O3 and the SKI report '2003 Stoerningshandboken BWR'. The input models the O3 reactor in its current state with an operating power of 3300 MW{sub th}. One aim of this work is that the MELCOR input could also be used for power upgrading studies. All fuel assemblies are thus assumed to consist of Westinghouse Atom's new SVEA-96 Optima2 fuel. MELCOR is a severe accident code developed by Sandia National Laboratories under contract from the U.S. Nuclear Regulatory Commission (NRC). MELCOR is a successor to STCP (Source Term Code Package) and thus has a long evolutionary history. The input described here is adapted to version 1.8.5, the latest available when the work began. It was released in the year 2000, but a new version, 1.8.6, was distributed recently; conversion to the new version is recommended. (During the writing of this report still another code version, MELCOR 2.0, has been announced for release shortly.) In version 1.8.5 there is an option to describe the accident progression in the lower plenum and the melt-through of the reactor vessel bottom in more detail by use of the Bottom Head (BH) package, developed by Oak Ridge National Laboratory especially for BWRs. This is in addition to the ordinary MELCOR COR package. Since problems arose when running with the BH input, two versions of the O3 input deck were produced, a NONBH and a BH deck. The BH package is no longer a separate package in the new 1

  4. The series product for gaussian quantum input processes

    Science.gov (United States)

    Gough, John E.; James, Matthew R.

    2017-02-01

    We present a theory for connecting quantum Markov components into a network with quantum input processes in a Gaussian state (including thermal and squeezed). One would expect on physical grounds that the connection rules should be independent of the state of the input to the network. To compute statistical properties, we use a version of Wick's theorem involving fictitious vacuum fields (a Fock-space-based representation of the fields); while this aids computation and gives a rigorous formulation, the various representations need not be unitarily equivalent. In particular, a naive application of the connection rules would lead to the wrong answer. We establish the correct interconnection rules, and show that, while the quantum stochastic differential equations of motion display explicitly the covariances (thermal and squeezing parameters) of the Gaussian input fields, we can introduce the Wick-Stratonovich form, which leads to a way of writing these equations that does not depend on these covariances and so corresponds to the universal equations written in terms of formal quantum input processes. We show that a wholly consistent theory of quantum open systems in series can be developed in this way, and, as required physically, is universal and in particular representation-free.

  5. Using facilitators in mock codes: recasting the parts for success.

    Science.gov (United States)

    Cuda, S; Doerr, D; Gonzalez, M

    1999-01-01

    Members of the CHRISTUS Santa Rosa Children's Hospital staff development committee identified a need for a mock code program that would address a range of learning needs for nurses and other caregivers with varying levels of knowledge, skills, and experience. We implemented a mock code program using experienced caregivers, usually emergency room and pediatric intensive care RNs and respiratory therapists, to serve as facilitators to code participants during the mock code drills. Facilitators have the dual roles of teaching and guiding the code participants as well as evaluating performance. Code participants and facilitators benefit from the design of this program. Debriefing session input and written program evaluations show that code participants value the opportunity to practice their skills in a nonthreatening situation in which they receive immediate feedback as needed. Facilitators learn to teach and coach while strengthening their own code knowledge and skills at the same time. This mock code program serves as a unique way to include novice and experienced nurses in mock codes together. The knowledge, skills, and confidence of the code participants and the facilitators have matured. The design of the program allows for immediate teaching and learning where needed, as well as appropriate evaluation. Practice and equipment changes can be based on findings from the mock codes. The program is invaluable to patients, staff, and the hospital.

  6. Substitutions between Water and other Agricultural Inputs - An Empirical Analysis

    Science.gov (United States)

    Cai, X.; You, J.

    2005-12-01

    Increasing concerns about water availability, environmental water requirements and water quality have led to an increased importance of quantitative assessments of the substitution between water and other agricultural inputs at the margin for agricultural and environmental policy analysis. This paper explores the potential substitutions between water and other agricultural inputs in irrigated agriculture through an empirical study. The study includes (1) an analysis based on a crop production function for net substitution at the crop field and farm levels; and (2) a numerical study of gross substitution in the context of water allocation in river basins through an integrated hydrologic-economic river basin model. Along with the empirical analysis and numerical illustrations, we discuss several theoretical issues relevant to substitutions between water and other inputs, such as (1) selection of indicators of elasticity of substitution, depending on farmers' concerns about yield, production, or profit; (2) appropriateness of net or gross substitution analysis, which is relevant to the spatial scale of the analysis (field, district or region), as well as farmers' concerns; and (3) the output impact of substitutions. Water is both a natural resource and an economic input, and the constraints on water include those from both physical and socio-economic aspects. Therefore, the output impact of the substitution between water and other inputs should be extended from a pure economic concept to the context of integrated hydrologic-economic systems.

  7. CBM first-level event selector input interface

    Energy Technology Data Exchange (ETDEWEB)

    Hutter, Dirk [Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt (Germany); Collaboration: CBM-Collaboration

    2016-07-01

    The CBM First-level Event Selector (FLES) is the central event selection system of the upcoming CBM experiment at FAIR. Designed as a high-performance computing cluster, its task is the online analysis of the physics data at a total data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links into self-contained, overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows this task to be performed very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. Utilizing a custom FPGA board, it receives data via optical links, prepares them for subsequent timeslice building, and transfers the data via DMA to the PC's memory. An accompanying HDL module implements the front-end logic interface and FLES link protocol in the front-end FPGAs. Prototypes of all Input Interface components have been implemented and integrated into the FLES framework. In contrast to earlier prototypes, which included components to work without an FPGA layer between FLES and FEE, the structure matches the foreseen final setup. This allows the implementation and evaluation of the final CBM read-out chain. An overview of the FLES Input Interface as well as studies on system integration and system start-up are presented.
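
    A toy sketch of timeslice building as described (the data layout, units, and overlap policy are illustrative, not the FLES implementation): per-link microslices are grouped into intervals whose cores tile the time axis and which extend into the next core by a fixed overlap, so each timeslice is self-contained.

    ```python
    def build_timeslices(links, length, overlap):
        """Group per-link (timestamp, payload) pairs into overlapping
        processing intervals -- a sketch of timeslice building."""
        t_end = max(t for link in links for t, _ in link)
        timeslices, start = [], 0
        while start <= t_end:
            stop = start + length + overlap   # core interval plus overlap
            timeslices.append([[d for t, d in link if start <= t < stop]
                               for link in links])
            start += length                   # cores tile without gaps
        return timeslices

    links = [[(0, "a0"), (5, "a5"), (11, "a11")], [(2, "b2"), (9, "b9")]]
    print(build_timeslices(links, length=8, overlap=2))
    # the payload at t=9 lands in two consecutive timeslices (the overlap)
    ```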

  8. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    , Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important ... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical ...

  9. Secure Programming Cookbook for C and C++ Recipes for Cryptography, Authentication, Input Validation & More

    CERN Document Server

    Viega, John

    2009-01-01

    Secure Programming Cookbook for C and C++ is an important new resource for developers serious about writing secure code for Unix® (including Linux®) and Windows® environments. This essential code companion covers a wide range of topics, including safe initialization, access control, input validation, symmetric and public key cryptography, cryptographic hashes and MACs, authentication and key exchange, PKI, random numbers, and anti-tampering.

  10. The effect of cutting conditions on power inputs when machining

    Science.gov (United States)

    Petrushin, S. I.; Gruby, S. V.; Nosirsoda, Sh C.

    2016-08-01

    Any technological process involving modification of material properties or product form necessitates the consumption of a certain amount of power. When developing new technologies, one should weigh the benefits of their implementation against the power inputs they require. It is revealed that edge cutting machining procedures are the most energy-efficient among present-day forming procedures, such as physical and technical methods (including electrochemical, electroerosion, ultrasound, and laser processing), rapid prototyping technologies, etc. An expanded formula for the calculation of power inputs is deduced, which takes into consideration the mode of cutting together with the tip radius, the form of the replaceable multifaceted insert, and its wear. Taking as an example the cutting of graphite iron by assembled cutting tools with replaceable multifaceted inserts, the authors point to the better power efficiency of high-feed cutting in comparison with high-speed cutting.

  11. Noisy Network Coding

    CERN Document Server

    Lim, Sung Hoon; Gamal, Abbas El; Chung, Sae-Young

    2010-01-01

    A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress--forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner--Ziv binning as in previous compress-forward sch...

  12. Ionic balances of forest soils reciprocally transplanted among sites with varying pollution inputs.

    NARCIS (Netherlands)

    Raubuch, M.; Beese, F.; Bolger, T.; Anderson, J.M.E.; Berg, M.P.; Couteaux, M.-M.; Henderson, R.; Ineson, P.; McCarthy, F.; Palka, L.; Splatt, P.; Verhoef, H.A.; Willison, T.

    1998-01-01

    Forest ecosystems are currently being exposed to changes in chemical inputs and it is suggested that physical climate is also changing. A novel approach has been used to study the effects of ionic inputs and climatic conditions on forest soils by reciprocally exchanging lysimeters containing

  13. Code Properties from Holographic Geometries

    Directory of Open Access Journals (Sweden)

    Fernando Pastawski

    2017-05-01

    Full Text Available Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163, 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS/CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  14. Code Properties from Holographic Geometries

    Science.gov (United States)

    Pastawski, Fernando; Preskill, John

    2017-04-01

    Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163., 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS /CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  15. Dependency graph for code analysis on emerging architectures

    Energy Technology Data Exchange (ETDEWEB)

    Shashkov, Mikhail Jurievich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lipnikov, Konstantin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-08

    The directed acyclic graph (DAG) of dependencies is becoming the standard for modern multi-physics codes. The ideal DAG is the true block-scheme of a multi-physics code. It is therefore a convenient object for in-situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.
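
    The cost-analysis use of a DAG can be sketched in a few lines: treat each physics package as a task with a cost, and the critical path of the DAG bounds the in-situ step time. Task names and costs here are invented.

    ```python
    from graphlib import TopologicalSorter

    deps = {"eos": set(), "hydro": {"eos"}, "radiation": {"eos"},
            "coupling": {"hydro", "radiation"}}          # toy multi-physics DAG
    cost = {"eos": 1.0, "hydro": 4.0, "radiation": 6.0, "coupling": 2.0}

    finish = {}
    for task in TopologicalSorter(deps).static_order():  # topological order
        finish[task] = cost[task] + max((finish[d] for d in deps[task]),
                                        default=0.0)
    print("critical-path time:", max(finish.values()))   # -> 9.0
    ```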

  16. Advanced Code for Photocathode Design

    Energy Technology Data Exchange (ETDEWEB)

    Ives, Robert Lawrence [Calabazas Creek Research, Inc., San Mateo, CA (United States); Jensen, Kevin [Naval Research Lab. (NRL), Washington, DC (United States); Montgomery, Eric [Univ. of Maryland, College Park, MD (United States); Bui, Thuc [Calabazas Creek Research, Inc., San Mateo, CA (United States)

    2015-12-15

    The Phase I activity demonstrated that PhotoQE could be upgraded and modified to allow input using a graphical user interface. Calls to platform-dependent functions (e.g., IMSL) were removed, and Fortran77 components were rewritten for Fortran95 compliance. The subroutines, specifically the common block structures and shared data parameters, were reworked to allow the GUI to update material parameter data, and the system was targeted for desktop personal computer operation. The new structure overcomes the previously rigid and unmodifiable library structures by implementing new materials-library data sets and repositioning the library values in external files. Material data may originate from published literature or experimental measurements. Further optimization and restructuring would allow custom and specific emission models for beam codes that rely on parameterized photoemission algorithms. These would be based on simplified and parametric representations updated and extended from previous versions (e.g., Modified Fowler-Dubridge, Modified Three-Step, etc.).

  17. Benchmark studies of BOUT++ code and TPSMBI code on neutral transport during SMBI

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Y.H. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); University of Science and Technology of China, Hefei 230026 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Wang, Z.H., E-mail: zhwang@swip.ac.cn [Southwestern Institute of Physics, Chengdu 610041 (China); Guo, W., E-mail: wfguo@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China); Ren, Q.L. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Sun, A.P.; Xu, M.; Wang, A.K. [Southwestern Institute of Physics, Chengdu 610041 (China); Xiang, N. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031 (China); Center for Magnetic Fusion Theory, Chinese Academy of Sciences, Hefei 230031 (China)

    2017-06-09

    SMBI (supersonic molecule beam injection), which has been widely used in many tokamaks, plays an important role in plasma fuelling, density control and ELM mitigation in magnetic confinement plasma physics. The trans-neut module of the BOUT++ code is the only large-scale parallel 3D fluid code used to simulate the SMBI fueling process, while the TPSMBI (transport of supersonic molecule beam injection) code is a recently developed 1D fluid code for SMBI. In order to find a method to increase SMBI fueling efficiency in H-mode plasma, especially for ITER, it is important to first verify the codes. A benchmark study between the trans-neut module of the BOUT++ code and the TPSMBI code on the radial transport dynamics of neutrals during SMBI has been successfully achieved for the first time, in both slab and cylindrical coordinates. The simulation results from the trans-neut module of the BOUT++ code and the TPSMBI code agree very well with each other. Different upwind schemes have been compared for handling the sharp gradient front region during the inward propagation of SMBI, with a view to code stability. The influence of the WENO3 (third-order weighted essentially non-oscillatory) and the third-order upwind schemes on the benchmark results has also been discussed. - Highlights: • A 1D model of SMBI has been developed. • Benchmarks of the BOUT++ and TPSMBI codes have been completed for the first time. • The influence of the WENO3 and the third-order upwind schemes on the benchmark results has been discussed.
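
    For readers unfamiliar with the schemes being compared, a textbook first-order upwind step for 1D advection (a generic sketch, not the trans-neut module of BOUT++ or TPSMBI) shows why upwinding matters at a sharp front: the stencil always reaches into the direction the information comes from.

    ```python
    import numpy as np

    def upwind_step(u, a, dx, dt):
        """One first-order upwind step for u_t + a*u_x = 0 (periodic grid)."""
        if a >= 0:                                # information moves rightward
            return u - a * dt / dx * (u - np.roll(u, 1))
        return u - a * dt / dx * (np.roll(u, -1) - u)

    nx, a = 200, 1.0
    dx = 1.0 / nx
    dt = 0.5 * dx / abs(a)                        # CFL condition for stability
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    u = np.where(np.abs(x - 0.3) < 0.05, 1.0, 0.0)  # sharp front
    for _ in range(100):
        u = upwind_step(u, a, dx, dt)
    print("front has advected to x ~", x[np.argmax(u > 0.5)])
    ```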

  18. Input Method "Five Strokes": Advantages and Problems

    Directory of Open Access Journals (Sweden)

    Mateja PETROVČIČ

    2014-03-01

    Since the Five Stroke input method is easily accessible, simple to master and not pronunciation-based, we would expect students to use it to input unknown characters. The survey comprises students of Japanology and Sinology at the Department of Asian and African Studies and takes into consideration the grade of the respondents and therefore their knowledge of characters. This paper also discusses the impact of typeface on the accuracy of the input.

  19. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results will give physical insight into the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was made in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code, written in the MS Visual Basic programming language and running in the Windows environment of a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake and plume rise, if necessary. A user can also easily understand the concentration distribution on the map around the plant site as well as the output files. Also, methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author).

  20. Object-oriented Development of an All-electron Gaussian Basis DFT Code for Periodic Systems

    Science.gov (United States)

    Alford, John

    2005-03-01

    We report on the construction of an all-electron Gaussian-basis DFT code for systems periodic in one, two, and three dimensions. This is in part a reimplementation of algorithms in the serial code, GTOFF, which has been successfully applied to the study of crystalline solids, surfaces, and ultra-thin films. The current development is being carried out in an object-oriented parallel framework using C++ and MPI. Some rather special aspects of this code are the use of density fitting methodologies and the implementation of a generalized Ewald technique to do lattice summations of Coulomb integrals, which is typically more accurate than multipole methods. Important modules that have already been created will be described, for example, a flexible input parser and storage class that can parse and store generically tagged data (e.g. XML), an easy to use processor communication mechanism, and the integrals package. Though C++ is generally inferior to F77 in terms of optimization, we show that careful redesigning has allowed us to make up the run-time performance difference in the new code. Timing comparisons and scalability features will be presented. The purpose of this reconstruction is to facilitate the inclusion of new physics. Our goal is to study orbital currents using modified gaussian bases and external magnetic field effects in the weak and ultra-strong ( ˜10^5 T) field regimes. This work is supported by NSF-ITR DMR-0218957.

  1. ITS version 5.0 :the integrated TIGER series of coupled electron/Photon monte carlo transport codes with CAD geometry.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2005-09-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
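
    The order-independent keyword input style with defaults and internal error checking can be sketched generically (the keywords below are invented, not actual ITS input):

    ```python
    def parse_keyword_input(text, defaults):
        """Order-independent keyword input with defaults and error checking,
        in the spirit of the scheme described above (a sketch, not ITS)."""
        params = dict(defaults)
        for raw in text.splitlines():
            line = raw.split("!")[0].strip()      # allow trailing comments
            if not line:
                continue
            key, *val = line.split()
            if key.upper() not in params:
                raise ValueError(f"unknown keyword: {key}")
            params[key.upper()] = val[0] if val else True
        return params

    deck = "ENERGY 1.0e6   ! source energy\nPHOTONS\n"
    print(parse_keyword_input(deck,
          {"ENERGY": "1.0e3", "PHOTONS": False, "CUTOFF": "1.0e3"}))
    ```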

  2. Input characterization of a shock test structure.

    Energy Technology Data Exchange (ETDEWEB)

    Hylok, J. E. (Jeffrey E.); Groethe, M. A.; Maupin, R. D. (Ryan D.)

    2004-01-01

    Often in experimental work, measuring input forces and pressures is a difficult and sometimes impossible task. For one particular shock test article, its input sensitivity required a detailed measurement of the pressure input. This paper discusses the use of a surrogate mass mock test article to measure spatial and temporal variations of the shock input within and between experiments. Also discussed will be the challenges and solutions in making some of the high speed transient measurements. The current input characterization work appears as part of the second phase in an extensive model validation project. During the first phase, the system under analysis displayed sensitivities to the shock input's qualitative and quantitative (magnitude) characteristics. However, multiple shortcomings existed in the characterization of the input. First, the experimental measurements of the input were made on a significantly simplified structure only, and the spatial fidelity of the measurements was minimal. Second, the sensors used for the pressure measurement contained known errors that could not be fully quantified. Finally, the measurements examined only one input pressure path (from contact with the energetic material). Airblast levels from the energetic materials were unknown. The result was a large discrepancy between the energy content in the analysis and experiments.

  3. Low Complexity List Decoding for Polar Codes with Multiple CRC Codes

    Directory of Open Access Journals (Sweden)

    Jong-Hwan Kim

    2017-04-01

    Full Text Available Polar codes are the first family of error-correcting codes that provably achieve the capacity of symmetric binary-input discrete memoryless channels with low complexity. Since the development of polar codes, there have been many studies to improve their finite-length performance. As a result, polar codes are now adopted as a channel code for the control channel of the 5G new radio of the 3rd Generation Partnership Project. However, decoder implementation is one of the big practical problems, and low-complexity decoding has been studied. This paper addresses low-complexity successive cancellation list decoding for polar codes utilizing multiple cyclic redundancy check (CRC) codes. While some research uses multiple CRC codes to reduce memory and time complexity, we consider the operational complexity of decoding and reduce it by optimizing the CRC positions in combination with a modified decoding operation. As a result, the proposed scheme obtains not only a complexity reduction from early stopping of decoding, but also an additional reduction from the reduced number of decoding paths.
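
    The pruning idea behind multiple CRCs can be sketched as follows. In a real successive cancellation list decoder the check is interleaved with bit-by-bit path extension over the polar code; here a truncated CRC-32 merely stands in for the per-segment CRC, and the segment layout (data bits followed by 16 check bits) is invented.

    ```python
    import zlib

    def crc16(bits):
        """Toy per-segment check: CRC-32 of the byte-packed bits, truncated."""
        return zlib.crc32(bytes(bits)) & 0xFFFF

    def prune(paths, seg_start, seg_end):
        """Keep only candidate paths whose just-completed segment passes its
        CRC, so fewer paths are extended afterwards (the early-stopping
        saving described above)."""
        ok = []
        for bits in paths:                        # each path: list of 0/1 ints
            seg = bits[seg_start:seg_end - 16]
            check = int("".join(map(str, bits[seg_end - 16:seg_end])), 2)
            if crc16(seg) == check:
                ok.append(bits)
        return ok
    ```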

  4. Prospective coding in event representation.

    Science.gov (United States)

    Schütz-Bosbach, Simone; Prinz, Wolfgang

    2007-06-01

    A perceived event such as a visual stimulus in the external world and a to-be-produced event such as an intentional action are subserved by event representations. Event representations do not only contain information about present states but also about past and future states. Here we focus on the role of representing future states in event perception and generation (i.e., prospective coding). Relevant theoretical issues and paradigms are discussed. We suggest that the predictive power of the motor system may be exploited for prospective coding not only in producing but also in perceiving events. Predicting is more advantageous than simply reacting. Perceptual prediction allows us to select appropriate responses ahead of the realization of an (anticipated) event; it is therefore indispensable for adapting flexibly and in a timely manner to new situations and thus for interacting successfully with our physical and social environment.

  5. (U) Ristra Next Generation Code Report

    Energy Technology Data Exchange (ETDEWEB)

    Hungerford, Aimee L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Daniel, David John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-22

    LANL’s Weapons Physics management (ADX) and ASC program office have defined a strategy for exascale-class application codes that follows two supportive, and mutually risk-mitigating paths: evolution for established codes (with a strong pedigree within the user community) based upon existing programming paradigms (MPI+X); and Ristra (formerly known as NGC), a high-risk/high-reward push for a next-generation multi-physics, multi-scale simulation toolkit based on emerging advanced programming systems (with an initial focus on data-flow task-based models exemplified by Legion [5]). Development along these paths is supported by the ATDM, IC, and CSSE elements of the ASC program, with the resulting codes forming a common ecosystem, and with algorithm and code exchange between them anticipated. Furthermore, solution of some of the more challenging problems of the future will require a federation of codes working together, using established-pedigree codes in partnership with new capabilities as they come on line. The role of Ristra as the high-risk/high-reward path for LANL’s codes is fully consistent with its role in the Advanced Technology Development and Mitigation (ATDM) sub-program of ASC (see Appendix C), in particular its emphasis on evolving ASC capabilities through novel programming models and data management technologies.

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to produce results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
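
    The Jacobian-free Newton-Krylov idea is that the Krylov linear solves need only residual evaluations, since the action J·v can be approximated by finite differences. A minimal sketch using SciPy's newton_krylov on a generic nonlinear diffusion problem (not a stellar model, and not MUSIC's preconditioned solver):

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    def residual(u):
        """Discretized u'' + 0.5*exp(u) = 0 with fixed boundaries u(0)=u(1)=0."""
        n = len(u)
        r = np.empty(n)
        r[0], r[-1] = u[0], u[-1]
        r[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2] + 0.5 * np.exp(u[1:-1]) / n**2
        return r

    u = newton_krylov(residual, np.zeros(64), f_tol=1e-10)  # JFNK solve
    print("max residual:", np.abs(residual(u)).max())
    ```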

  7. Coding stimulus amplitude by correlated neural activity.

    Science.gov (United States)

    Metzen, Michael G; Ávila-Åkerberg, Oscar; Chacron, Maurice J

    2015-04-01

    While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated, but not single-neuron, activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. They have furthermore demonstrated that such coding requires, and is optimal for, a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models for various parameter values. Further, we demonstrate a form of stochastic resonance, as optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated, but not single-neuron, activity can code for stimulus amplitude, and of how key single-neuron properties such as firing rate and variability influence such coding. Coding by correlated, but not single-neuron, activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
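
    The qualitative effect is easy to reproduce with a toy simulation: two leaky integrate-and-fire neurons driven by a slow common signal, a shared noise source, and independent noise show higher spike-count correlations in high-amplitude windows. All parameters below are illustrative, not fitted to the paper's models.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    dt, T, tau, vth = 1e-3, 100.0, 0.01, 1.0
    n = int(T / dt)
    signal = 1.1 + 0.4 * np.sin(2 * np.pi * 0.5 * np.arange(n) * dt)  # envelope
    shared = rng.standard_normal(n)                 # common noise source
    v, spikes = np.zeros(2), np.zeros((2, n))
    for i in range(n):
        noise = np.sqrt(dt) * (shared[i] + rng.standard_normal(2))
        v += dt / tau * (-v + signal[i]) + noise
        fired = v >= vth
        spikes[:, i] = fired
        v[fired] = 0.0                              # reset after a spike
    win = 200                                       # 200 ms counting windows
    c = spikes.reshape(2, -1, win).sum(2)
    amp = signal.reshape(-1, win).mean(1)
    lo, hi = amp < np.median(amp), amp >= np.median(amp)
    print("count corr, low amplitude :", np.corrcoef(c[0, lo], c[1, lo])[0, 1])
    print("count corr, high amplitude:", np.corrcoef(c[0, hi], c[1, hi])[0, 1])
    ```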

  8. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated ... code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code ... avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation ...

  9. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded. ... We welcome Tanya Stivers’s discussion (Stivers, 2015/this issue) of coding social interaction and find that her descriptions of the processes of coding open up important avenues for discussion, among other things of the precise ad hoc considerations that researchers need to bear in mind, both when ... Instead we propose that the promise of coding-based research lies in its ability to open up new qualitative questions.

  10. Overview of Code Verification

    Science.gov (United States)

    1983-01-01

    The verified code for the SIFT Executive is not the code that executes on the SIFT system as delivered. The running versions of the SIFT Executive contain optimizations and special code relating to the messy interface to the hardware broadcast interface and to the packing of data to conserve space in the store of the BDX930 processors. The running code was in fact developed prior to, and without consideration of, any mechanical verification. This was regarded as necessary experimentation with the SIFT hardware and the special-purpose Pascal compiler. The Pascal code sections cover: the selection of a schedule from the global executive broadcast, scheduling, dispatching, three-way voting, and the error-reporting actions of the SIFT Executive. Not included in these sections of Pascal code are: the global executive, five-way voting, clock synchronization, interactive consistency, low-level broadcasting, and program loading, initialization, and schedule construction.

  11. Toward a unified theory of efficient, predictive, and sparse coding.

    Science.gov (United States)

    Chalk, Matthew; Marre, Olivier; Tkačik, Gašper

    2018-01-02

    A central goal in theoretical neuroscience is to predict the response properties of sensory neurons from first principles. To this end, "efficient coding" posits that sensory neurons encode maximal information about their inputs given internal constraints. There exist, however, many variants of efficient coding (e.g., redundancy reduction, different formulations of predictive coding, robust coding, sparse coding, etc.), differing in their regimes of applicability, in the relevance of signals to be encoded, and in the choice of constraints. It is unclear how these types of efficient coding relate or what is expected when different coding objectives are combined. Here we present a unified framework that encompasses previously proposed efficient coding models and extends to unique regimes. We show that optimizing neural responses to encode predictive information can lead them to either correlate or decorrelate their inputs, depending on the stimulus statistics; in contrast, at low noise, efficiently encoding the past always predicts decorrelation. We then investigate coding of naturalistic movies and show that qualitatively different types of visual motion tuning and levels of response sparsity are predicted, depending on whether the objective is to recover the past or predict the future. Our approach promises a way to explain the observed diversity of sensory neural responses as due to multiple functional goals and constraints fulfilled by different cell types and/or circuits.

  12. Bilinearity in spatiotemporal integration of synaptic inputs.

    Directory of Open Access Journals (Sweden)

    Songting Li

    2014-12-01

    Full Text Available Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product with a proportionality coefficient [Formula: see text]. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient [Formula: see text] is demonstrated to be nearly independent of the input strengths but dependent on the input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments on rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse.
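
    The rule itself is compact in code. Writing k for the proportionality coefficient (the paper's symbol is lost to extraction here, and all waveforms and values below are invented), a sketch of the pairwise rule and its multi-input generalization:

    ```python
    import numpy as np

    def summed_psp(v1, v2, k):
        """Bilinear rule: response to a pair ~ linear sum + k * product."""
        return v1 + v2 + k * v1 * v2

    t = np.linspace(0.0, 0.1, 1000)                  # 100 ms
    alpha = lambda t, tau: (t / tau) * np.exp(1.0 - t / tau)
    v_exc = 4.0 * alpha(t, 0.010)                    # EPSP (mV), invented
    v_inh = -2.0 * alpha(t, 0.020)                   # IPSP (mV), invented
    v_pair = summed_psp(v_exc, v_inh, k=-0.05)       # shunting-like reduction

    def summed_psp_many(psps, k):
        """Multiple inputs: linear terms plus all pairwise bilinear terms."""
        total = sum(psps)
        for i in range(len(psps)):
            for j in range(i + 1, len(psps)):
                total = total + k[i][j] * psps[i] * psps[j]
        return total
    ```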

  13. Phonological coding during reading

    Science.gov (United States)

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  14. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  15. Discovering Clusters of Plagiarism in Students’ Source Codes

    Directory of Open Access Journals (Sweden)

    L. Moussiades

    2016-03-01

    Full Text Available Plagiarism in students’ source codes constitutes an important drawback for the educational process. In addition, plagiarism detection in source codes is a time-consuming and tiresome task. Therefore, many approaches for plagiarism detection have been proposed. Most of these approaches receive as input a set of source files and calculate a similarity between each pair of the input set. However, the tutor often needs to detect the clusters of plagiarism, i.e. clusters of students’ assignments such that all assignments in a cluster derive from a common original. In this paper, we propose a novel plagiarism detection algorithm that receives as input a set of source codes and calculates the clusters of plagiarism. Experimental results show the efficiency of our approach and encourage further research.
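
    The cluster notion maps naturally onto connected components of a similarity graph. A sketch (SequenceMatcher and the 0.8 cutoff stand in for the paper's similarity measure and threshold):

    ```python
    from difflib import SequenceMatcher

    def plagiarism_clusters(sources, threshold=0.8):
        """Link every pair of sources scoring above the threshold and
        return connected components of size > 1 as candidate clusters."""
        names = list(sources)
        parent = {n: n for n in names}              # union-find forest
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]       # path halving
                x = parent[x]
            return x
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if SequenceMatcher(None, sources[a], sources[b]).ratio() >= threshold:
                    parent[find(a)] = find(b)       # merge clusters
        groups = {}
        for n in names:
            groups.setdefault(find(n), []).append(n)
        return [g for g in groups.values() if len(g) > 1]

    codes = {"s1": "for i in range(10): print(i)",
             "s2": "for j in range(10): print(j)",
             "s3": "while queue: process(queue.pop())"}
    print(plagiarism_clusters(codes))               # -> [['s1', 's2']]
    ```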

  16. Reactor Physics Analysis Models for a CANDU Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hang Bok

    2007-10-15

    Canada deuterium uranium (CANDU) reactor physics analysis is typically performed in three steps. First, macroscopic cross-sections of the reference lattice are produced by modeling the reference fuel channel. Second, macroscopic cross-sections of the reactivity devices in the reactor are generated. The macroscopic cross-sections of a reactivity device are calculated as incremental cross-sections by subtracting the macroscopic cross-sections of a three-dimensional lattice without the reactivity device from those of a three-dimensional lattice with the device. Third, using the macroscopic cross-sections of the reference lattice and the incremental cross-sections of the reactivity devices, reactor physics calculations are performed. This report summarizes the input data of typical CANDU reactor physics codes, which can be utilized for future CANDU reactor physics analyses.
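
    The incremental step is a simple subtraction of homogenized constants. A sketch with invented two-group numbers (not actual CANDU data):

    ```python
    import numpy as np

    sigma_ref = np.array([0.012, 0.085])           # reference-lattice XS
    sigma_with = np.array([0.019, 0.131])          # 3D supercell with device
    sigma_without = np.array([0.012, 0.086])       # matching supercell without

    delta_sigma = sigma_with - sigma_without       # incremental XS of device
    sigma_cell = sigma_ref + delta_sigma           # used where device overlaps
    print("incremental cross-sections:", delta_sigma)
    print("device-cell cross-sections:", sigma_cell)
    ```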

  17. Generating code adapted for interlinking legacy scalar code and extended vector code

    Science.gov (United States)

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  18. Decoding of Cyclic Codes,

    Science.gov (United States)

    Keywords: information theory; decoding; data transmission systems; statistical analysis; stochastic processes; coding; white noise; number theory; corrections; binary arithmetic; shift registers; control systems; USSR.

  19. ARC Code TI: ACCEPT

    Data.gov (United States)

    National Aeronautics and Space Administration — ACCEPT consists of an overall software infrastructure framework and two main software components. The software infrastructure framework consists of code written to...

  20. Diameter Perfect Lee Codes

    CERN Document Server

    Horak, Peter

    2011-01-01

    Lee codes have been intensively studied for more than 40 years. Interest in these codes has been triggered by the Golomb-Welch conjecture on the existence of perfect error-correcting Lee codes. In this paper we deal with the existence and enumeration of diameter perfect Lee codes. As main results we determine all q for which there exists a linear diameter-4 perfect Lee code of word length n over Z_{q}, and prove that for each n ≥ 3 there are uncountably many diameter-4 perfect Lee codes of word length n over Z. This is in strict contrast with perfect error-correcting Lee codes of word length n over Z, as there is a unique such code for n=3, and it is conjectured that this is always the case when 2n+1 is a prime. Diameter perfect Lee codes will be constructed by an algebraic construction that is based on a group homomorphism. This will allow us to design an efficient algorithm for their decoding.
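
    For reference, the Lee metric itself (the toy code below is a simple illustration of the metric, not one of the paper's diameter-perfect constructions):

    ```python
    from itertools import product

    def lee_distance(x, y, q):
        """Lee distance on Z_q^n: per coordinate, the shorter way around."""
        return sum(min((a - b) % q, (b - a) % q) for a, b in zip(x, y))

    def min_lee_distance(code, q):
        return min(lee_distance(x, y, q)
                   for i, x in enumerate(code) for y in code[i + 1:])

    code = [c for c in product(range(5), repeat=2) if (c[0] - c[1]) % 5 == 0]
    print(min_lee_distance(code, 5))   # repetition-style code over Z_5 -> 2
    ```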

  1. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes. QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  2. Physical Impairment

    Science.gov (United States)

    Trewin, Shari

    Many health conditions can lead to physical impairments that impact computer and Web access. Musculoskeletal conditions such as arthritis and cumulative trauma disorders can make movement stiff and painful. Movement disorders such as tremor, Parkinsonism and dystonia affect the ability to control movement, or to prevent unwanted movements. Often, the same underlying health condition also has sensory or cognitive effects. People with dexterity impairments may use a standard keyboard and mouse, or any of a wide range of alternative input mechanisms. Examples are given of the diverse ways that specific dexterity impairments and input mechanisms affect the fundamental actions of Web browsing. As the Web becomes increasingly sophisticated, and physically demanding, new access features at the Web browser and page level will be necessary.

  3. Stochasticity in Ca2+ increase in spines enables robust and sensitive information coding.

    Directory of Open Access Journals (Sweden)

    Takuya Koumura

    Full Text Available A dendritic spine is a very small structure (∼0.1 µm³) of a neuron that processes input timing information. Why are spines so small? Here, we provide functional reasons: the size of spines is optimal for information coding. Spines code input timing information by the probability of Ca2+ increases, which makes robust and sensitive information coding possible. We created a stochastic simulation model of input timing-dependent Ca2+ increases in a cerebellar Purkinje cell's spine. Spines used probability coding of Ca2+ increases rather than amplitude coding for input timing detection via stochastic facilitation, by utilizing the small number of molecules in a spine volume, where the information per volume appeared optimal. Probability coding of Ca2+ increases in a spine volume was more robust against input fluctuation and more sensitive to input numbers than amplitude coding of Ca2+ increases in a cell volume. Thus, stochasticity is a strategy by which neurons robustly and sensitively code information.
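
    The robustness claim has a simple statistical core: a Bernoulli readout (did Ca2+ rise or not?) is untouched by fluctuations in event size, while an amplitude readout inherits them. A toy sketch with invented numbers:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    trials, p_event = 2000, 0.4            # p set by the input timing difference
    event = rng.random(trials) < p_event
    amplitude = np.where(event, rng.normal(1.0, 1.0, trials), 0.0)  # noisy size

    # Spread of each readout over blocks of 100 trials:
    print("probability readout spread:", event.reshape(20, -1).mean(1).std())
    print("amplitude readout spread  :", amplitude.reshape(20, -1).mean(1).std())
    ```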

  4. Managing Input during Assistive Technology Product Design

    Science.gov (United States)

    Choi, Young Mi

    2011-01-01

    Many different sources of input are available to assistive technology innovators during the course of designing products. However, there is little information on which ones may be most effective or how they may be efficiently utilized within the design process. The aim of this project was to compare how three types of input--from simulation tools,…

  5. 39 CFR 3020.92 - Public input.

    Science.gov (United States)

    2010-07-01

    39 CFR § 3020.92 (Postal Regulatory Commission, Product Lists, Requests Initiated by the Postal Service to Change the Mail Classification Schedule) — Public input. The Commission shall publish Postal...

  6. Farmers' Perceived Agricultural Input Factors Influencing Adoption ...

    African Journals Online (AJOL)

    The study investigated agricultural input factors influencing adoption and production of food crops in Ondo State, Nigeria. Data from 120 randomly selected farmers were used for the study. Findings show that the major inputs used by the respondents are improved seeds (89.2%), fertilizer (66.7%) and agrochemicals ...

  7. Farmer and input marketer's involvement in research-extension ...

    African Journals Online (AJOL)

    This study determined the level of involvement of farmers and input marketers in the Research-Extension-Farmer-Input Linkage System (REFILS) continuum of activities in the Southeastern agro-ecological zone of Nigeria. Data were collected with the aid of structured questionnaire administered to 80 randomly selected ...

  8. EDP Applications to Musical Bibliography: Input Considerations

    Science.gov (United States)

    Robbins, Donald C.

    1972-01-01

    The application of Electronic Data Processing (EDP) has been a boon in the analysis and bibliographic control of music. However, an extra step of encoding must be undertaken for input of music. The best hope to facilitate musical input is the development of an Optical Character Recognition (OCR) music-reading machine. (29 references) (Author/NH)

  9. Income distributions in input-output models

    NARCIS (Netherlands)

    Steenge, Albert E.; Serrano, Monica

    2012-01-01

    The analysis of income distribution (ID) has traditionally been of prime importance for economists and policy-makers. However, the standard input-output (I-O) model is not particularly well equipped for studying current issues such as the consequences of decreasing access to primary inputs or the

  10. On {\\sigma}-LCD codes

    OpenAIRE

    Carlet, Claude; Mesnager, Sihem; Tang, Chunming; Qi, Yanfeng

    2017-01-01

    Linear complementary pairs (LCP) of codes play an important role in armoring implementations against side-channel attacks and fault injection attacks. One of the most common ways to construct LCP of codes is to use Euclidean linear complementary dual (LCD) codes. In this paper, we first introduce the concept of linear codes with σ complementary dual (σ-LCD), which includes known Euclidean LCD codes, Hermitian LCD codes, and Galois LCD codes. As Euclidean LCD codes, σ-LCD ...

  11. HYDRA-II: A hydrothermal analysis computer code: Volume 3, Verification/validation assessments

    Energy Technology Data Exchange (ETDEWEB)

    McCann, R.A.; Lowery, P.S.

    1987-10-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. Volume I - Equations and Numerics describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. This volume, Volume III - Verification/Validation Assessments, provides a comparison between the analytical solution and the numerical simulation for problems with a known solution. This volume also documents comparisons between the results of simulations of single- and multiassembly storage systems and actual experimental data. 11 refs., 55 figs., 13 tabs.

  12. A 90-W peak power GaN outphasing amplifier with optimum input signal conditioning

    NARCIS (Netherlands)

    Qureshi, J.H.; Pelk, M.J.; Marchetti, M.; Neo, W.C.E.; Gajadharsing, J.R.; Van der Heijden, M.P.; De Vreede, L.C.N.

    2009-01-01

    A 90-W peak-power 2.14-GHz improved GaN outphasing amplifier with 50.5% average efficiency for wideband code division multiple access (W-CDMA) signals is presented. Independent control of the branch amplifiers by two in-phase/quadrature modulators enables optimum outphasing and input power leveling,

  13. Atmospheric Nitrogen input to the Kattegat

    DEFF Research Database (Denmark)

    Asman, W.A.H.; Hertel, O.; Berkowicz, R.

    1995-01-01

    An overview is given of the processes involved in the atmospheric deposition of nitrogen compounds. These processes are incorporated in an atmospheric transport model that is used to calculate the nitrogen input to the Kattegat, the sea area between Denmark and Sweden. The model results show...... that the total atmospheric nitrogen input to the Kattegat is approximately 960 kg N km(-2) yr(-1). The nitrogen input to the Kattegat is dominated by the wet depositions of NHx (42%) and NOy (30%). The contribution from the dry deposition of NHx is 17% and that of the dry deposition of NOy is 11......%. The contribution of the atmospheric input of nitrogen to the Kattegat is about 30% of the total input including the net transport from other sea areas, runoff etc....

  14. Equalization and Decoding for Multiple-Input Multiple-Output Wireless Channels

    Directory of Open Access Journals (Sweden)

    Proakis John G

    2002-01-01

Full Text Available We consider multiple-input, multiple-output (MIMO) wireless communication systems that employ multiple transmit and receive antennas to increase the data rate and achieve diversity in fading multipath channels. We begin by focusing on an uncoded system and define optimal and suboptimal receiver structures for this system in Rayleigh fading with and without intersymbol interference. Next, we consider coded MIMO systems. We view the coded system as a serially concatenated convolutional code (SCCC) in which the code and the multipath channel take on the roles of constituent codes. This enables us to analyze the performance using the same performance analysis tools as developed previously for SCCCs. Finally, we present an iterative ("turbo") MAP-based equalization and decoding scheme and evaluate its performance when applied to a system with transmit antennas and receive antennas. We show that by performing recursive precoding prior to transmission, significant interleaving gains can be realized compared to systems without precoding.

  15. NASA Lewis Steady-State Heat Pipe Code Architecture

    Science.gov (United States)

    Mi, Ye; Tower, Leonard K.

    2013-01-01

NASA Glenn Research Center (GRC) has developed the LERCHP code. The PC-based LERCHP code can be used to predict the steady-state performance of heat pipes, including the determination of the operating temperature and the operating limits which might be encountered under specified conditions. The code contains a vapor flow algorithm which incorporates vapor compressibility and axially varying heat input. For the liquid flow in the wick, Darcy's formula is employed. Thermal boundary conditions and geometric structures can be defined through an interactive input interface. A variety of fluid and material options as well as user-defined options can be chosen for the working fluid, wick, and pipe materials. This report documents the current effort at GRC to update the LERCHP code for operating in a Microsoft Windows (Microsoft Corporation) environment. A detailed analysis of the model is presented. The programming architecture for the numerical calculations is explained and flowcharts of the key subroutines are given.
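
    The Darcy-law treatment of liquid flow in the wick mentioned above can be illustrated with a short calculation. The sketch below is not taken from LERCHP; the function and all property values are hypothetical stand-ins for a water heat pipe.

        # Illustrative Darcy's-law pressure drop for liquid return flow in a
        # heat-pipe wick: dp = mu * L_eff * m_dot / (rho * K * A_w).
        # All numerical values are assumptions, not LERCHP data.
        def darcy_pressure_drop(m_dot, mu, L_eff, K, A_w, rho):
            return mu * L_eff * m_dot / (rho * K * A_w)   # pressure drop in Pa

        dp = darcy_pressure_drop(
            m_dot=1.0e-4,   # liquid mass flow rate, kg/s
            mu=2.8e-4,      # liquid dynamic viscosity, Pa*s
            L_eff=0.5,      # effective transport length, m
            K=1.0e-10,      # wick permeability, m^2
            A_w=1.0e-4,     # wick cross-sectional area, m^2
            rho=960.0,      # liquid density, kg/m^3
        )
        print(f"wick pressure drop: {dp:.0f} Pa")   # about 1.5 kPa here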

  16. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Pries-Heje, Lene; Dahlgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  17. Error Correcting Codes

    Indian Academy of Sciences (India)

    be fixed to define codes over such domains). New decoding schemes that take advantage of such connections can be devised. These may soon show up in a technique called code division multiple access (CDMA) which is proposed as a basis for digital cellular communication. CDMA provides a facility for many users to ...

  18. Codes of Conduct

    Science.gov (United States)

    Million, June

    2004-01-01

Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling codes of conduct in students based on work quality, respect, safety, and courtesy. She suggests that communicating the code…

  19. Error Correcting Codes

    Indian Academy of Sciences (India)

Error Correcting Codes – Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March 1997, pp. 33–47. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047 ...

  20. Code Generation = A* + BURS

    NARCIS (Netherlands)

    Nymeyer, Albert; Katoen, Joost P.; Westra, Ymte; Alblas, H.; Gyimóthy, Tibor

    1996-01-01

    A system called BURS that is based on term rewrite systems and a search algorithm A* are combined to produce a code generator that generates optimal code. The theory underlying BURS is re-developed, formalised and explained in this work. The search algorithm uses a cost heuristic that is derived
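
    Since the record pairs BURS instruction selection with an A* search, a minimal A* sketch may help fix ideas. It is generic graph search only; the BURS-specific cost heuristic and term rewriting are not reproduced here, and the toy graph is made up.

        import heapq

        # Generic A* search: neighbors(n) yields (next_node, edge_cost);
        # h is an admissible heuristic (h = 0 reduces this to Dijkstra).
        def a_star(start, goal, neighbors, h):
            frontier = [(h(start), 0, start, [start])]
            best_g = {start: 0}
            while frontier:
                f, g, node, path = heapq.heappop(frontier)
                if node == goal:
                    return g, path
                for nxt, cost in neighbors(node):
                    g2 = g + cost
                    if g2 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g2
                        heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
            return None

        graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
        print(a_star("a", "c", lambda n: graph[n], h=lambda n: 0))  # (2, ['a', 'b', 'c'])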

  1. Dress Codes for Teachers?

    Science.gov (United States)

    Million, June

    2004-01-01

    In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…

  2. Informal control code logic

    NARCIS (Netherlands)

    Bergstra, J.A.

    2010-01-01

    General definitions as well as rules of reasoning regarding control code production, distribution, deployment, and usage are described. The role of testing, trust, confidence and risk analysis is considered. A rationale for control code testing is sought and found for the case of safety critical

  3. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.

  4. Nuremberg code turns 60

    OpenAIRE

    Thieren, Michel; Mauron, Alexandre

    2007-01-01

    This month marks sixty years since the Nuremberg code – the basic text of modern medical ethics – was issued. The principles in this code were articulated in the context of the Nuremberg trials in 1947. We would like to use this anniversary to examine its ability to address the ethical challenges of our time.

  5. Error Correcting Codes

    Indian Academy of Sciences (India)

Error Correcting Codes – The Hamming Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 1, January ... Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  6. Optix: A Monte Carlo scintillation light transport code

    Energy Technology Data Exchange (ETDEWEB)

    Safari, M.J., E-mail: mjsafari@aut.ac.ir [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Afarideh, H. [Department of Energy Engineering and Physics, Amir Kabir University of Technology, PO Box 15875-4413, Tehran (Iran, Islamic Republic of); Ghal-Eh, N. [School of Physics, Damghan University, PO Box 36716-41167, Damghan (Iran, Islamic Republic of); Davani, F. Abbasi [Nuclear Engineering Department, Shahid Beheshti University, PO Box 1983963113, Tehran (Iran, Islamic Republic of)

    2014-02-11

The paper reports on the capabilities of the Monte Carlo scintillation light transport code Optix, which is an extended version of the previously introduced code Optics. Optix provides the user a variety of both numerical and graphical outputs with a very simple and user-friendly input structure. A benchmarking strategy has been adopted based on the comparison with experimental results, semi-analytical solutions, and other Monte Carlo simulation codes to verify various aspects of the developed code. Besides, some extensive comparisons have been made against the tracking abilities of the general-purpose MCNPX and FLUKA codes. The presented benchmark results for the Optix code exhibit promising agreements. -- Highlights: • Monte Carlo simulation of scintillation light transport in 3D geometry. • Evaluation of angular distribution of detected photons. • Benchmark studies to check the accuracy of Monte Carlo simulations.
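
    The core Monte Carlo idea behind such light-transport codes can be sketched in a few lines: sample exponential free paths and decide at each collision between absorption and scattering. The sketch below is a deliberately minimal stand-in; Optix's actual treatment of 3D geometry, surfaces, and detectors is far richer, and the coefficients here are invented.

        import math, random

        # Minimal photon random walk in an infinite medium: exponential free
        # paths, absorption probability mu_a/mu_t at each collision, isotropic
        # scattering otherwise. Coefficients are illustrative assumptions.
        def track_photon(mu_a=0.1, mu_s=0.9, rng=random.random):
            mu_t = mu_a + mu_s
            path = 0.0
            while True:
                path += -math.log(rng()) / mu_t   # sampled free path length
                if rng() < mu_a / mu_t:           # collision is an absorption
                    return path
                # otherwise: isotropic scatter; direction does not affect this tally

        paths = [track_photon() for _ in range(10000)]
        print("mean path before absorption:", sum(paths) / len(paths))  # ~ 1/mu_a = 10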

  7. Directional hearing by linear summation of binaural inputs at the medial superior olive.

    Science.gov (United States)

    van der Heijden, Marcel; Lorteije, Jeannette A M; Plauška, Andrius; Roberts, Michael T; Golding, Nace L; Borst, J Gerard G

    2013-06-05

    Neurons in the medial superior olive (MSO) enable sound localization by their remarkable sensitivity to submillisecond interaural time differences (ITDs). Each MSO neuron has its own "best ITD" to which it responds optimally. A difference in physical path length of the excitatory inputs from both ears cannot fully account for the ITD tuning of MSO neurons. As a result, it is still debated how these inputs interact and whether the segregation of inputs to opposite dendrites, well-timed synaptic inhibition, or asymmetries in synaptic potentials or cellular morphology further optimize coincidence detection or ITD tuning. Using in vivo whole-cell and juxtacellular recordings, we show here that ITD tuning of MSO neurons is determined by the timing of their excitatory inputs. The inputs from both ears sum linearly, whereas spike probability depends nonlinearly on the size of synaptic inputs. This simple coincidence detection scheme thus makes accurate sound localization possible. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Measuring Input Thresholds on an Existing Board

    Science.gov (United States)

    Kuperman, Igor; Gutrich, Daniel G.; Berkun, Andrew C.

    2011-01-01

A critical PECL (positive emitter-coupled logic) interface to Xilinx interface needed to be changed on an existing flight board. The new Xilinx input interface used a CMOS (complementary metal-oxide semiconductor) type of input, and the driver could meet its thresholds typically, but not in worst-case, according to the data sheet. The previous interface had been based on comparison with an external reference, but the CMOS input is based on comparison with an internal divider from the power supply. A way to measure what the exact input threshold was for this device for 64 inputs on a flight board was needed. The measurement technique allowed an accurate measurement of the voltage required to switch a Xilinx input from high to low for each of the 64 lines, while only probing two of them. Directly driving an external voltage was considered too risky, and tests done on any other unit could not be used to qualify the flight board. The two lines directly probed gave an absolute voltage threshold calibration, while data collected on the remaining 62 lines without probing gave relative measurements that could be used to identify any outliers. The PECL interface was forced to a long-period square wave by driving a saturated square wave into the ADC (analog to digital converter). The active pull-down circuit was turned off, causing each line to rise rapidly and fall slowly according to the input's weak pull-down circuitry. The fall time shows up as a change in the pulse width of the signal read by the Xilinx. This change in pulse width is a function of capacitance, pulldown current, and input threshold. Capacitance was known from the different trace lengths, plus a gate input capacitance, which is the same for all inputs. The pull-down current is the same for all inputs including the two that are probed directly. The data was combined, and the Excel solver tool was used to find input thresholds for the 62 lines. This was repeated over different supply voltages and
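
    The recovery arithmetic described above can be sketched as follows: with a weak, shared constant-current pull-down, the extra pulse width on line i is roughly t_i = C_i (V_high - Vth_i) / I, so the two directly probed lines calibrate the current and the rest invert to thresholds. All numbers below are invented for illustration; the flight measurement used the Excel solver rather than this closed form.

        import numpy as np

        V_high = 2.5                                      # driven high level, V
        C = np.array([5.0, 6.0, 5.5, 7.0]) * 1e-12        # per-line capacitance, F
        t = np.array([62.5, 78.0, 70.0, 92.0]) * 1e-9     # measured fall times, s

        # Lines 0 and 1 were probed directly, giving absolute thresholds:
        Vth_probe = np.array([1.25, 1.20])
        I_est = np.mean(C[:2] * (V_high - Vth_probe) / t[:2])   # shared pull-down current

        Vth = V_high - I_est * t / C                      # thresholds of all lines
        print(np.round(Vth, 3))                           # outliers would stand out here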

  9. Smartphone Text Input Method Performance, Usability, and Preference With Younger and Older Adults.

    Science.gov (United States)

    Smith, Amanda L; Chaparro, Barbara S

    2015-09-01

    User performance, perceived usability, and preference for five smartphone text input methods were compared with younger and older novice adults. Smartphones are used for a variety of functions other than phone calls, including text messaging, e-mail, and web browsing. Research comparing performance with methods of text input on smartphones reveals a high degree of variability in reported measures, procedures, and results. This study reports on a direct comparison of five of the most common input methods among a population of younger and older adults, who had no experience with any of the methods. Fifty adults (25 younger, 18-35 years; 25 older, 60-84 years) completed a text entry task using five text input methods (physical Qwerty, onscreen Qwerty, tracing, handwriting, and voice). Entry and error rates, perceived usability, and preference were recorded. Both age groups input text equally fast using voice input, but older adults were slower than younger adults using all other methods. Both age groups had low error rates when using physical Qwerty and voice, but older adults committed more errors with the other three methods. Both younger and older adults preferred voice and physical Qwerty input to the remaining methods. Handwriting consistently performed the worst and was rated lowest by both groups. Voice and physical Qwerty input methods proved to be the most effective for both younger and older adults, and handwriting input was the least effective overall. These findings have implications to the design of future smartphone text input methods and devices, particularly for older adults. © 2015, Human Factors and Ergonomics Society.

  10. Weight Distributions for Turbo Codes Using Random and Nonrandom Permutations

    Science.gov (United States)

    Dolinar, S.; Divsalar, D.

    1995-04-01

This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of the constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as $\sqrt{2N}$, where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are "semirandom" permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
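
    The "semirandom" permutations mentioned above are commonly built by a spread (S-random) rejection procedure: input indices closer than S must map at least S apart. The sketch below is one standard construction, not necessarily the exact one used in the article.

        import random

        # S-random permutation by rejection: candidate outputs are accepted only
        # if they differ by at least S from the outputs of the previous S inputs.
        def s_random_permutation(N, S, max_tries=100):
            for _ in range(max_tries):
                remaining = list(range(N))
                random.shuffle(remaining)
                perm = []
                for i in range(N):
                    for k, cand in enumerate(remaining):
                        if all(abs(cand - p) >= S for p in perm[max(0, i - S):]):
                            perm.append(remaining.pop(k))
                            break
                    else:
                        break                # dead end: restart with a new shuffle
                if len(perm) == N:
                    return perm
            raise RuntimeError("no permutation found; try S below sqrt(N/2)")

        print(s_random_permutation(64, 5)[:10])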

  11. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particular nice examples of affine variety codes. Our study...

  12. Quantum Synchronizable Codes From Quadratic Residue Codes and Their Supercodes

    OpenAIRE

    Xie, Yixuan; Yuan, Jinhong; Fujiwara, Yuichiro

    2014-01-01

    Quantum synchronizable codes are quantum error-correcting codes designed to correct the effects of both quantum noise and block synchronization errors. While it is known that quantum synchronizable codes can be constructed from cyclic codes that satisfy special properties, only a few classes of cyclic codes have been proved to give promising quantum synchronizable codes. In this paper, using quadratic residue codes and their supercodes, we give a simple construction for quantum synchronizable...

  13. Differences in peripheral sensory input to the olfactory bulb between male and female mice

    Science.gov (United States)

    Kass, Marley D.; Czarnecki, Lindsey A.; Moberly, Andrew H.; McGann, John P.

    2017-04-01

Female mammals generally have a better sense of smell than males, but the biological basis of this difference is unknown. Here, we demonstrate sexually dimorphic neural coding of odorants by olfactory sensory neurons (OSNs), primary sensory neurons that physically contact odor molecules in the nose and provide the initial sensory input to the brain's olfactory bulb. We performed in vivo optical neurophysiology to visualize odorant-evoked OSN synaptic output into olfactory bulb glomeruli in unmanipulated (gonad-intact) adult mice from both sexes, and found that in females odorant presentation evoked more rapid OSN signaling over a broader range of OSNs than in males. These spatiotemporal differences enhanced the contrast between the neural representations of chemically related odorants in females compared to males during stimulus presentation. Removing circulating sex hormones makes these signals slower and less discriminable in females, while in males they become faster and more discriminable, suggesting opposite roles for gonadal hormones in influencing male and female olfactory function. These results demonstrate that the famous sex difference in olfactory abilities likely originates in the primary sensory neurons, and suggest that hormonal modulation of the peripheral olfactory system could underlie differences in how males and females experience the olfactory world.

  14. Integrating agronomic principles into production function estimation: A dichotomy of growth inputs and facilitating inputs

    NARCIS (Netherlands)

    Zhengfei, G.; Oude Lansink, A.G.J.M.; Ittersum, van M.K.; Wossink, G.A.A.

    2006-01-01

    This article presents a general conceptual framework for integrating agronomic principles into economic production analysis. We categorize inputs in crop production into growth inputs and facilitating inputs. Based on this dichotomy we specify an asymmetric production function. The robustness of the

  15. Special issue on network coding

    Science.gov (United States)

    Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly

    2017-12-01

Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes. These have created a lot of interest both in academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.
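
    The stream-combining idea at the heart of NC can be shown with the classic two-source butterfly network, where the bottleneck relay forwards an XOR of both packets. This toy example is illustrative only and uses plain XOR (GF(2)) coding.

        # Butterfly network in miniature: each sink hears one packet directly
        # plus the relay's XOR combination, and recovers the other packet.
        a = b"hello"                                   # packet from source A
        b_ = b"world"                                  # packet from source B

        coded = bytes(x ^ y for x, y in zip(a, b_))    # relay output: A XOR B

        recovered_b = bytes(x ^ y for x, y in zip(a, coded))    # sink that heard A
        recovered_a = bytes(x ^ y for x, y in zip(b_, coded))   # sink that heard B
        assert recovered_a == a and recovered_b == b_
        print(recovered_a, recovered_b)                # b'hello' b'world'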

  16. Computer Code for Nanostructure Simulation

    Science.gov (United States)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relaxation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  17. Modelling of impurity transport and plasma-wall interaction in fusion devices with the ERO code: basics of the code and examples of application

    Energy Technology Data Exchange (ETDEWEB)

Kirschner, A.; Borodin, D.; Brezinsek, S.; Linsmeier, C.; Romazanov, J. [Forschungszentrum Juelich GmbH, Institut fuer Energie- und Klimaforschung - Plasmaphysik, Juelich (Germany); Tskhakaya, D. [Fusion@OeAW, Institute of Applied Physics, TU Wien (Austria); Institute of Theoretical Physics, University of Innsbruck (Austria); Kawamura, G. [National Institute for Fusion Science, Gifu (Japan); Ding, R. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China)

    2016-08-15

The 3D ERO code, which simulates plasma-wall interaction and impurity transport in magnetically confined fusion-relevant devices, is described. As an application, prompt deposition of eroded tungsten has been simulated at surfaces with a shallow magnetic field of 3 T. Dedicated PIC simulations have been performed to calculate the characteristics of the sheath in front of plasma-exposed surfaces for use as input to these ERO simulations. Prompt deposition of tungsten reaches 100% at the highest electron temperature and density. In comparison to more simplified assumptions for the sheath, the amount of prompt deposition is in general smaller if the PIC-calculated sheath is used. Due to friction with the background plasma, the impact energy of deposited tungsten can be significantly larger than the energy gained in the sheath potential. (copyright 2016 The Authors. Contributions to Plasma Physics published by Wiley-VCH Verlag GmbH and Co. KGaA, Weinheim)

  18. The input-output relationship approach to structural identifiability analysis.

    Science.gov (United States)

    Bearup, Daniel J; Evans, Neil D; Chappell, Michael J

    2013-02-01

Analysis of the identifiability of a given model system is an essential prerequisite to the determination of model parameters from physical data. However, the tools available for the analysis of non-linear systems can be limited both in applicability and by computational intractability for any but the simplest of models. The input-output relation of a model summarises the input-output structure of the whole system and as such provides the potential for an alternative approach to this analysis. However, for this approach to be valid it is necessary to determine whether the monomials of a differential polynomial are linearly independent. A simple test for this property is presented in this work. The derivation and analysis of this relation can be implemented symbolically within Maple. These techniques are applied to analyse classical models from biomedical systems modelling and those of enzyme-catalysed reaction schemes. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
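
    The linear-independence question can also be checked numerically, as a complement to the symbolic Maple test the paper presents: evaluate the monomials at random points and test whether the evaluation matrix has full column rank (full rank at generic points implies independence with probability one). The monomials below are made-up examples, not from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Monomials m_1..m_k are linearly independent iff their evaluation
        # matrix at generic points has full column rank.
        def independent(monomials, n_vars, n_points=None):
            k = len(monomials)
            pts = rng.uniform(1.0, 2.0, size=(n_points or 2 * k, n_vars))
            M = np.array([[m(*p) for m in monomials] for p in pts])
            return np.linalg.matrix_rank(M) == k

        mons = [lambda x, y: x * y, lambda x, y: x**2, lambda x, y: y**2]
        print(independent(mons, 2))                        # True
        mons_dep = mons + [lambda x, y: 2 * x * y - y**2]  # combination of the others
        print(independent(mons_dep, 2))                    # False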

  19. Pyramid image codes

    Science.gov (United States)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.

  20. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  1. Cryptography cracking codes

    CERN Document Server

    2014-01-01

While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption, essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real-world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  2. Input data to run Landis-II

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — The data are input data files to run the forest simulation model Landis-II for Isle Royale National Park. Files include: a) Initial_Comm, which includes the location...

  3. Input-output rearrangement of isolated converters

    DEFF Research Database (Denmark)

    Madsen, Mickey Pierre; Kovacevic, Milovan; Mønster, Jakob Døllner

    2015-01-01

This paper presents a new way of rearranging the input and output of isolated converters. The new arrangement possesses several advantages, such as increased voltage range, higher power handling capabilities, reduced voltage stress, and improved efficiency, for applications where galvanic isolation

  4. CBM First-level Event Selector Input Interface Demonstrator

    Science.gov (United States)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. Featuring self-triggered front-end electronics and free-streaming read-out, event selection will exclusively be done by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes, its task is the online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links to self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows performing this task very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host's memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition, the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.
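
    The timeslice-building step described above is easy to sketch: every link delivers a time-ordered sequence of fixed-duration microslice containers, and a timeslice takes the same index interval from each link plus a small overlap. The container counts below are invented, not CBM parameters.

        # Timeslice building sketch: identical index ranges are cut from every
        # input link's microslice stream; the overlap region lets analysis
        # handle data crossing the core boundary without leaving the timeslice.
        CORE = 100        # microslices per timeslice core (illustrative)
        OVERLAP = 1       # extra microslices appended at the end

        def build_timeslice(streams, ts_index):
            start = ts_index * CORE
            stop = start + CORE + OVERLAP
            return {link: ms[start:stop] for link, ms in streams.items()}

        streams = {f"link{i}": list(range(300)) for i in range(2)}
        ts = build_timeslice(streams, 1)
        print(ts["link0"][0], ts["link0"][-1])   # 100 200: core 100-199 plus overlap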

  5. Utilization of recently developed codes for high power Brayton and Rankine cycle power systems

    Science.gov (United States)

    Doherty, Michael P.

    1993-01-01

    Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.

  6. Module description of TOKAMAK equilibrium code MEUDAS

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Masaei; Hayashi, Nobuhiko; Matsumoto, Taro; Ozeki, Takahisa [Japan Atomic Energy Research Inst., Naka, Ibaraki (Japan). Naka Fusion Research Establishment

    2002-01-01

The analysis of an axisymmetric MHD equilibrium serves as a foundation of TOKAMAK research, such as the design of devices, theoretical studies, and the analysis of experimental results. For this reason, an efficient MHD analysis code has been developed at JAERI since the start of its TOKAMAK research. The free boundary equilibrium code ''MEUDAS'', which uses both the DCR method (Double-Cyclic-Reduction Method) and a Green's function, can specify the pressure and the current distribution arbitrarily, and has been applied to the analysis of a broad range of physical subjects as a fast, high-precision code. The MHD convergence calculation technique in ''MEUDAS'' has also been built into various newly developed codes. This report explains in detail each module in ''MEUDAS'' used in performing the convergence calculation for solving the MHD equilibrium. (author)

  7. Advanced Imaging Optics Utilizing Wavefront Coding.

    Energy Technology Data Exchange (ETDEWEB)

    Scrymgeour, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boye, Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Adelsberger, Kathleen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  8. Sandia National Laboratories analysis code data base

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, C.W.

    1994-11-01

Sandia National Laboratories' mission is to solve important problems in the areas of national defense, energy security, environmental integrity, and industrial technology. The Laboratories' strategy for accomplishing this mission is to conduct research to provide an understanding of the important physical phenomena underlying any problem, and then to construct validated computational models of the phenomena which can be used as tools to solve the problem. In the course of implementing this strategy, Sandia's technical staff has produced a wide variety of numerical problem-solving tools which they use regularly in the design, analysis, performance prediction, and optimization of Sandia components, systems and manufacturing processes. This report provides the relevant technical and accessibility data on the numerical codes used at Sandia, including information on the technical competency or capability area that each code addresses, code "ownership" and release status, and references describing the physical models and numerical implementation.

  9. Significance of Input Correlations in Striatal Function

    Science.gov (United States)

    Yim, Man Yi; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480

  10. George Gamow and the Genetic Code

    Indian Academy of Sciences (India)

and most famous, paper, "A Structure for Deoxyribose Nucleic Acid". In it they ... quantum mechanics and nuclear physics, especially for a brilliant explanation of ... a Genetic Code, that would relate the hereditary information carried in DNA to the stuff that built bodies, proteins. Schrodinger used the explicit example of the.

  11. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made to the nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), and diffusion calculation modules (TUD, CITATION); and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, the contents of revisions, input data requirements, detailed information on usage, sample input data, and a list of available libraries. (author).

  12. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  13. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  14. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  15. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  16. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable to store the collision slots and use them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to a structure of an erasure-correcting code defined on graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
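
    A miniature simulation conveys the peeling flavor of such protocols: each user repeats its packet in a few random slots, and the receiver alternates between decoding singleton slots and cancelling the decoded users' replicas. This sketch is generic repetition-based coded slotted ALOHA, not any specific scheme from the record, and the parameters are arbitrary.

        import random

        # Frame of n_slots; each of n_users transmits `degree` replicas.
        # Iteratively decode any slot with exactly one unresolved user, then
        # cancel that user's replicas elsewhere (successive interference
        # cancellation), which may expose new singleton slots.
        def simulate_frame(n_users=60, n_slots=100, degree=2, seed=1):
            rng = random.Random(seed)
            slots = [set() for _ in range(n_slots)]
            for u in range(n_users):
                for s in rng.sample(range(n_slots), degree):
                    slots[s].add(u)
            resolved, progress = set(), True
            while progress:
                progress = False
                for s in slots:
                    live = s - resolved
                    if len(live) == 1:
                        resolved |= live
                        progress = True
            return len(resolved) / n_users

        print(f"fraction of users resolved: {simulate_frame():.2f}")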

  17. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
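
    The levelization property is straightforward to check mechanically: assign each package a level one above the packages it uses; if at some point no package can be assigned, the "uses" relation has a cycle. The package names below are hypothetical, not actual EAP packages.

        # Levelization check: levels exist for all packages iff the "uses"
        # graph is acyclic; each package lands one level above its dependencies.
        uses = {
            "io": [],
            "mesh": ["io"],
            "physics": ["mesh", "io"],
            "driver": ["physics", "mesh"],
        }

        levels, remaining = {}, set(uses)
        while remaining:
            ready = {p for p in remaining if all(d in levels for d in uses[p])}
            if not ready:
                raise ValueError(f"cycle among packages: {sorted(remaining)}")
            for p in ready:
                levels[p] = 1 + max((levels[d] for d in uses[p]), default=0)
            remaining -= ready

        print(levels)   # {'io': 1, 'mesh': 2, 'physics': 3, 'driver': 4}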

  18. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  19. Hybrid codes: Past, present and future

    Science.gov (United States)

    Winske, D.; Yin, L.

Hybrid codes, in which the ions are treated kinetically and the electrons are assumed to be a massless fluid, have been widely used in space physics over the past two decades. These codes are used to model phenomena that occur on ion inertia and gyroradius scales, which fall between the longer scales obtained by magnetohydrodynamic simulations and the shorter scales attainable by full particle simulations. In this tutorial, the assumptions and equations of the hybrid model are discussed along with some of the most commonly used numerical implementations. Examples of results of two-dimensional hybrid simulations are used to illustrate the method, to indicate some of the tradeoffs that need to be addressed in a realistic calculation, and to demonstrate the utility of the technique for problems of contemporary interest. Some speculation about the future direction of space physics research using hybrid codes is also provided. Generally, the term "hybrid code" in plasma physics can refer to any simulation model in which one or more of the plasma species are treated as a single or multiple fluids, while the remaining species are treated kinetically as particles. The plasma can be coupled to the electromagnetic fields in a variety of ways: full Maxwell equations, low-frequency Darwin model, electrostatic only, etc. In this tutorial, we shall concentrate only on the most common type of hybrid code used in space plasmas: where all the ions are treated kinetically, the electrons are assumed to be an inertia-less and quasineutral fluid, and the electromagnetic fields are treated in the low-frequency approximation. Because this tutorial is being presented in the context of the International School for Space Simulation (ISSS), we will mostly restrict the discussion of "past" uses of hybrid methods in space physics to the articles published in the previous schools. Those articles give appropriate and timely

  20. Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT

    Science.gov (United States)

    Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.

    2015-01-01

This report provides a code-to-code comparison between PATO, a recently developed high-fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and a flight experiment. When using the exact same physical models, material properties, and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.

  1. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity....

  2. Code de conduite

    International Development Research Centre (IDRC) Digital Library (Canada)

    irocca

his or her point of view, in a spirit of openness and respect. OUR CODE OF CONDUCT. IDRC is committed to conduct that meets the strictest ethical standards in all of its activities. The Code of Conduct reflects our mission, our employment philosophy, and the results of the discussions ...

  3. Open Coding Descriptions

    Directory of Open Access Journals (Sweden)

    Barney G. Glaser, PhD, Hon PhD

    2016-12-01

    Full Text Available Open coding is a big source of descriptions that must be managed and controlled when doing GT research. The goal of generating a GT is to generate an emergent set of concepts and their properties that fit and work with relevancy to be integrated into a theory. To achieve this goal, the researcher begins his research with open coding, that is coding all his data in every possible way. The consequence of this open coding is a multitude of descriptions for possible concepts that often do not fit in the emerging theory. Thus in this case the researcher ends up with many irrelevant descriptions for concepts that do not apply. To dwell on descriptions for inapplicable concepts ruins the GT theory as it starts. It is hard to stop. Confusion easily sets in. Switching the study to a QDA is a simple rescue. Rigorous focusing on emerging concepts is vital before being lost in open coding descriptions. It is important, no matter how interesting the description may become. Once a core is possible, selective coding can start which will help control against being lost in multiple descriptions.

  4. Effects of Heat Input on Microstructure, Corrosion and Mechanical Characteristics of Welded Austenitic and Duplex Stainless Steels: A Review

    Directory of Open Access Journals (Sweden)

    Ghusoon Ridha Mohammed

    2017-01-01

Full Text Available The effects of the heat input of different welding processes on the microstructure, corrosion, and mechanical characteristics of welded duplex stainless steel (DSS) are reviewed. Austenitic stainless steel (ASS) is welded using low heat inputs. However, owing to differences in the physical metallurgy between ASS and DSS, low heat inputs should be avoided for DSS. This review highlights the differences in solidification mode and transformation characteristics between ASS and DSS with regard to the heat input in welding processes. Specifically, many studies of the effects of welding heat input on the pitting corrosion, intergranular corrosion, stress corrosion cracking, and mechanical properties of DSS weldments are reviewed.
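
    For reference, welding heat input is conventionally estimated as Q = eta * U * I / v (arc efficiency times arc voltage times current over travel speed); the worked numbers below are illustrative, not values from the reviewed studies.

        # Conventional arc-welding heat input in kJ/mm:
        # Q = eta * U[V] * I[A] * 60 / (v[mm/min] * 1000).
        def heat_input_kj_per_mm(volts, amps, speed_mm_per_min, efficiency=0.8):
            return efficiency * volts * amps * 60.0 / (speed_mm_per_min * 1000.0)

        q = heat_input_kj_per_mm(volts=24.0, amps=150.0, speed_mm_per_min=200.0)
        print(f"heat input: {q:.2f} kJ/mm")   # 0.86 kJ/mm with these assumed settings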

  5. Ripple Design of LT Codes for BIAWGN Channels

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2014-01-01

This paper presents a novel framework which enables the design of rateless codes for binary input additive white Gaussian noise (BIAWGN) channels, using the ripple-based approach known from the works for the binary erasure channel (BEC). We reveal that several aspects of the analytical results from...... the BEC also hold in BIAWGN channels. The presented framework is applied in a code design example, which shows promising results compared to existing work. In particular, it shows a great robustness towards variations in the signal-to-noise power ratio (SNR), contrary to existing codes....
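
    The rateless (LT) encoding that the ripple analysis applies to can be sketched as follows; the plain ideal soliton degree distribution stands in for the paper's optimized BIAWGN design, and the symbol values are random bytes.

        import random

        # Ideal soliton distribution: P(d=1) = 1/k, P(d) = 1/(d(d-1)) for d >= 2.
        def ideal_soliton(k):
            return [0.0, 1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

        # One LT output symbol: XOR of a randomly chosen set of input symbols,
        # with the set size drawn from the degree distribution.
        def lt_encode(data, rng=random):
            k = len(data)
            d = rng.choices(range(k + 1), weights=ideal_soliton(k))[0]
            idx = rng.sample(range(k), d)
            sym = 0
            for i in idx:
                sym ^= data[i]
            return idx, sym   # in practice the neighbor set comes from a shared seed

        data = [random.randrange(256) for _ in range(10)]
        print(lt_encode(data))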

  6. Conversion of the bounce-averaged Fokker-Planck code to conservation form

    Science.gov (United States)

    Rognlien, T. D.

    1985-07-01

This report describes a major modification to the bounce-averaged Fokker-Planck code of Cutler et al. The new version of the code is written in conservation form, which results in the line density being conserved exactly except at phase space boundaries. The notation and procedure for writing the code in conservation form closely follow the work of Mirin on the square well code Hybrid II, and of Kerbel and McCoy on their bounce-averaged code CQL. Much of the original code has been preserved, and the input is the same as before; old input files should run on the new version. The major modifications to the code have occurred in subroutines COFF, IADVANCE, and RFTERMS. A new subroutine, FLUX, has been added to compute the flux across loss boundaries.
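
    The practical payoff of conservation form is easy to demonstrate: when the update is written as a difference of interface fluxes, the discrete density sum telescopes and is conserved to round-off unless flux crosses the boundary. The one-dimensional diffusion operator below is only a stand-in for the bounce-averaged Fokker-Planck operator.

        import numpy as np

        # Flux-form update: f_i^{n+1} = f_i^n - dt*(F_{i+1/2} - F_{i-1/2})/dx.
        # With zero boundary fluxes, sum(f)*dx is exactly conserved.
        n, dx, dt, D = 50, 1.0, 0.1, 1.0        # dt*D/dx^2 = 0.1, stable
        f = np.zeros(n)
        f[n // 2] = 1.0 / dx                    # initial spike, total density 1

        for _ in range(1000):
            flux = -D * np.diff(f) / dx                      # interior face fluxes
            flux = np.concatenate(([0.0], flux, [0.0]))      # no flux through boundaries
            f -= dt * np.diff(flux) / dx                     # conservative update

        print(np.sum(f) * dx)   # 1.0 up to round-off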

  7. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  8. The multidimensional self-adaptive grid code, SAGE

    Science.gov (United States)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.
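
    The adaptation principle underlying such codes is equidistribution: move grid points so each cell carries an equal share of a solution-dependent weight. The one-dimensional sketch below illustrates the idea only; SAGE's actual multidimensional scheme with its additional controls is far more elaborate, and the weight function here is an assumption.

        import numpy as np

        # Equidistribute w = 1 + alpha*|du/dx|: invert the cumulative weight so
        # every new cell holds the same integral of w, clustering nodes where
        # the solution varies fastest.
        def adapt_grid(x, u, alpha=20.0):
            w = 1.0 + alpha * np.abs(np.gradient(u, x))
            W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
            targets = np.linspace(0.0, W[-1], len(x))
            return np.interp(targets, W, x)

        x = np.linspace(0.0, 1.0, 41)
        u = np.tanh(20.0 * (x - 0.5))                 # sharp layer at x = 0.5
        print(np.round(adapt_grid(x, u)[18:23], 3))   # nodes bunch near 0.5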

  9. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Directory of Open Access Journals (Sweden)

    Chapoutier Nicolas

    2017-01-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analyses (e.g., structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.

  10. AREVA Developments for an Efficient and Reliable use of Monte Carlo codes for Radiation Transport Applications

    Science.gov (United States)

    Chapoutier, Nicolas; Mollier, François; Nolin, Guillaume; Culioli, Matthieu; Mace, Jean-Reynald

    2017-09-01

    In the context of the rise of Monte Carlo transport calculations for all kinds of applications, AREVA recently improved its suite of engineering tools in order to produce an efficient Monte Carlo workflow. Monte Carlo codes, such as MCNP or TRIPOLI, are recognized as reference codes for dealing with a large range of radiation transport problems. However, the inherent drawbacks of these codes - laborious input file creation and long computation times - contrast with the maturity of their treatment of the physical phenomena. The goal of the recent AREVA developments was to reach an efficiency similar to that of other mature engineering disciplines such as finite element analyses (e.g., structural or fluid dynamics). Among the main objectives, the creation of a graphical user interface offering CAD tools for geometry creation and other graphical features dedicated to the radiation field (source definition, tally definition) has been reached. Computation times are drastically reduced compared to a few years ago thanks to the use of massively parallel runs and, above all, the implementation of hybrid variance reduction techniques. Engineering teams are now able to deliver much more prompt support to any nuclear project dealing with reactors or fuel cycle facilities, from the conceptual phase to decommissioning.
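
    For readers unfamiliar with variance reduction, here is a generic sketch of one classic ingredient, implicit capture (survival weighting) with Russian roulette, for a toy 1-D slab attenuation problem. This is standard Monte Carlo practice, not AREVA's or MCNP/TRIPOLI's implementation; scattering is kept forward-only for brevity.

      import math
      import random

      def transmission(sigma_t, sigma_a, thickness, histories=100_000):
          """Transmitted weight fraction through a 1-D slab (forward-only toy)."""
          score = 0.0
          for _ in range(histories):
              x, w = 0.0, 1.0
              while True:
                  x += -math.log(random.random()) / sigma_t   # sample free flight
                  if x >= thickness:
                      score += w                    # tally surviving weight
                      break
                  w *= 1.0 - sigma_a / sigma_t      # implicit capture: never kill,
                                                    # just reduce the weight
                  if w < 0.1:                       # Russian roulette on low weights
                      if random.random() < 0.5:
                          break
                      w *= 2.0
          return score / histories

      # e.g. transmission(sigma_t=1.0, sigma_a=0.5, thickness=3.0)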

  11. Traveling-wave-tube simulation: The IBC (Interactive Beam-Circuit) code

    Energy Technology Data Exchange (ETDEWEB)

    Morey, I.J.; Birdsall, C.K.

    1989-09-26

    Interactive Beam-Circuit (IBC) is a one-dimensional many-particle simulation code which has been developed to run interactively on a PC or workstation while displaying most of the important physics of a traveling-wave tube. The code is a substantial departure from previous efforts, since it follows all of the particles in the tube, rather than just those in one wavelength, as is commonly done. This step allows for nonperiodic inputs in time, a nonuniform line, and a large set of spatial diagnostics. The primary aim is to complement a microwave tube lecture course, although past experience has shown that such codes readily become research tools. Simple finite difference methods are used to model the fields of the coupled slow-wave transmission line. The coupling between the beam and the transmission line is based upon the finite difference equations of Brillouin. Space-charge effects are included in a manner similar to that used by Hess; the original part is the use of particle-in-cell techniques to model the space-charge fields. 11 refs., 11 figs.
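
    A minimal sketch of the particle-in-cell ingredient mentioned above: cloud-in-cell (linear) deposition of particle charge onto a periodic 1-D grid, from which space-charge fields can then be computed. This is generic PIC practice for illustration only; IBC's coupling to the slow-wave line is not reproduced here.

      import numpy as np

      def deposit_cic(x_particles, q, nx, dx):
          """Deposit charge q per particle onto nx periodic cells of width dx."""
          rho = np.zeros(nx)
          for xp in x_particles:
              j = int(xp // dx) % nx            # left grid point (periodic wrap)
              frac = xp / dx - int(xp // dx)    # fractional position in the cell
              rho[j] += q * (1.0 - frac) / dx   # linear weights to the two
              rho[(j + 1) % nx] += q * frac / dx  # nearest grid points
          return rho

      positions = np.random.default_rng(0).uniform(0.0, 1.0, 1000)
      rho = deposit_cic(positions, q=1e-3, nx=32, dx=1.0 / 32)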

  12. 76 FR 56413 - Building Energy Codes Cost Analysis

    Science.gov (United States)

    2011-09-13

    ... ft high ceilings. Area above unconditioned crawlspace or...: 1200 ft², over a vented space... public input on this issue. Estimating the Cost Effectiveness of Code Changes - Economic Metrics To Be... calculate three metrics: life-cycle cost, simple payback period, and cash flow. Life-cycle cost (LCC) is the...
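
    The first two metrics named in the notice are straightforward to compute; here is a short sketch with made-up example numbers (incremental first cost of a code change, annual energy savings, discount rate, analysis period), not figures from the notice itself.

      def simple_payback(first_cost, annual_savings):
          """Years for undiscounted savings to repay the added first cost."""
          return first_cost / annual_savings

      def life_cycle_cost(first_cost, annual_cost, rate, years):
          """Relative LCC of a code change: first cost plus the present value
          of the annual cash flow (negative annual_cost = savings)."""
          pv = sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))
          return first_cost + pv

      extra_cost, savings = 1200.0, 150.0            # illustrative values
      print(simple_payback(extra_cost, savings))     # 8.0 years
      print(life_cycle_cost(extra_cost, -savings, 0.03, 30))  # negative favors the change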

  13. PRESTO-PREP: a data preprocessor for the PRESTO-II code

    Energy Technology Data Exchange (ETDEWEB)

    Bell, M.A.; Emerson, C.J.; Fields, D.E.

    1984-07-01

    PRESTO-II is a computer code developed to evaluate possible health effects from shallow-land disposal of low-level radioactive wastes. PRESTO-PREP is a data preprocessor that has been developed to expedite the creation of input data sets for PRESTO-II. PRESTO-PREP utilizes a library of nuclide- and risk-specific data. Given an initial waste inventory, the code creates the radionuclide portion of the associated input data set for PRESTO-II. 2 references.
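
    Illustrative sketch of what a preprocessor of this kind does: expand a waste inventory against a nuclide library into the radionuclide block of an input deck. The library fields, values, and card format below are hypothetical, not PRESTO-II's actual format.

      NUCLIDE_LIBRARY = {
          "Cs-137": {"half_life_yr": 30.1, "dose_factor": 1.3e-8},
          "Sr-90":  {"half_life_yr": 28.8, "dose_factor": 2.8e-8},
      }

      def radionuclide_cards(inventory_ci):
          """Format one card per nuclide: name, activity (Ci), library data."""
          cards = []
          for nuclide, activity in inventory_ci.items():
              lib = NUCLIDE_LIBRARY[nuclide]
              cards.append(f"{nuclide:8s} {activity:10.3e} "
                           f"{lib['half_life_yr']:8.2f} {lib['dose_factor']:10.3e}")
          return "\n".join(cards)

      print(radionuclide_cards({"Cs-137": 4.2e2, "Sr-90": 1.1e2}))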

  14. Probabilistic analysis of PWR and BWR fuel rod performance using the code CASINO-SLEUTH

    Energy Technology Data Exchange (ETDEWEB)

    Bull, A.J.

    1987-05-01

    This paper presents a brief description of the Monte Carlo and response surface techniques used in the code, together with a probabilistic analysis of fuel rod performance in PWR and BWR applications. The analysis shows that fission gas release predictions are very sensitive to changes in certain of the code's inputs; it identifies the most dominant input parameters and compares their effects in the two cases.
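
    A generic sketch of the Monte Carlo part of such an analysis: sample the uncertain inputs, evaluate a (here, deliberately toy) fission-gas-release model, and rank input importance by correlation with the output. The model form and parameter names are invented for illustration, not taken from CASINO-SLEUTH.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 5000
      inputs = {
          "diffusion_coeff": rng.lognormal(0.0, 0.3, n),
          "grain_size":      rng.normal(10.0, 1.0, n),
          "temperature":     rng.normal(1200.0, 50.0, n),
      }
      # Toy response: strongly nonlinear in temperature, weak in grain size
      release = (inputs["diffusion_coeff"]
                 * np.exp((inputs["temperature"] - 1200.0) / 100.0)
                 / inputs["grain_size"] ** 0.5)

      for name, x in inputs.items():                 # dominant inputs stand out
          r = np.corrcoef(x, release)[0, 1]
          print(f"{name:16s} correlation with release: {r:+.2f}")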

  15. Temporal firing reliability in response to periodic synaptic inputs

    Science.gov (United States)

    Hunter, John D.; Milton, John G.

    1998-03-01

    Reliable spike timing in the presence of noise is a prerequisite for a spike timing code. Previously we demonstrated that there is an intimate relationship between phase-locked firing patterns and spike timing reliability in the presence of noise: stable 1:m phase-locking generates reliable firing times; n:m phase-locked solutions with n ≠ 1 generate significantly less reliable spike times, where n is the number of spikes in m cycles of the stimulus. Here we compare spike timing reliability in an Aplysia motoneuron to that in a leaky integrate-and-fire neuron receiving either realistic periodic excitatory (EPSC) or inhibitory (IPSC) post-synaptic currents. For the same frequency and for identical synaptic time courses, EPSCs and IPSCs have opposite effects on spike timing reliability. This effect is shown to be a direct consequence of changes in the DC component of the input. Thus spike-time reliability is sensitively controlled by the interplay between the frequency and the DC component of the input to the neuron.
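
    A sketch of the kind of comparison described above: a leaky integrate-and-fire neuron driven by a periodic current plus noise, with spike-time reliability judged by the trial-to-trial jitter of spike times. All parameter values are illustrative, not those of the study.

      import numpy as np

      def lif_spike_times(dc, amp, freq, noise, T=2.0, dt=1e-4, tau=0.02, vth=1.0):
          """Spike times of a leaky integrate-and-fire neuron with sinusoidal
          drive (DC offset dc, amplitude amp, frequency freq) plus white noise."""
          rng = np.random.default_rng()
          spikes, v = [], 0.0
          for ti in np.arange(0.0, T, dt):
              i_in = dc + amp * np.sin(2 * np.pi * freq * ti)
              v += dt / tau * (-v + i_in) + noise * np.sqrt(dt) * rng.standard_normal()
              if v >= vth:          # threshold crossing: record spike, reset
                  spikes.append(ti)
                  v = 0.0
          return np.array(spikes)

      # A larger DC component favors stable 1:m locking and hence lower
      # trial-to-trial jitter of the spike times across repeated trials.
      trials = [lif_spike_times(dc=1.2, amp=0.3, freq=10.0, noise=0.2)
                for _ in range(20)]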

  16. Analysis of LT Codes with Unequal Recovery Time

    CERN Document Server

    Sørensen, Jesper H; Østergaard, Jan

    2012-01-01

    In this paper we analyze a specific class of rateless codes, called LT codes with unequal recovery time. These codes provide the option of prioritizing some segments of the transmitted data over others. The result is that segments are decoded in stages during the rateless transmission, with higher-priority segments decoded at lower overhead. Our analysis focuses on quantifying the expected number of received symbols that are redundant already upon arrival, i.e., all input symbols contained in the received symbol have already been decoded. This analysis gives novel insights into the probabilistic mechanisms of LT codes with unequal recovery time, which have not previously been available in the literature. We show that while these rateless codes successfully provide the unequal recovery time, they do so at a significant price in terms of redundancy in the lower-priority segments. We propose and analyze a modification where a single intermediate feedback is transmitted when the first segment is decoded...
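
    For orientation, here is a minimal LT-style encoder sketch: each output symbol XORs a randomly chosen set of input symbols. The degree distribution below is a crude stand-in for the robust soliton distribution, and no unequal-recovery-time weighting of segments is applied.

      import random
      from functools import reduce

      def lt_encode(data_blocks, n_output, rng=random.Random(0)):
          """Produce n_output (neighbor_list, xor_symbol) pairs from k blocks."""
          k = len(data_blocks)
          out = []
          for _ in range(n_output):
              # Crude soliton-like degree: P(d) ~ 1/(d(d+1)), capped at k
              degree = min(k, int(1.0 / (1.0 - rng.random())))
              neighbors = rng.sample(range(k), degree)
              symbol = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                              (data_blocks[i] for i in neighbors))
              out.append((neighbors, symbol))
          return out

      blocks = [bytes([i] * 4) for i in range(8)]
      encoded = lt_encode(blocks, 16)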

  17. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Moerman Ingrid

    2007-01-01

    Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.

  18. Scalable Multiple-Description Image Coding Based on Embedded Quantization

    Directory of Open Access Journals (Sweden)

    Augustin I. Gavrilescu

    2007-02-01

    Scalable multiple-description (MD) coding allows for fine-grain rate adaptation as well as robust coding of the input source. In this paper, we present a new approach for scalable MD coding of images, which couples the multiresolution nature of the wavelet transform with the robustness and scalability features provided by embedded multiple-description scalar quantization (EMDSQ). Two coding systems are proposed that rely on quadtree coding to compress the side descriptions produced by EMDSQ. The proposed systems are capable of dynamically adapting the bitrate to the available bandwidth while providing robustness to data losses. Experiments performed under different simulated network conditions demonstrate the effectiveness of the proposed scalable MD approach for image streaming over error-prone channels.
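
    A toy sketch of the multiple-description idea underlying both records above: two coarse descriptions of a sample built from offset (staggered) scalar quantizers, so either description alone gives a coarse reconstruction while both together refine it. Embedded MDSQ as used in the paper is more sophisticated; this only conveys the principle.

      def describe(x, step=1.0):
          d1 = round(x / step)                    # description 1: plain quantizer
          d2 = round((x - step / 2.0) / step)     # description 2: offset by step/2
          return d1, d2

      def reconstruct(d1=None, d2=None, step=1.0):
          if d1 is not None and d2 is not None:   # central decoder: finer cell
              return (d1 * step + d2 * step + step / 2.0) / 2.0
          if d1 is not None:                      # side decoders: coarse estimate
              return d1 * step
          return d2 * step + step / 2.0

      d1, d2 = describe(3.8)
      print(reconstruct(d1, d2), reconstruct(d1=d1), reconstruct(d2=d2))
      # -> 3.75 4.0 3.5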

  19. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
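
    Since the code is linear, its minimum distance equals its minimum nonzero codeword weight, which for small parameters can be checked by brute force over $GF(3)$. The generator matrix below is an illustrative $[5,2]_3$ example, not one of the 22 new codes from the paper.

      import itertools
      import numpy as np

      G = np.array([[1, 0, 1, 1, 2],
                    [0, 1, 2, 1, 1]])       # generator of a [5,2] ternary code

      def min_distance(G, q=3):
          """Minimum weight over all nonzero codewords mG (mod q)."""
          k, n = G.shape
          best = n
          for msg in itertools.product(range(q), repeat=k):
              if not any(msg):
                  continue                  # skip the zero codeword
              cw = np.mod(np.array(msg) @ G, q)
              best = min(best, int(np.count_nonzero(cw)))
          return best

      print(min_distance(G))   # -> 3 for this example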

  20. Bit-coded regular expression parsing

    DEFF Research Database (Denmark)

    Nielsen, Lasse; Henglein, Fritz

    2011-01-01

    Regular expression parsing is the problem of producing a parse tree of a string for a given regular expression. We show that a compact bit representation of a parse tree can be produced efficiently, in time linear in the product of input string size and regular expression size, by simplifying the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings...
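
    To convey the flavor of bit-coding a parse tree (this toy mirrors the representation, not the paper's DFA-based algorithm): for an alternation one bit picks the branch, and for a Kleene star a 1 bit is emitted per iteration followed by a terminating 0; literals need no bits. The AST and parse encodings below are illustrative.

      def encode(regex, parse):
          """Emit the bit coding of a parse tree for a tiny regex AST."""
          kind = regex[0]
          if kind == "chr":                     # literal: no bits needed
              return []
          if kind == "alt":                     # parse = ("left"|"right", subparse)
              side, sub = parse
              bit = [0] if side == "left" else [1]
              return bit + encode(regex[1 if side == "left" else 2], sub)
          if kind == "star":                    # parse = list of iteration parses
              bits = []
              for sub in parse:
                  bits += [1] + encode(regex[1], sub)
              return bits + [0]
          raise ValueError(kind)

      # (a|b)* matching "abb": iterations pick left, right, right
      re_ab_star = ("star", ("alt", ("chr", "a"), ("chr", "b")))
      print(encode(re_ab_star, [("left", None), ("right", None), ("right", None)]))
      # -> [1, 0, 1, 1, 1, 1, 0]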