WorldWideScience

Sample records for code typical a-level

  1. A multi-level code for metallurgical effects in metal-forming processes

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, P.A.; Silling, S.A. [Sandia National Labs., Albuquerque, NM (United States). Computational Physics and Mechanics Dept.; Hughes, D.A.; Bammann, D.J.; Chiesa, M.L. [Sandia National Labs., Livermore, CA (United States)

    1997-08-01

    The authors present the final report on a Laboratory-Directed Research and Development (LDRD) project, A Multi-level Code for Metallurgical Effects in Metal-Forming Processes, performed during fiscal years 1995 and 1996. The project focused on the development of new modeling capabilities for simulating forging and extrusion processes that typically display phenomenology occurring on two different length scales. In support of model fitting and code validation, ring compression and extrusion experiments were performed on 304L stainless steel, a material of interest in DOE nuclear weapons applications.

  2. Typical performance of regular low-density parity-check codes over general symmetric channels

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Toshiyuki [Department of Electronics and Information Engineering, Tokyo Metropolitan University, 1-1 Minami-Osawa, Hachioji-shi, Tokyo 192-0397 (Japan); Saad, David [Neural Computing Research Group, Aston University, Aston Triangle, Birmingham B4 7ET (United Kingdom)

    2003-10-31

    The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.
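The decoding side of the LDPC codes studied above can be sketched with the classic hard-decision bit-flipping algorithm. The tiny parity-check matrix below is the (7,4) Hamming code, used purely as a stand-in (real LDPC matrices are much larger and sparser); the flipping rule, however, is the Gallager-style one whose typical behaviour the statistical-mechanics analysis addresses.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, a tiny stand-in
# for a sparse LDPC matrix (real LDPC codes are much larger).
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def syndrome(H, x):
    """Parity checks over GF(2): an all-zero syndrome means x is a codeword."""
    return H.dot(x) % 2

def bit_flip_decode(H, y, max_iters=20):
    """Hard-decision bit-flipping: repeatedly flip the bit involved in the
    largest number of unsatisfied parity checks."""
    x = y.copy()
    for _ in range(max_iters):
        s = syndrome(H, x)
        if not s.any():
            break                    # all checks satisfied: done
        unsat = s.dot(H)             # unsatisfied checks touching each bit
        x[np.argmax(unsat)] ^= 1     # flip the most-suspect bit
    return x

y = np.zeros(7, dtype=int)
y[6] = 1                             # channel flips one bit of the zero codeword
print(bit_flip_decode(H, y))         # recovers the all-zero codeword
```

For a binary-input output-symmetric channel as in the paper, the hard decisions in `y` would come from thresholding the channel output; soft-decision belief propagation replaces the flipping rule with message passing on the same matrix.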

  3. Typical performance of regular low-density parity-check codes over general symmetric channels

    International Nuclear Information System (INIS)

    Tanaka, Toshiyuki; Saad, David

    2003-01-01

    The typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The relationship between the free energy in the statistical-mechanics approach and the mutual information used in the information-theory literature is established within a general framework; Gallager and MacKay-Neal codes are studied as specific examples of LDPC codes. It is shown that basic properties of these codes known for particular channels, including their potential to saturate Shannon's bound, hold for general symmetric channels. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel models.

  4. Simulation of a severe accident at a typical PWR due to break of a hot leg ECCS line using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Min; Sabundjian, Gaianê, E-mail: smlee@ipen.br, E-mail: gdjian@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-11-01

    The aim of this work was to simulate a severe accident at a typical PWR caused by a break in the Emergency Core Cooling System (ECCS) line of a hot leg, using the MELCOR code. The nodalization of this typical PWR was elaborated by the Global Research for Safety (GRS) and provided to CNEN for analysis of severe accidents at Angra 2, which is similar to that PWR. Although the two plants are not identical, the results obtained for the typical PWR may be valuable given the lack of officially published calculations for Angra 2. Relevant parameters such as pressure, temperature, and water level in various control volumes after the break in the hot leg were calculated, as well as the degree of core degradation and the hydrogen concentration in the containment. The results obtained in this work can be considered satisfactory in the sense that the physical phenomena reproduced by the simulation were in general very reasonable, and most of the events occurred within acceptable time intervals. However, an uncertainty analysis was not carried out in this work. Furthermore, this scenario could be used as a basis for studying the effectiveness of preventive and/or mitigating measures of the Severe Accident Management Guidelines (SAMG) by adding the conditions associated with each measure to the input. (author)

  5. Simulation of a severe accident at a typical PWR due to break of a hot leg ECCS line using MELCOR code

    International Nuclear Information System (INIS)

    Lee, Seung Min; Sabundjian, Gaianê

    2017-01-01

    The aim of this work was to simulate a severe accident at a typical PWR caused by a break in the Emergency Core Cooling System (ECCS) line of a hot leg, using the MELCOR code. The nodalization of this typical PWR was elaborated by the Global Research for Safety (GRS) and provided to CNEN for analysis of severe accidents at Angra 2, which is similar to that PWR. Although the two plants are not identical, the results obtained for the typical PWR may be valuable given the lack of officially published calculations for Angra 2. Relevant parameters such as pressure, temperature, and water level in various control volumes after the break in the hot leg were calculated, as well as the degree of core degradation and the hydrogen concentration in the containment. The results obtained in this work can be considered satisfactory in the sense that the physical phenomena reproduced by the simulation were in general very reasonable, and most of the events occurred within acceptable time intervals. However, an uncertainty analysis was not carried out in this work. Furthermore, this scenario could be used as a basis for studying the effectiveness of preventive and/or mitigating measures of the Severe Accident Management Guidelines (SAMG) by adding the conditions associated with each measure to the input. (author)

  6. SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code

    Directory of Open Access Journals (Sweden)

    Richard Stahl

    2007-01-01

    Full Text Available A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.

  7. Multi-level trellis coded modulation and multi-stage decoding

    Science.gov (United States)

    Costello, Daniel J., Jr.; Wu, Jiantian; Lin, Shu

    1990-01-01

    Several constructions for multi-level trellis codes are presented and many codes with better performance than previously known codes are found. These codes provide a flexible trade-off between coding gain, decoding complexity, and decoding delay. New multi-level trellis coded modulation schemes using generalized set partitioning methods are developed for Quadrature Amplitude Modulation (QAM) and Phase Shift Keying (PSK) signal sets. New rotationally invariant multi-level trellis codes which can be combined with differential encoding to resolve phase ambiguity are presented.

  8. On decoding of multi-level MPSK modulation codes

    Science.gov (United States)

    Lin, Shu; Gupta, Alok Kumar

    1990-01-01

    The decoding problem of multi-level block modulation codes is investigated. The hardware design of a soft-decision Viterbi decoder for some short-length 8-PSK block modulation codes is presented. An effective way to reduce the hardware complexity of the decoder by reducing the branch and path metrics, using a non-uniform floating-point-to-integer mapping scheme, is proposed and discussed. Simulation results for the design are presented. The multi-stage decoding (MSD) of multi-level modulation codes is also investigated. The cases of soft-decision and hard-decision MSD are considered and their performance is evaluated for several codes of different lengths and different minimum squared Euclidean distances. It is shown that soft-decision MSD reduces the decoding complexity drastically, though it is suboptimum. Hard-decision MSD further simplifies the decoding while still maintaining a reasonable coding gain over the uncoded system, if the component codes are chosen properly. Finally, some basic 3-level 8-PSK modulation codes using BCH codes as component codes are constructed and their coding gains are found for hard-decision multi-stage decoding.

  9. Multi-stage decoding of multi-level modulation codes

    Science.gov (United States)

    Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.

    1991-01-01

    Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^(-6).

  10. Two-Level Semantics and Code Generation

    DEFF Research Database (Denmark)

    Nielson, Flemming; Nielson, Hanne Riis

    1988-01-01

    A two-level denotational metalanguage that is suitable for defining the semantics of Pascal-like languages is presented. The two levels allow for an explicit distinction between computations taking place at compile-time and computations taking place at run-time. While this distinction is perhaps not absolutely necessary for describing the input-output semantics of programming languages, it is necessary when issues such as data flow analysis and code generation are considered. For an example stack-machine, the authors show how to generate code for the run-time computations and still perform the compile…

  11. Bi-level image compression with tree coding

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new…

  12. Multi-stage decoding for multi-level block modulation codes

    Science.gov (United States)

    Lin, Shu

    1991-01-01

    In this paper, we investigate various types of multi-stage decoding for multi-level block modulation codes, in which the decoding of a component code at each stage can be either soft-decision or hard-decision, maximum likelihood or bounded-distance. The error performance of the codes is analyzed for a memoryless additive channel based on various types of multi-stage decoding, and upper bounds on the probability of an incorrect decoding are derived. Based on our study and computation results, we find that, if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, we find that the difference in performance between the suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and the single-stage optimum decoding of the overall code is very small: only a fraction of a dB loss in SNR at a block error probability of 10^(-6). Multi-stage decoding of multi-level modulation codes really offers a way to achieve the best of three worlds: bandwidth efficiency, coding gain, and decoding complexity.
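A minimal sketch of the multi-stage idea described above, under assumptions of my own choosing: a 2-level code over 4-ASK with set-partition labeling, length-3 repetition codes as the component codes, and an AWGN channel. Stage 1 decodes the subset-selecting bit from distances to the nearest point of each subset; stage 2 decodes within the subset chosen at stage 1.

```python
import numpy as np

# 4-ASK with set-partition labeling: b1 picks the subset, b2 the point inside it.
# The two subsets {-3, 1} and {-1, 3} have intra-subset distance 4, so the
# second stage sees a much better effective channel than the first.
POINT = {(0, 0): -3.0, (0, 1): 1.0, (1, 0): -1.0, (1, 1): 3.0}

def encode(b1, b2):
    """Each level bit is protected by a length-3 repetition code (toy component codes)."""
    return np.array([POINT[(b1, b2)]] * 3)

def stage1(r):
    """Decode b1: per-symbol distance to the nearest point of each subset, summed."""
    m0 = sum(min((x + 3) ** 2, (x - 1) ** 2) for x in r)   # subset for b1 = 0
    m1 = sum(min((x + 1) ** 2, (x - 3) ** 2) for x in r)   # subset for b1 = 1
    return 0 if m0 <= m1 else 1

def stage2(r, b1):
    """Decode b2 inside the subset fixed by the stage-1 decision (majority vote)."""
    pts = {0: (-3.0, 1.0), 1: (-1.0, 3.0)}[b1]
    votes = sum((x - pts[1]) ** 2 < (x - pts[0]) ** 2 for x in r)
    return 1 if votes >= 2 else 0

rng = np.random.default_rng(0)
r = encode(1, 0) + 0.5 * rng.standard_normal(3)   # transmit (b1, b2) = (1, 0) over AWGN
b1 = stage1(r)
b2 = stage2(r, b1)
print(b1, b2)   # -> 1 0 for this seed
```

The suboptimality discussed in the abstract enters exactly here: stage 2 trusts the stage-1 decision instead of jointly searching all four points per symbol.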

  13. Lossy/lossless coding of bi-level images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1997-01-01

    Summary form only given. We present improvements to a general type of lossless, lossy, and refinement coding of bi-level images (Martins and Forchhammer, 1996). Loss is introduced by flipping pixels. The pixels are coded using arithmetic coding of conditional probabilities obtained using a template, as is known from JBIG and proposed in JBIG-2 (Martins and Forchhammer). Our new state-of-the-art results are obtained using the more general free tree instead of a template. Also we introduce multiple refinement template coding. The lossy algorithm is analogous to the greedy `rate…

  14. Analysis of radiological consequences in a typical BWR with a mark-II containment

    International Nuclear Information System (INIS)

    Funayama, Kyoko; Kajimoto, Mitsuhiro

    2003-01-01

    INS/NUPEC in Japan has been carrying out the Level 3 PSA program. In the program, the MACCS2 code has been extensively applied to analyze radiological consequences for typical BWR and PWR plants in Japan. The present study deals with analysis of the effects of accident management (AM) measures, which were implemented by industry, on the radiological consequences for a typical BWR with a Mark-II containment. In the present study, source terms and their frequencies were based on the results of a Level 2 PSA taking AM countermeasures into account. Radiological consequences are presented as dose risks (Sv/ry), i.e. doses (Sv) multiplied by containment damage frequencies (/ry), together with the timing of radionuclide release to the environment. The results of the present study indicate that the dose risks become negligible in most cases when AM countermeasures and evacuations are taken. (author)
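The dose-risk measure used above is a simple product of a dose and a frequency; the sketch below just makes the unit bookkeeping explicit. The scenario names and numbers are made-up illustrations, not values from the study.

```python
# Dose risk as defined in the abstract: dose (Sv) multiplied by containment
# damage frequency (/reactor-year), giving Sv per reactor-year.
scenarios = [
    {"name": "early release, with AM", "dose_sv": 0.5, "freq_per_ry": 1e-8},
    {"name": "late release, with AM",  "dose_sv": 0.1, "freq_per_ry": 5e-8},
]

for s in scenarios:
    s["dose_risk_sv_per_ry"] = s["dose_sv"] * s["freq_per_ry"]
    print(s["name"], s["dose_risk_sv_per_ry"])
```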

  15. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  16. SYMBOL LEVEL DECODING FOR DUO-BINARY TURBO CODES

    Directory of Open Access Journals (Sweden)

    Yogesh Beeharry

    2017-05-01

    Full Text Available This paper investigates the performance of three different symbol-level decoding algorithms for Duo-Binary Turbo codes. Explicit details of the computations involved in the three decoding techniques, together with a computational complexity analysis, are given. Simulation results with different couple lengths, code-rates, and QPSK modulation reveal that symbol-level decoding with bit-level information outperforms symbol-level decoding by 0.1 dB on average in the error floor region. Moreover, the complexity analysis reveals that symbol-level decoding with bit-level information reduces the decoding complexity by 19.6% in terms of the total number of computations required for each half-iteration as compared to symbol-level decoding.

  17. Generation of Efficient High-Level Hardware Code from Dataflow Programs

    OpenAIRE

    Siret , Nicolas; Wipliez , Matthieu; Nezan , Jean François; Palumbo , Francesca

    2012-01-01

    High-level synthesis (HLS) aims at reducing the time-to-market by providing an automated design process that interprets and compiles high-level abstraction programs into hardware. However, HLS tools still face limitations regarding the performance of the generated code, due to the difficulties of compiling input imperative languages into efficient hardware code. Moreover, the hardware code generated by HLS tools is usually target-dependent and at a low level of abstraction (i.e. gate-level…

  18. GRABGAM: A Gamma Analysis Code for Ultra-Low-Level HPGe SPECTRA

    Energy Technology Data Exchange (ETDEWEB)

    Winn, W.G.

    1999-07-28

    The GRABGAM code has been developed for the analysis of ultra-low-level HPGe gamma spectra. The code employs three different size filters for the peak search, where the largest filter provides the best sensitivity for identifying low-level peaks and the smallest filter has the best resolution for distinguishing peaks within a multiplet. GRABGAM generates an integral probability F-function for each singlet or multiplet peak analysis, bypassing the usual peak-fitting analysis based on a differential f-function probability model. Because F is defined by the peak data, the statistical limitations of peak fitting are avoided; nevertheless, the F-function provides generic values for peak centroid, full width at half maximum, and tail that are consistent with a Gaussian formalism. GRABGAM has successfully analyzed over 10,000 customer samples, and it interfaces with a variety of supplementary codes for deriving detector efficiencies, backgrounds, and quality checks.
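The F-function idea above can be sketched as follows: build the integral probability directly from background-subtracted peak counts, then read off the centroid and a Gaussian-consistent FWHM from quantiles instead of fitting a differential peak shape. The channel data and the 1-sigma quantile levels (0.1587/0.8413) are my illustrative assumptions, not GRABGAM's actual implementation.

```python
import numpy as np

# Hypothetical peak region: background-subtracted counts in consecutive channels.
chans = np.arange(100, 111)
counts = np.array([2, 8, 30, 80, 140, 160, 135, 75, 28, 9, 3], dtype=float)

# Integral probability F-function defined directly by the data (no fitting).
F = np.cumsum(counts) / counts.sum()

def f_inverse(p):
    """Channel at which the integral probability reaches p (linear interpolation)."""
    return np.interp(p, F, chans)

# Centroid at the median; sigma from the +/-1-sigma quantiles of a Gaussian,
# so the reported FWHM is consistent with a Gaussian formalism.
centroid = f_inverse(0.5)
sigma = (f_inverse(0.8413) - f_inverse(0.1587)) / 2.0
fwhm = 2.3548 * sigma
print(centroid, fwhm)
```

Because F is monotone by construction, these quantile reads are always well defined, even for peaks too weak or too distorted for a stable Gaussian fit.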

  19. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    Wavefront coding for athermalization can effectively ensure the stability of optical system imaging over a large temperature range, with the additional advantages of compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront-coding athermal system under several typical temperature gradient distributions helps characterize how the system behaves in non-ideal temperature environments and supports realization of the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront-coding athermal system. Axial and radial temperature gradients of different degrees are applied to the whole system using SolidWorks, yielding the changes in curvature, refractive index, and the distances between the lenses. We then import the deformed model into ZEMAX for ray tracing and obtain the changes of the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront-coding athermal system and the restorability of the image, which provides a basis and reference for the optimal design of such systems. The results show that the adaptability of a single-material infrared wavefront-coding athermal system to an axial temperature gradient can reach an upper limit of temperature fluctuation of 60°C, much higher than that for a radial temperature gradient.

  20. Application of improved air transport data and wall transmission/reflection data in the SKYSHINE code to typical BWR turbine skyshine

    Energy Technology Data Exchange (ETDEWEB)

    Tayama, Ryuichi; Hayashi, Katsumi [Hitachi Engineering Co. Ltd., Ibaraki (Japan); Hirayama, Hideo [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Sakamoto, Yukio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Harima, Yoshiko; Ishikawa, Satoshi [CRC Research Institute Inc., Tokyo (Japan); Hayashida, Yoshihisa [Toshiba Corp., Kawasaki, Kanagawa (Japan); Nemoto, Makoto [Visible Information Center, Tokai, Ibaraki (Japan); Sato, Osamu [Mitsubishi Research Inst., Inc., Tokyo (Japan)

    2000-03-01

    Three basic sets of data, i.e. air transport data and material transmission/reflection data, included in the SKYSHINE program have been improved using up-to-date data and methods, and applied to skyshine dose calculations for a typical BWR turbine building. The direct and skyshine dose rates from the original SKYSHINE code show good agreement with MCNP Monte Carlo calculations except at distances less than 0.1 km. The results from the improved SKYSHINE code also agree with the MCNP code within 10-20%. The discrepancy of 10-20% may be due to the improved concrete transmission data at small incident and exit angles. We continue to improve the three sets of data and to investigate different calculational models to obtain more accurate results. (author)

  1. Psacoin level 1A intercomparison probabilistic system assessment code (PSAC) user group

    International Nuclear Information System (INIS)

    Nies, A.; Laurens, J.M.; Galson, D.A.; Webster, S.

    1990-01-01

    This report describes an international code intercomparison exercise conducted by the NEA Probabilistic System Assessment Code (PSAC) User Group. The PSACOIN Level 1A exercise is the third of a series designed to contribute to the verification of probabilistic codes that may be used in assessing the safety of radioactive waste disposal systems or concepts. Level 1A is based on a more realistic system model than that used in the two previous exercises, and involves deep geological disposal concepts with a relatively complex structure of the repository vault. The report compares results and draws conclusions with regard to the use of different modelling approaches and the possible importance to safety of various processes within and around a deep geological repository. In particular, the relative significance of model uncertainty and data variability is discussed

  2. Research on coding and decoding method for digital levels

    Energy Technology Data Exchange (ETDEWEB)

    Tu Lifen; Zhong Sidong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  3. Research on coding and decoding method for digital levels.

    Science.gov (United States)

    Tu, Li-fen; Zhong, Si-dong

    2011-01-20

    A new coding and decoding method for digital levels is proposed. It is based on an area-array CCD sensor and adopts mixed coding technology. By taking advantage of redundant information in a digital image signal, the contradiction that the field of view and image resolution restrict each other in a digital level measurement is overcome, and the geodetic leveling becomes easier. The experimental results demonstrate that the uncertainty of measurement is 1 mm when the measuring range is between 2 m and 100 m, which can meet practical needs.

  4. Partial Safety Factors and Target Reliability Level in Danish Structural Codes

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Hansen, J. O.; Nielsen, T. A.

    2001-01-01

    The partial safety factors in the newly revised Danish structural codes have been derived using a reliability-based calibration. The calibrated partial safety factors result in the same average reliability level as in the previous codes, but a much more uniform reliability level has been obtained. The paper describes the code format, the stochastic models and the resulting optimised partial safety factors.

  5. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1999-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard the conditioning may be specified by a template. For better compression, the more general… to the specialized soft pattern matching techniques which work better for text. Template based refinement coding is applied for lossy-to-lossless refinement. Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG2, an emerging international standard for lossless/lossy compression of bi-level images.
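The template-based conditioning described above can be sketched by estimating the ideal code length that an arithmetic coder driven by per-context adaptive probabilities would achieve. The 7-pixel causal template and the Laplace estimator below are simplifying assumptions (JBIG uses 10-pixel templates and a state-machine probability estimator), and the arithmetic coder itself is omitted.

```python
import numpy as np

# Causal template: offsets (dy, dx) of previously coded neighbour pixels.
# A 7-pixel template for brevity; JBIG-style coders use 10-pixel templates.
TEMPLATE = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, -2), (-2, 0), (-2, -1)]

def context(img, y, x):
    """Pack the template pixels into a single integer context index."""
    ctx = 0
    for dy, dx in TEMPLATE:
        yy, xx = y + dy, x + dx
        bit = img[yy, xx] if 0 <= yy < img.shape[0] and 0 <= xx < img.shape[1] else 0
        ctx = (ctx << 1) | int(bit)
    return ctx

def ideal_code_length(img):
    """Adaptive per-context Laplace estimator; returns the ideal code length
    in bits, i.e. what an arithmetic coder fed these probabilities would emit."""
    counts = np.ones((1 << len(TEMPLATE), 2))   # Laplace smoothing per context
    bits = 0.0
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            c, b = context(img, y, x), int(img[y, x])
            p = counts[c, b] / counts[c].sum()
            bits += -np.log2(p)
            counts[c, b] += 1                   # sequential adaptation
    return bits

rng = np.random.default_rng(1)
img = (rng.random((32, 32)) < 0.1).astype(np.uint8)
print(ideal_code_length(img), "bits for", img.size, "pixels")
```

Replacing the fixed template with a free tree, as in the abstract, amounts to letting the conditioning pixels (and hence the context index) vary by context, which is where the extra compression comes from.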

  6. An Automatic Instruction-Level Parallelization of Machine Code

    Directory of Open Access Journals (Sweden)

    MARINKOVIC, V.

    2018-02-01

    Full Text Available Prevailing multicores and novel manycores have posed a great challenge for the modern day: parallelization of embedded software that is still written as sequential code. In this paper, automatic code parallelization is considered, focusing on developing a parallelization tool at the binary level and on validating this approach. A novel instruction-level parallelization algorithm for assembly code is developed: it uses register names after SSA conversion to find independent blocks of code and then schedules the independent blocks using METIS to achieve good load balance. Sequential consistency is verified, and validation is done by measuring program execution time on the target architecture. Great speedup, taken as the performance measure in the validation process, and optimal load balancing are achieved for multicore RISC processors with 2 to 16 cores (e.g. MIPS, MicroBlaze). In particular, for 16 cores, the average speedup is 7.92x, and in some cases it reaches 14x. The approach to automatic parallelization provided by this paper is useful to researchers and developers in the area of parallelization as the basis for further optimizations, as the back-end of a compiler, or as a code parallelization tool for an embedded system.
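A toy sketch of the block-finding step described above: in SSA form every register has a single definition, so instructions that share no registers are independent and can be grouped with union-find; the groups are then scheduled across cores for load balance. METIS is replaced here by a greedy longest-processing-time heuristic, and the instruction stream is invented for illustration.

```python
from collections import defaultdict

# Toy three-address instructions: (dest, src1, src2, cost).
# Hypothetical SSA-form stream: each register is defined exactly once.
instrs = [
    ("r1", "a", "b", 2), ("r2", "r1", "c", 3), ("r3", "d", "e", 2),
    ("r4", "r3", "f", 1), ("r5", "g", "h", 4), ("r6", "r5", "r5", 1),
]

# Union-find over register names: instructions sharing any register
# end up in the same (dependent) block.
parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for dest, s1, s2, _ in instrs:
    union(dest, s1)
    union(dest, s2)

blocks = defaultdict(int)               # block root -> total instruction cost
for dest, _, _, cost in instrs:
    blocks[find(dest)] += cost

# Greedy longest-processing-time schedule onto 2 cores (stand-in for METIS).
cores = [0, 0]
for cost in sorted(blocks.values(), reverse=True):
    i = cores.index(min(cores))
    cores[i] += cost
print(cores)   # -> [8, 5]
```

A real binary-level tool must also handle memory dependences and control flow, which is precisely what makes the paper's approach non-trivial; this sketch covers only the register-dataflow core of the idea.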

  7. A directory of computer codes suitable for stress analysis of HLW containers - Compas project

    International Nuclear Information System (INIS)

    1989-01-01

    This document reports the work carried out for the Compas project, which looked at the capabilities of various computer codes for the stress analysis of high-level nuclear-waste containers and overpacks. The report concentrates on codes used by the project partners, but also includes a number of the major commercial finite element codes. The report falls into two parts. The first part describes the capabilities of the codes, including details of the solution methods used, the types of analysis they can carry out, and the interfacing with pre- and post-processing packages. This is the more comprehensive section of the report. The second part looks at the performance of a selection of the codes (those used by the project partners) in a number of test problems that require calculations typical of those encountered in the design and analysis of high-level waste containers and overpacks.

  8. Coding Bootcamps : Building Future-Proof Skills through Rapid Skills Training

    OpenAIRE

    World Bank

    2017-01-01

    This report studies coding bootcamps, a new kind of rapid skills training program for the digital age. Coding bootcamps are typically short-term (three to six months), intensive and applied training courses provided by a third party that crowdsources the demand for low-skills tech talent. Coding bootcamps aim at entry-level tech employability (for example, junior developer), providing a 

  9. The impact of conventional dietary intake data coding methods on foods typically consumed by low-income African-American and White urban populations.

    Science.gov (United States)

    Mason, Marc A; Fanelli Kuczmarski, Marie; Allegro, Deanne; Zonderman, Alan B; Evans, Michele K

    2015-08-01

    Analysing dietary data to capture how individuals typically consume foods depends on the coding variables used. Individual foods consumed simultaneously, like coffee with milk, are given codes to identify these combinations. Our literature review revealed a lack of discussion about using combination codes in analysis. The present study identified foods consumed at mealtimes and by race when combination codes were or were not utilized. Duplicate analysis methods were performed on separate data sets. The original data set consisted of all foods reported; each food was coded as if it had been consumed individually. The revised data set was derived from the original data set by first isolating coded foods consumed as individual items from those consumed simultaneously and assigning a code to designate a combination. Foods assigned a combination code, like pancakes with syrup, were aggregated and associated with a food group, defined by the major food component (i.e. pancakes), and then appended to the isolated coded foods. Healthy Aging in Neighborhoods of Diversity across the Life Span study. African-American and White adults with two dietary recalls (n = 2177). Differences existed in the lists of foods most frequently consumed by mealtime and race when comparing results based on the original and revised data sets. African Americans reported consumption of sausage/luncheon meat and poultry, while ready-to-eat cereals and cakes/doughnuts/pastries were reported by Whites. Use of combination codes provided a more accurate representation of how foods were consumed by these populations. This information is beneficial when creating interventions and exploring diet-health relationships.
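The combination-code idea can be sketched on invented recall records: without combination codes every food is tallied individually; with them, foods sharing a combination identifier collapse into a single entry keyed by the major (here, first-listed) component. The field names and records are hypothetical, not the study's actual coding scheme.

```python
from collections import Counter

# Hypothetical recall records: (respondent, meal, food, combination_id).
# Foods eaten together share a combination_id; None marks a food eaten alone.
records = [
    ("p1", "breakfast", "pancakes", "c1"),   # pancakes with syrup: one combination
    ("p1", "breakfast", "syrup", "c1"),
    ("p1", "breakfast", "coffee", None),
    ("p2", "breakfast", "ready-to-eat cereal", None),
    ("p2", "breakfast", "milk", "c2"),       # milk with coffee: another combination
    ("p2", "breakfast", "coffee", "c2"),
]

def tally(records, use_combination_codes):
    """Count reported foods, optionally collapsing each combination into a
    single entry named after its major (first-listed) component."""
    counts, seen = Counter(), set()
    for person, meal, food, combo in records:
        if combo is None or not use_combination_codes:
            counts[food] += 1
        elif (person, meal, combo) not in seen:
            seen.add((person, meal, combo))
            counts[food + " (combination)"] += 1
    return counts

print(tally(records, use_combination_codes=False))
print(tally(records, use_combination_codes=True))
```

Comparing the two tallies reproduces the paper's point in miniature: the most-frequent-food list changes depending on whether simultaneously consumed items are counted separately or as combinations.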

  10. pix2code: Generating Code from a Graphical User Interface Screenshot

    OpenAIRE

    Beltramelli, Tony

    2017-01-01

    Transforming a graphical user interface screenshot created by a designer into computer code is a typical task conducted by a developer in order to build customized software, websites, and mobile applications. In this paper, we show that deep learning methods can be leveraged to train a model end-to-end to automatically generate code from a single input image with over 77% accuracy for three different platforms (i.e. iOS, Android and web-based technologies).

  11. QR-codes as a tool to increase physical activity level among school children during class hours

    DEFF Research Database (Denmark)

    Christensen, Jeanette Reffstrup; Kristensen, Allan; Bredahl, Thomas Viskum Gjelstrup

    QR-codes as a tool to increase physical activity level among school children during class hours. Introduction: Danish students are no longer fulfilling recommendations for everyday physical activity. Since August 2014, Danish students in public schools have therefore been required to be physically active… the students' physical activity level during class hours. Methods: A before-after study was used to examine 12 students' physical activity level, measured with pedometers over six lessons: three lessons of traditional teaching and three lessons where QR-codes were used for orienteering in the school area… as old fashioned. The students also felt positive about being physically active in teaching. Discussion and conclusion: QR-codes as a tool for teaching are usable for making students more physically active in teaching. The students were excited about using QR-codes and they experienced good motivation…

  12. Improvement of Level-1 PSA computer code package -A study for nuclear safety improvement-

    International Nuclear Information System (INIS)

    Park, Chang Kyu; Kim, Tae Woon; Ha, Jae Joo; Han, Sang Hoon; Cho, Yeong Kyun; Jeong, Won Dae; Jang, Seung Cheol; Choi, Young; Seong, Tae Yong; Kang, Dae Il; Hwang, Mi Jeong; Choi, Seon Yeong; An, Kwang Il

    1994-07-01

    This year is the second year of the Government-sponsored Mid- and Long-Term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into three main activities: (1) methodology development in under-developed fields such as risk assessment technology for plant shutdown and external events, (2) computer code package development for Level-1 PSA, (3) application of new technologies to reactor safety assessment. First, in the area of PSA methodology development, foreign PSA reports on shutdown and external events have been reviewed and various PSA methodologies have been compared. The Level-1 PSA code KIRAP and the CCF analysis code COCOA are converted from KOS to Windows. A human reliability database has also been established this year. In the area of new technology applications, fuzzy set theory and entropy theory are used to estimate component life and to develop a new measure of uncertainty importance. Finally, in the field of applying PSA techniques to reactor regulation, a strategic study to develop the dynamic risk management tool PEPSI and the determination of inspection and test priorities for motor-operated valves based on risk importance worths have been carried out. (Author)

  13. Development of a computer code for low-and intermediate-level radioactive waste disposal safety assessment

    International Nuclear Information System (INIS)

    Park, J. W.; Kim, C. L.; Lee, E. Y.; Lee, Y. M.; Kang, C. H.; Zhou, W.; Kozak, M. W.

    2002-01-01

    A safety assessment code, called SAGE (Safety Assessment Groundwater Evaluation), has been developed to describe post-closure radionuclide releases and potential radiological doses for low- and intermediate-level radioactive waste (LILW) disposal in an engineered vault facility in Korea. The conceptual model implemented in the code is focused on the release of radionuclide from a gradually degrading engineered barrier system to an underlying unsaturated zone, thence to a saturated groundwater zone. The radionuclide transport equations are solved by spatially discretizing the disposal system into a series of compartments. Mass transfer between compartments is by diffusion/dispersion and advection. In all compartments, radionuclides are decayed either as a single-member chain or as multi-member chains. The biosphere is represented as a set of steady-state, radionuclide-specific pathway dose conversion factors that are multiplied by the appropriate release rate from the far field for each pathway. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly Graphical User Interface. An application is presented, which is compared against safety assessment results from the other computer codes, to benchmark the reliability of system-level conceptual modeling of the code
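    The compartment transport scheme described above can be illustrated with a minimal sketch: mass moves between discretized compartments by advection and is lost to radioactive decay. This is a generic explicit-Euler toy, not SAGE's actual solver, and all parameter values are invented.

    ```python
    # Minimal compartment-model sketch in the spirit of the description above:
    # radionuclide mass moves between discretized compartments by advection and
    # is lost to radioactive decay. Parameter values are illustrative only.
    import math

    def simulate(n_steps, dt, k_adv, lam, inventory):
        """inventory: masses per compartment (vault -> unsaturated -> saturated).
        k_adv: advective transfer rate between adjacent compartments [1/yr].
        lam: decay constant [1/yr]. Returns release-rate history from the last compartment."""
        release = []
        for _ in range(n_steps):
            flows = [k_adv * m for m in inventory]   # outflow of each compartment
            for i in range(len(inventory)):
                inflow = flows[i - 1] if i > 0 else 0.0
                inventory[i] += dt * (inflow - flows[i] - lam * inventory[i])
            release.append(flows[-1])                # flux out of the saturated zone
        return release

    rel = simulate(n_steps=1000, dt=0.1, k_adv=0.05,
                   lam=math.log(2) / 30.0, inventory=[1.0, 0.0, 0.0])
    print(max(rel))  # peak release rate (arbitrary units)
    ```

    Diffusion/dispersion terms and decay chains would add further coupling between compartments, but the bookkeeping pattern is the same.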

  14. Selection of a computer code for Hanford low-level waste engineered-system performance assessment

    International Nuclear Information System (INIS)

    McGrail, B.P.; Mahoney, L.A.

    1995-10-01

    Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest-ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for the evaluation of land disposal sites

  15. A Tough Call : Mitigating Advanced Code-Reuse Attacks at the Binary Level

    NARCIS (Netherlands)

    Veen, Victor Van Der; Goktas, Enes; Contag, Moritz; Pawoloski, Andre; Chen, Xi; Rawat, Sanjay; Bos, Herbert; Holz, Thorsten; Athanasopoulos, Ilias; Giuffrida, Cristiano

    2016-01-01

    Current binary-level Control-Flow Integrity (CFI) techniques are weak in determining the set of valid targets for indirect control flow transfers on the forward edge. In particular, the lack of source code forces existing techniques to resort to a conservative address-taken policy that…

  16. What to do with a Dead Research Code

    Science.gov (United States)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and likely deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better both for code authors and for astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third-party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  17. Generating Safety-Critical PLC Code From a High-Level Application Software Specification

    Science.gov (United States)

    2008-01-01

    The benefits of automatic-application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is…

  18. Lossless, Near-Lossless, and Refinement Coding of Bi-level Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren Otto

    1997-01-01

    We present general and unified algorithms for lossy/lossless coding of bi-level images. The compression is realized by applying arithmetic coding to conditional probabilities. As in the current JBIG standard, the conditioning may be specified by a template. For better compression, the more general… Introducing only a small amount of loss in halftoned test images, compression is increased by up to a factor of four compared with JBIG. Lossy, lossless, and refinement decoding speed and lossless encoding speed are less than a factor of two slower than JBIG. The (de)coding method is proposed as part of JBIG-2, an emerging international standard for lossless/lossy compression of bi-level images.
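    The core idea, arithmetic coding driven by template-conditioned probabilities, can be sketched as follows. This toy uses a 3-pixel causal template and adaptive counts, and merely accumulates the ideal code length instead of driving a real arithmetic coder; it is not the JBIG template or coder itself.

    ```python
    # Sketch of template-based context modeling for bi-level images: each pixel's
    # probability is conditioned on a small causal template of already-coded
    # neighbours, as in JBIG. A real coder would drive an arithmetic coder with
    # these adaptive probabilities; here we just accumulate the ideal code length.
    import math

    def ideal_code_length(image):
        h, w = len(image), len(image[0])
        counts = {}                       # context -> (count of 0s, count of 1s)
        bits = 0.0
        for y in range(h):
            for x in range(w):
                # 3-pixel causal template: left, above, above-left (0 off the edge)
                ctx = (
                    image[y][x - 1] if x > 0 else 0,
                    image[y - 1][x] if y > 0 else 0,
                    image[y - 1][x - 1] if x > 0 and y > 0 else 0,
                )
                c0, c1 = counts.get(ctx, (1, 1))   # Laplace-smoothed counts
                p = (c1 if image[y][x] else c0) / (c0 + c1)
                bits += -math.log2(p)               # ideal arithmetic-coding cost
                counts[ctx] = (c0 + (image[y][x] == 0), c1 + (image[y][x] == 1))
        return bits

    img = [[0, 0, 1, 1]] * 8  # highly structured image compresses well
    print(ideal_code_length(img), "bits for", 8 * 4, "pixels")
    ```

    As the counts adapt, repeated structure makes the conditional probabilities skewed and the per-pixel cost drops well below 1 bit.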

  19. Using machine-coded event data for the micro-level study of political violence

    Directory of Open Access Journals (Sweden)

    Jesse Hammond

    2014-07-01

    Machine-coded datasets likely represent the future of event data analysis. We assess the use of one of these datasets, the Global Database of Events, Language and Tone (GDELT), for the micro-level study of political violence by comparing it to two hand-coded conflict event datasets. Our findings indicate that GDELT should be used with caution for geo-spatial analyses at the subnational level: its overall correlation with hand-coded data is mediocre, and at the local level major issues of geographic bias exist in how events are reported. Overall, our findings suggest that due to these issues, researchers studying local conflict processes may want to wait for a more reliable geocoding method before relying too heavily on this set of machine-coded data.

  20. POPCYCLE: a computer code for calculating nuclear and fossil plant levelized life-cycle power costs

    International Nuclear Information System (INIS)

    Hardie, R.W.

    1982-02-01

    POPCYCLE, a computer code designed to calculate levelized life-cycle power costs for nuclear and fossil electrical generating plants is described. Included are (1) derivations of the equations and a discussion of the methodology used by POPCYCLE, (2) a description of the input required by the code, (3) a listing of the input for a sample case, and (4) the output for a sample case
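    The levelized life-cycle cost that such a code computes follows the standard definition: discounted total costs divided by discounted energy generated. The sketch below implements that textbook formula and is not necessarily POPCYCLE's exact formulation.

    ```python
    # Generic levelized life-cycle power cost: discounted total costs divided by
    # discounted energy generated. This is the standard textbook definition, not
    # necessarily the exact formulation implemented in POPCYCLE.
    def levelized_cost(costs, energy, rate):
        """costs[t], energy[t]: annual cost ($) and generation (kWh); rate: discount rate."""
        disc_cost = sum(c / (1 + rate) ** t for t, c in enumerate(costs))
        disc_energy = sum(e / (1 + rate) ** t for t, e in enumerate(energy))
        return disc_cost / disc_energy  # $/kWh

    # 3-year toy plant: build year, then two production years
    print(levelized_cost(costs=[1000.0, 100.0, 100.0],
                         energy=[0.0, 5000.0, 5000.0], rate=0.05))
    ```

    Capital, fuel, and operating costs for nuclear versus fossil plants would enter through the `costs` stream; the levelized figure lets the two plant types be compared on a single $/kWh basis.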

  1. BLOND, a building-level office environment dataset of typical electrical appliances

    Science.gov (United States)

    Kriechbaumer, Thomas; Jacobsen, Hans-Arno

    2018-03-01

    Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.

  2. BLOND, a building-level office environment dataset of typical electrical appliances.

    Science.gov (United States)

    Kriechbaumer, Thomas; Jacobsen, Hans-Arno

    2018-03-27

    Energy metering has gained popularity as conventional meters are replaced by electronic smart meters that promise energy savings and higher comfort levels for occupants. Achieving these goals requires a deeper understanding of consumption patterns to reduce the energy footprint: load profile forecasting, power disaggregation, appliance identification, startup event detection, etc. Publicly available datasets are used to test, verify, and benchmark possible solutions to these problems. For this purpose, we present the BLOND dataset: continuous energy measurements of a typical office environment at high sampling rates with common appliances and load profiles. We provide voltage and current readings for aggregated circuits and matching fully-labeled ground truth data (individual appliance measurements). The dataset contains 53 appliances (16 classes) in a 3-phase power grid. BLOND-50 contains 213 days of measurements sampled at 50kSps (aggregate) and 6.4kSps (individual appliances). BLOND-250 consists of the same setup: 50 days, 250kSps (aggregate), 50kSps (individual appliances). These are the longest continuous measurements at such high sampling rates and fully-labeled ground truth we are aware of.

  3. The management-retrieval code of nuclear level density sub-library (CENPL-NLD)

    International Nuclear Information System (INIS)

    Ge Zhigang; Su Zongdi; Huang Zhongfu; Dong Liaoyuan

    1995-01-01

    The management-retrieval code of the Nuclear Level Density (NLD) sub-library is presented. It provides two retrieval modes: single nucleus (SN) and neutron reaction (NR); the latter contains four retrieval types. The code can not only retrieve level density parameters and the data related to the level density, but also calculate the relevant quantities using different level density parameters and compare the calculated results with related data, in order to help the user select level density parameters

  4. Working research codes into fluid dynamics education: a science gateway approach

    Science.gov (United States)

    Mason, Lachlan; Hetherington, James; O'Reilly, Martin; Yong, May; Jersakova, Radka; Grieve, Stuart; Perez-Suarez, David; Klapaukh, Roman; Craster, Richard V.; Matar, Omar K.

    2017-11-01

    Research codes are effective for illustrating complex concepts in educational fluid dynamics courses: compared to textbook examples, an interactive three-dimensional visualisation can bring a problem to life! Various barriers, however, prevent the adoption of research codes in teaching: codes are typically created for highly specific 'once-off' calculations and, as such, have no user interface and a steep learning curve. Moreover, a code may require access to high-performance computing resources that are not readily available in the classroom. This project allows academics to rapidly work research codes into their teaching via a minimalist 'science gateway' framework. The gateway is a simple, yet flexible, web interface allowing students to construct and run simulations, as well as view and share their output. Behind the scenes, the common operations of job configuration, submission, monitoring and post-processing are customisable at the level of shell scripting. In this talk, we demonstrate the creation of an example teaching gateway connected to the Code BLUE fluid dynamics software. Student simulations can be run via a third-party cloud computing provider or a local high-performance cluster. EPSRC, UK, MEMPHIS program Grant (EP/K003976/1), RAEng Research Chair (OKM).

  5. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    International Nuclear Information System (INIS)

    Takahashi, Tomoyuki; Takeda, Seiji; Kimura, Hideo

    2001-01-01

    It is indicated that some types of radioactive material generated in the development and utilization of nuclear energy do not need to be subject to regulatory control because they can give rise only to trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance', and the corresponding levels of radionuclide concentration are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected; where realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were used to validate the results obtained by the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), using the Monte Carlo technique for carrying out the stochastic calculations. This report describes the structure and user information for execution of the PASCLR code. (author)
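    The stochastic approach can be illustrated with a generic Monte Carlo sketch: sample uncertain parameters, propagate them through a dose model, and read off percentiles of the unit-dose distribution. The dose model and distributions below are purely illustrative, not the actual PASCLR pathway models.

    ```python
    # Sketch of the stochastic (Monte Carlo) approach described above: sample
    # uncertain parameters, propagate them through a dose model, and examine the
    # distribution of unit doses. The dose model and distributions are purely
    # illustrative, not the actual PASCLR pathway models.
    import random

    random.seed(1)

    def unit_dose(dose_factor, intake):
        return dose_factor * intake       # Sv per unit concentration of cleared material

    samples = []
    for _ in range(10000):
        dose_factor = random.lognormvariate(mu=-18.0, sigma=0.5)   # Sv/Bq, illustrative
        intake = random.uniform(50.0, 200.0)                       # Bq per Bq/g, illustrative
        samples.append(unit_dose(dose_factor, intake) * 1e3)       # Sv -> mSv

    samples.sort()
    print("median:", samples[len(samples) // 2])
    print("95th percentile:", samples[int(0.95 * len(samples))])
    # A clearance level would follow as: dose criterion / chosen percentile of unit dose
    ```

    Comparing the median against a high percentile shows how conservative the deterministic parameter choices are relative to the full distribution.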

  6. Improvement of level-1 PSA computer code package

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase-I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled `The Improvement of Level-1 PSA Computer Codes`, is divided into two main activities : (1) improvement of level-1 PSA methodology, (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed, considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established; impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility of on-line maintenance and to prioritize in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improving the reliability of emergency diesel generators (EDG) of nuclear power plants. To support RCA and RCM analyses, two software programs are developed, EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs.

  7. Improvement of level-1 PSA computer code package

    International Nuclear Information System (INIS)

    Kim, Tae Woon; Park, C. K.; Kim, K. Y.; Han, S. H.; Jung, W. D.; Chang, S. C.; Yang, J. E.; Sung, T. Y.; Kang, D. I.; Park, J. H.; Lee, Y. H.; Kim, S. H.; Hwang, M. J.; Choi, S. Y.

    1997-07-01

    This year is the fifth (final) year of phase-I of the Government-sponsored Mid- and Long-term Nuclear Power Technology Development Project. The scope of this subproject, titled 'The Improvement of Level-1 PSA Computer Codes', is divided into two main activities : (1) improvement of level-1 PSA methodology, (2) development of methodology for applying PSA techniques to the operation and maintenance of nuclear power plants. The level-1 PSA code KIRAP is converted to the PC-Windows environment. To improve the efficiency of performing PSA, a fast cutset generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. Using about 30 foreign generic data sources, a generic component reliability database (GDB) is developed, considering dependency among source data. A computer program which handles dependency among data sources is also developed, based on a three-stage Bayesian updating technique. Common cause failure (CCF) analysis methods are reviewed and a CCF database is established; impact vectors can be estimated from this CCF database. A computer code, called MPRIDP, which handles the CCF database is also developed. A CCF analysis reflecting plant-specific defensive strategy against CCF events is also performed. A risk monitor computer program, called Risk Monster, is being developed for application to the operation and maintenance of nuclear power plants. The PSA application technique is applied to review the feasibility of on-line maintenance and to prioritize in-service testing (IST) of motor-operated valves (MOV). Finally, root cause analysis (RCA) and reliability-centered maintenance (RCM) technologies are adopted and applied to improving the reliability of emergency diesel generators (EDG) of nuclear power plants. To support RCA and RCM analyses, two software programs are developed, EPIS and RAM Pro. (author). 129 refs., 20 tabs., 60 figs

  8. Confidentiality of 2D Code using Infrared with Cell-level Error Correction

    Directory of Open Access Journals (Sweden)

    Nobuyuki Teraura

    2013-03-01

    Optical information media printed on paper use printing materials that absorb visible light. A conventional 2D code may be encrypted, but it can still be copied. Hence, we envisage an information medium that cannot be copied and thereby offers high security. At the surface, a normal 2D code is printed. The inner layers consist of 2D codes printed with a variety of materials, each absorbing certain distinct wavelengths, to form a multilayered 2D code. Information can be distributed among the 2D codes forming the inner layers of the multiplex. Additionally, error correction at the cell level can be introduced.

  9. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
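    The error-correction notion used above can be made concrete with nearest-codeword decoding under Hamming distance, a standard construction from coding theory; the codebook below is a toy example, not an actual receptive field code.

    ```python
    # Nearest-codeword decoding under Hamming distance: a noisy binary response
    # pattern is mapped to the closest stored codeword. The codebook here is a
    # toy, not an actual receptive field code from the paper.
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def decode(word, codebook):
        return min(codebook, key=lambda c: hamming(word, c))

    codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
    noisy = (1, 1, 0, 0, 0)          # one bit flipped from the second codeword
    print(decode(noisy, codebook))   # → (1, 1, 1, 0, 0)
    ```

    Whether such decoding corrects errors reliably depends on the pairwise distances between codewords, which is exactly the property the paper examines for receptive field codes versus random comparison codes.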

  10. Feature coding for image representation and recognition

    CERN Document Server

    Huang, Yongzhen

    2015-01-01

    This brief presents a comprehensive introduction to feature coding, which serves as a key module in the typical object recognition pipeline. The text offers a rich blend of theory and practice while reflecting the recent developments in feature coding, covering the following five aspects: (1) a review of the state of the art, analyzing the motivations and mathematical representations of various feature coding methods; (2) an exploration of how various feature coding algorithms have evolved over the years; (3) a summary of the main characteristics of typical feature coding algorithms, categorizing them accordingly; (4) D…

  11. Confidence level in the calculations of HCDA consequences using large codes

    International Nuclear Information System (INIS)

    Nguyen, D.H.; Wilburn, N.P.

    1979-01-01

    The probabilistic approach to nuclear reactor safety is playing an increasingly significant role. For the liquid-metal fast breeder reactor (LMFBR) in particular, the ultimate application of this approach could be to determine the probability of achieving the goal of a specific line-of-assurance (LOA). Meanwhile, a more pressing problem is that of quantifying the uncertainty in a consequence calculated for a hypothetical core disruptive accident (HCDA) using large codes. Such uncertainty arises from imperfect modeling of phenomenology and/or from inaccuracy in input data. A method is presented to determine the confidence level in consequences calculated by a large computer code due to the known uncertainties in input variables. A particular application was made to the initial time of pin failure in a transient-overpower HCDA calculated by the code MELT-IIIA, in order to demonstrate the method. A probability distribution function (pdf) for the time of failure was first constructed; then the confidence level for predicting this failure parameter within a desired range was determined

  12. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment-model approach, has been developed to calculate the near-field source term of a high-level-waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock

  13. Final technical position on documentation of computer codes for high-level waste management

    International Nuclear Information System (INIS)

    Silling, S.A.

    1983-06-01

    Guidance is given for the content of documentation of computer codes which are used in support of a license application for high-level waste disposal. The guidelines cover theoretical basis, programming, and instructions for use of the code

  14. Use of computer code for dose distribution studies in A 60CO industrial irradiator

    Science.gov (United States)

    Piña-Villalpando, G.; Sloan, D. P.

    1995-09-01

    This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry, using red acrylic pellets. The typical product was packages of Petri dishes, apparent density 0.13 g/cm3; that product was chosen because of its uniform size, large quantity, and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique; build-up factor fitting was done by geometrical progression, and combinatorial geometry is used for system description. The main modifications to the code concerned source simulation, using point sources instead of pencils, and an energy spectrum with anisotropic emission was included. The results were, for the maximum dose, a calculated value (18.2 kGy) 8% higher than the experimental average value (16.8 kGy); for the minimum dose, a calculated value (13.8 kGy) 3% lower than the experimental average value (14.3 kGy).
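    A point-kernel calculation of the kind QAD-CGGP performs can be sketched generically: represent the source as discrete points and sum attenuated, build-up-corrected contributions at the detector. The geometry, attenuation coefficient, and linear build-up factor below are illustrative assumptions, not the actual irradiator model.

    ```python
    # Point-kernel sketch in the spirit of the QAD-CGGP approach described above:
    # the source is represented as discrete point sources, and dose at a detector
    # sums build-up-corrected exp(-mu*r)/(4*pi*r^2) attenuation terms. All numbers
    # are illustrative, not the actual irradiator model.
    import math

    def dose_rate(points, detector, mu, buildup=lambda mur: 1.0 + mur):
        """points: list of (x, y, z, source_strength); a simple linear build-up
        factor is assumed. mu: linear attenuation coefficient [1/length]."""
        total = 0.0
        for x, y, z, s in points:
            dx, dy, dz = detector[0] - x, detector[1] - y, detector[2] - z
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            mur = mu * r
            total += s * buildup(mur) * math.exp(-mur) / (4.0 * math.pi * r * r)
        return total

    # Pencil source approximated by 5 point sources distributed along z
    pencil = [(0.0, 0.0, z, 1.0 / 5.0) for z in (-0.4, -0.2, 0.0, 0.2, 0.4)]
    print(dose_rate(pencil, detector=(1.0, 0.0, 0.0), mu=0.06))
    ```

    The paper's modification works in the opposite direction, replacing source "pencils" with point sources; either way, the dose field is a superposition of point-kernel contributions.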

  15. Performance of JPEG Image Transmission Using Proposed Asymmetric Turbo Code

    Directory of Open Access Journals (Sweden)

    Siddiqi Mohammad Umar

    2007-01-01

    This paper gives the results of a simulation study on the performance of JPEG image transmission over AWGN and Rayleigh fading channels using typical and proposed asymmetric turbo codes for error-control coding. The baseline JPEG algorithm is used to compress a QCIF "Suzie" image. The recursive systematic convolutional (RSC) encoder with generator polynomials, that is, (13/11) in decimal, and a 3G interleaver are used for the typical WCDMA and CDMA2000 turbo codes. The proposed asymmetric turbo code uses generator polynomials, that is, (13/11; 13/9) in decimal, and a code-matched interleaver. The effect of the interleaver in the proposed asymmetric turbo code is studied using weight distribution and simulation. The simulation results and performance bound for the proposed asymmetric turbo code for the frame length , code rate with Log-MAP decoder over the AWGN channel are compared with the typical system. From the simulation results, it is observed that image transmission using the proposed asymmetric turbo code performs better than that with the typical system.

  16. Benchmark calculation programme concerning typical LMFBR structures

    International Nuclear Information System (INIS)

    Donea, J.; Ferrari, G.; Grossetie, J.C.; Terzaghi, A.

    1982-01-01

    This programme, which is part of a comprehensive activity aimed at resolving difficulties encountered in using design procedures based on ASME Code Case N-47, should build confidence in computer codes that are supposed to provide a realistic prediction of LMFBR component behaviour. The calculations started with static analysis of typical structures made of nonlinear materials stressed by cyclic loads. Fluid-structure interaction analysis is also being considered. The reasons for and details of the different benchmark calculations are described, the results obtained are commented on, and future computational exercises are indicated

  17. Economic levels of thermal resistance for house envelopes: Considerations for a national energy code

    International Nuclear Information System (INIS)

    Swinton, M.C.; Sander, D.M.

    1992-01-01

    A code for energy efficiency in new buildings is being developed by the Standing Committee on Energy Conservation in Buildings. The precursor to the new code used national average energy rates and construction costs to determine economic optimum levels of insulation, and it is believed that this resulted in prescription of sub-optimum insulation levels in any region of Canada where energy or construction costs differ significantly from the average. A new approach for determining optimum levels of thermal insulation is proposed. The analytic techniques use month-by-month energy balances of heat loss and gain; use gain load ratio correlation (GLR) for predicting the fraction of useable free heat; increase confidence in the savings predictions for above grade envelopes; can take into account solar effects on windows; and are compatible with below-grade heat loss analysis techniques in use. A sensitivity analysis was performed to determine whether reasonable variations in house characteristics would cause significant differences in savings predicted. The life cycle costing technique developed will allow the selection of thermal resistances that are commonly met by industry. Environmental energy cost multipliers can be used with the proposed methodology, which could have a minor role in encouraging the next higher level of energy efficiency. 11 refs., 6 figs., 2 tabs
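    The life-cycle-costing idea can be sketched with a toy optimization: choose the thermal resistance (RSI) that minimizes insulation cost plus discounted heating cost. All prices and climate numbers below are invented, and the simple degree-hour model stands in for the month-by-month energy balance described above.

    ```python
    # Toy life-cycle-cost optimization for wall thermal resistance: pick the RSI
    # value minimizing insulation cost plus discounted heating cost. All prices,
    # climate, and cost figures are illustrative, not the code's economic data.
    def life_cycle_cost(rsi, area=100.0, degree_hours=100000.0,
                        energy_price=0.03, insul_cost_per_rsi=2.0,
                        years=25, rate=0.05):
        """rsi: thermal resistance [m2*K/W]; degree_hours: annual heating degree-hours [K*h]."""
        annual_heat_kwh = area * degree_hours / rsi / 1000.0   # U = 1/RSI; W*h -> kWh
        pw = sum(1.0 / (1 + rate) ** t for t in range(1, years + 1))  # present-worth factor
        return area * insul_cost_per_rsi * rsi + pw * energy_price * annual_heat_kwh

    candidates = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
    best = min(candidates, key=life_cycle_cost)
    print(best, round(life_cycle_cost(best), 2))
    ```

    Restricting `candidates` to resistances commonly met by industry mirrors the selection approach described above; regional energy and construction costs shift the optimum, which is the paper's central point.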

  18. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Although there are various source code plagiarism detection approaches, only a few focus on low-level representation for measuring similarity; most rely solely on lexical token sequences extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it retains only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach that relies on low-level representation. As a case study, we focus on .NET programming languages with Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity; according to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. In our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
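
    The alignment step can be illustrated with a plain (non-adaptive) Smith-Waterman local alignment over opcode sequences. The scoring weights and the CIL-like opcode streams below are purely illustrative; Lim et al.'s adaptive variant adjusts the scoring per region, which is omitted here:

```python
def local_align_score(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences.
    Returns the best local similarity score (0 if nothing aligns)."""
    cols = len(b) + 1
    prev = [0] * cols
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * cols
        for j in range(1, cols):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best

# Two CIL-like opcode streams; the second inserts a no-op, a classic
# plagiarism disguise that is invisible at the instruction-semantics level.
orig = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
plag = ["nop", "ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
print(local_align_score(orig, plag))
```

    Because the alignment is local, the inserted nop costs nothing: the shared six-instruction core still aligns at full score (12 with these weights).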

  19. FBCOT: a fast block coding option for JPEG 2000

    Science.gov (United States)

    Taubman, David; Naman, Aous; Mathew, Reji

    2017-09-01

    Based on the EBCOT algorithm, JPEG 2000 finds application in many fields, including high performance scientific, geospatial and video coding applications. Beyond digital cinema, JPEG 2000 is also attractive for low-latency video communications. The main obstacle for some of these applications is the relatively high computational complexity of the block coder, especially at high bit-rates. This paper proposes a drop-in replacement for the JPEG 2000 block coding algorithm, termed FBCOT (Fast Block Coding with Optimized Truncation), achieving much higher encoding and decoding throughputs with only modest loss in coding efficiency.

  20. A quantitative evaluation of seismic margin of typical sodium piping

    International Nuclear Information System (INIS)

    Morishita, Masaki

    1999-05-01

    It is widely recognized that the current seismic design methods for piping involve a large amount of safety margin. From this viewpoint, a series of seismic analyses and evaluations with various design codes were made on typical LMFBR main sodium piping systems. The actual capability against seismic loads was also estimated for the piping systems. Margins contained in the current codes were quantified based on these results, and the potential benefits and impacts to piping seismic design were assessed for a possible relaxation of the current code allowables. From the study, the following points were clarified: (1) A combination of inelastic time history analysis and true (without margin) strength capability permits seismic loads several to twenty times larger than the allowable load of the current methods. (2) The new rule of the ASME is relatively compatible with the results of the inelastic analysis evaluation; hence, this new rule might be a goal for the relaxation of the seismic design rule. (3) With this relaxation, seismic design accommodations such as equipping piping with a large number of seismic supports may become unnecessary. (author)

  1. BIRTH: a beam deposition code for non-circular tokamak plasmas

    International Nuclear Information System (INIS)

    Otsuka, Michio; Nagami, Masayuki; Matsuda, Toshiaki

    1982-09-01

    A new beam deposition code has been developed which is capable of calculating fast ion deposition profiles including the orbit correction. The code incorporates any injection geometry and a non-circular cross section plasma with a variable elongation and an outward shift of the magnetic flux surface. Typical CPU time on a DEC-10 computer is 10-20 seconds with the orbit correction and 5-10 seconds without it, shorter by an order of magnitude than that of other codes, e.g., Monte Carlo codes. The power deposition profile calculated by this code is in good agreement with that calculated by a Monte Carlo code. (author)

  2. Advancing methods for reliably assessing motivational interviewing fidelity using the motivational interviewing skills code.

    Science.gov (United States)

    Lord, Sarah Peregrine; Can, Doğan; Yi, Michael; Marin, Rebeca; Dunn, Christopher W; Imel, Zac E; Georgiou, Panayiotis; Narayanan, Shrikanth; Steyvers, Mark; Atkins, David C

    2015-02-01

    The current paper presents novel methods for collecting MISC data and accurately assessing the reliability of behavior codes at the level of the utterance. The MISC 2.1 was used to rate MI interviews from five randomized trials targeting alcohol and drug use. Sessions were coded at the utterance level. Utterance-based coding reliability was estimated using three methods and compared to traditional reliability estimates of session tallies. Session-level reliability was generally higher than reliability using utterance-based codes, suggesting that typical methods for MISC reliability may be biased. These novel methods in MI fidelity data collection and reliability assessment provided rich data for therapist feedback and further analyses. Beyond implications for fidelity coding, utterance-level coding schemes may elucidate important elements in the counselor-client interaction that could inform theories of change and the practice of MI. Copyright © 2015 Elsevier Inc. All rights reserved.
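
    Utterance-level reliability ultimately comes down to chance-corrected agreement over paired codes. A minimal Cohen's kappa sketch with made-up MISC-style labels (the paper's three estimation methods go beyond simple kappa):

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two raters' utterance-level codes."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    observed = sum(x == y for x, y in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    labels = set(codes_a) | set(codes_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Two raters coding ten utterances with illustrative MISC-style labels
# (OQ = open question, RE = reflection, GI = giving information).
rater1 = ["OQ", "RE", "RE", "GI", "OQ", "RE", "GI", "OQ", "RE", "GI"]
rater2 = ["OQ", "RE", "GI", "GI", "OQ", "RE", "GI", "OQ", "RE", "RE"]
print(round(cohens_kappa(rater1, rater2), 3))
```

    Here the raters agree on 8 of 10 utterances, but kappa is about 0.70 once chance agreement is corrected for, which is why raw percent agreement overstates reliability.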

  3. Development of SAGE, A computer code for safety assessment analyses for Korean Low-Level Radioactive Waste Disposal

    International Nuclear Information System (INIS)

    Zhou, W.; Kozak, Matthew W.; Park, Joowan; Kim, Changlak; Kang, Chulhyung

    2002-01-01

    This paper describes a computer code, called SAGE (Safety Assessment Groundwater Evaluation), to be used for evaluation of the concept for low-level waste disposal in the Republic of Korea (ROK). The conceptual model in the code is focused on releases from a gradually degrading engineered barrier system to an underlying unsaturated zone, thence to a saturated groundwater zone. Doses can be calculated for several biosphere systems including drinking contaminated groundwater, and subsequent contamination of foods, rivers, lakes, or the ocean by that groundwater. The flexibility of the code will permit both generic analyses in support of design and site development activities, and straightforward modification to permit site-specific and design-specific safety assessments of a real facility as progress is made toward implementation of a disposal site. In addition, the code has been written to easily interface with more detailed codes for specific parts of the safety assessment. In this way, the code's capabilities can be significantly expanded as needed. The code has the capability to treat input parameters either deterministically or probabilistically. Parameter input is achieved through a user-friendly Graphical User Interface.

  4. The development and application of a sub-channel code in ocean environment

    International Nuclear Information System (INIS)

    Wu, Pan; Shan, Jianqiang; Xiang, Xiong; Zhang, Bo; Gou, Junli; Zhang, Bin

    2016-01-01

    Highlights: • A sub-channel code named ATHAS/OE is developed for nuclear reactors in the ocean environment. • ATHAS/OE is verified against another modified sub-channel code based on COBRA-IV. • ATHAS/OE is used to analyze the thermal hydraulics of a typical SMR in heaving and rolling motion. • Calculation results show that ocean conditions affect the thermal hydraulics of a reactor significantly. - Abstract: An upgraded version of the ATHAS sub-channel code, ATHAS/OE, is developed for the investigation of the thermal hydraulic behavior of a nuclear reactor core in the ocean environment, with consideration of heaving and rolling motion effects. The code is verified against another modified sub-channel code based on COBRA-IV and used to analyze the thermal hydraulic characteristics of a typical SMR under heaving and rolling motion conditions. The calculation results show that heaving and rolling motion affect the thermal hydraulic behavior of a reactor significantly.

  5. Aeroelastic Calculations Using CFD for a Typical Business Jet Model

    Science.gov (United States)

    Gibbons, Michael D.

    1996-01-01

    Two time-accurate Computational Fluid Dynamics (CFD) codes were used to compute several flutter points for a typical business jet model. The model consisted of a rigid fuselage with a flexible semispan wing and was tested in the Transonic Dynamics Tunnel at NASA Langley Research Center, where experimental flutter data were obtained from M∞ = 0.628 to M∞ = 0.888. The computational results were computed using CFD codes based on the inviscid TSD equation (CAP-TSD) and the Euler/Navier-Stokes equations (CFL3D-AE). Comparisons are made between analytical results and with experiment where appropriate. The results presented here show that the Navier-Stokes method is required near the transonic dip due to the strong viscous effects, while the TSD and Euler methods used here provide good results at the lower Mach numbers.

  6. Impact of the Level of State Tax Code Progressivity on Children's Health Outcomes

    Science.gov (United States)

    Granruth, Laura Brierton; Shields, Joseph J.

    2011-01-01

    This research study examines the impact of the level of state tax code progressivity on selected children's health outcomes. Specifically, it examines the degree to which a state's tax code ranking along the progressive-regressive continuum relates to percentage of low birthweight babies, infant and child mortality rates, and percentage of…

  7. Benchmarking NNWSI flow and transport codes: COVE 1 results

    International Nuclear Information System (INIS)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs

  8. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty

  9. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    Science.gov (United States)

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  10. Development of a detailed core flow analysis code for prismatic fuel reactors

    International Nuclear Information System (INIS)

    Bennett, R.G.

    1990-01-01

    The development of a computer code for the analysis of the detailed flow of helium in prismatic fuel reactors is reported. The code, called BYPASS, solves a finite difference control volume formulation of the compressible, steady-state fluid flow in highly cross-connected flow paths typical of the Modular High-Temperature Gas Cooled Reactor (MHTGR). The discretization of the flow in a core region typically considers the main coolant flow paths, the bypass gap flow paths, and the crossflow connections between them. 16 refs., 5 figs

  11. The Calculation of Flooding Level using CFX Code

    International Nuclear Information System (INIS)

    Oh, Seo Bin; Kim, Keon Yeop; Lee, Hyung Ho

    2015-01-01

    The plant design should consider internal flooding by postulated pipe ruptures, component failures, actuation of spray systems, and improper system alignment. Flooding causes failure of safety-related equipment and affects the integrity of the structure. Safety-related equipment should be installed above the flood level for protection against flooding effects. Conservative estimates of the flood level are important when a DBA occurs. The flooding level can be calculated simply by applying Bernoulli's equation. In this study, however, a realistic calculation is performed with the ANSYS CFX code. In a calculation with CFX, air-core vortex phenomena and turbulent flow can be simulated, which cannot be calculated analytically. The flooding level is evaluated by analytical calculation and CFX analysis for an assumed condition. The flood level is calculated as 0.71 m analytically and 1.1 m with the CFX simulation. The two results are similar, but the analytical calculation is not conservative: many factors reduce the drainage capacity, such as air-core vortices, intake of air, and turbulent flow. Therefore, when the flood level is evaluated by analytical calculation, a sufficient safety margin should be considered
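
    The analytical estimate referred to above amounts to balancing the break inflow against orifice-type drainage derived from Bernoulli's equation. A sketch with hypothetical numbers; this simple balance is precisely what omits the vortex and air-intake effects that the CFX simulation captures:

```python
G = 9.81  # m/s^2, gravitational acceleration

def equilibrium_flood_level(q_in, drain_area, discharge_coeff=0.6):
    """Steady-state water depth h over a floor drain from the orifice
    equation q_in = Cd * A * sqrt(2 * g * h) (Bernoulli), solved for h."""
    return (q_in / (discharge_coeff * drain_area)) ** 2 / (2 * G)

# Hypothetical break flow of 0.05 m^3/s through a 0.03 m^2 floor drain.
h = equilibrium_flood_level(0.05, 0.03)
print(f"equilibrium flood level ~ {h:.2f} m")
```

    Any effect that lowers the effective discharge coefficient (air entrainment, vortexing) raises the equilibrium level, which is why the analytical figure needs a safety margin.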

  12. A probabilistic assessment code system for derivation of clearance levels of radioactive materials. PASCLR user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Tomoyuki [Kyoto Univ., Kumatori, Osaka (Japan). Research Reactor Inst; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-01-01

    Some types of radioactive material generated in the development and utilization of nuclear energy do not need to be subject to regulatory control because they can only give rise to trivial radiation hazards. The process of removing such materials from regulatory control is called 'clearance'. The corresponding levels of radionuclide concentration are called 'clearance levels'. In the Nuclear Safety Commission's discussion, a deterministic approach was applied to derive the clearance levels, which are the concentrations of radionuclides in a cleared material equivalent to an individual dose criterion. Basically, realistic parameter values were selected; if realistic values could not be defined, reasonably conservative values were selected. Additionally, stochastic approaches were performed to validate the results obtained by the deterministic calculations. We have developed a computer code system, PASCLR (Probabilistic Assessment code System for derivation of Clearance Levels of Radioactive materials), using the Monte Carlo technique for carrying out the stochastic calculations. This report describes the structure and user information for execution of the PASCLR code. (author)
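
    PASCLR's pathway models are far more elaborate, but the bare Monte Carlo idea can be sketched: sample an uncertain dose-per-unit-concentration factor and invert it against the dose criterion to get a distribution of clearance levels. The lognormal parameters below are entirely hypothetical:

```python
import math
import random
import statistics

random.seed(1)

DOSE_CRITERION = 10e-6   # Sv/year, a trivial-dose criterion

def sample_clearance_levels(n=10000):
    """Clearance concentration C = criterion / DCF, with a hypothetical
    lognormal dose factor DCF in Sv/year per Bq/g (median 2e-6)."""
    levels = []
    for _ in range(n):
        dcf = random.lognormvariate(math.log(2e-6), 0.5)
        levels.append(DOSE_CRITERION / dcf)
    return levels

levels = sorted(sample_clearance_levels())
median = statistics.median(levels)
p5 = levels[len(levels) // 20]   # conservative lower (5th) percentile
print(f"median {median:.1f} Bq/g, 5th percentile {p5:.1f} Bq/g")
```

    The deterministic derivation plays the role of the central estimate; the stochastic run shows how far a conservative lower percentile sits below it.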

  13. Determination of neutron energy spectrum at a pneumatic rabbit station of a typical swimming pool type material test research reactor

    International Nuclear Information System (INIS)

    Malkawi, S.R.; Ahmad, N.

    2002-01-01

    The method of multiple foil activation was used to measure the neutron energy spectrum experimentally at a rabbit station of Pakistan Research Reactor-1 (PARR-1), which is a typical swimming pool type material test research reactor. The computer codes MSITER and SANDBP were used to adjust the spectrum. The prior information required by the adjustment codes was obtained by modelling the core and its surroundings in three dimensions using the one-dimensional transport theory code WIMS-D/4 and the multidimensional finite difference diffusion theory code CITATION. The input spectrum covariance information required by the MSITER code was also calculated from the CITATION output. A comparison between calculated and adjusted spectra shows good agreement

  14. Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels

    Energy Technology Data Exchange (ETDEWEB)

    Alamino, Roberto C; Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)

    2007-10-12

    Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases.
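
    Concretely, an LDPC code is defined by a sparse parity-check matrix, and decoding succeeds when the syndrome is driven to zero. A toy Python illustration (a real regular LDPC matrix is sparse with thousands of columns, and the statistical-mechanics analysis concerns the typical member of the code ensemble, not one small instance):

```python
# Toy parity-check matrix: each column participates in exactly 2 checks.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(h, word):
    """Parity checks mod 2; an all-zero syndrome means a valid codeword."""
    return [sum(hij * wj for hij, wj in zip(row, word)) % 2 for row in h]

codeword = [1, 1, 0, 0, 1, 1]   # satisfies every check of H
received = codeword.copy()
received[0] ^= 1                # one channel flip
print(syndrome(H, codeword), syndrome(H, received))  # [0, 0, 0] [1, 0, 1]
```

    The nonzero checks localize the flipped bit; the message-passing decoders whose typical behaviour is analysed here exploit exactly this structure at scale.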

  15. Statistical mechanics analysis of LDPC coding in MIMO Gaussian channels

    International Nuclear Information System (INIS)

    Alamino, Roberto C; Saad, David

    2007-01-01

    Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases

  16. A code for structural analysis of fuel assemblies

    International Nuclear Information System (INIS)

    Hayashi, I.M.V.; Perrotta, J.A.

    1988-08-01

    The code ELCOM, for the matrix analysis of tubular structures coupled by rigid spacers, typical of PWR fuel elements, is presented. ELCOM performs a static structural analysis, in which the displacements and internal forces of each tubular structure are obtained at the joints with the spacers, and also obtains the natural frequencies and vibrational modes of an equivalent integrated structure. The ELCOM result is compared to a PWR fuel element structural analysis obtained in a published paper. (author) [pt

  17. Level III Reliability methods feasible for complex structures

    NARCIS (Netherlands)

    Waarts, P.H.; Boer, A. de

    2001-01-01

    The paper describes the comparison between three types of reliability methods: code-type level I as used by a designer, full level I, and a level III method. Two cases that are typical for civil engineering practice are considered: a cable-stayed bridge subjected to traffic load and the installation of a soil retaining sheet

  18. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  19. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  20. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    With the advent of sequential arithmetic coding, the focus of highly efficient lossless data compression is placed on modelling the data. Rissanen's Algorithm Context provided an elegant solution to universal coding with optimal convergence rate. Context-based arithmetic coding laid the grounds for ...

  1. FIRAC - a computer code to predict fire accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Foster, R.D.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire. A basic material transport capability that features the effects of convection, deposition, entrainment, and filtration of material is included. The interrelated effects of filter plugging, heat transfer, gas dynamics, and material transport are taken into account. In this paper the authors summarize the physical models used to describe the gas dynamics, material transport, and heat transfer processes. They also illustrate how a typical facility is modeled using the code

  2. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR code, which is short for quick response code, was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  3. LiTrack: A Fast Longitudinal Phase Space Tracking Code with Graphical User Interface

    International Nuclear Information System (INIS)

    Bane, K.L.F.

    2005-01-01

    Linac-based light sources and linear colliders typically apply longitudinal phase space manipulations in their design, including electron bunch compression and wakefield-induced energy spread control. Several computer codes handle such issues, but most also require detailed information on the transverse focusing lattice. In fact, in most linear accelerators, the transverse distributions do not significantly affect the longitudinal ones, and can be ignored initially. This allows the use of a fast 2D code to study longitudinal aspects without time-consuming considerations of the transverse focusing. LiTrack is based on a 15-year-old code (same name) originally written by one of us (KB), which is now a Matlab [1] code with additional features, such as a graphical user interface, prompt output plotting, and functional call within a script. This single-bunch tracking code includes RF acceleration, bunch compression to 3rd order, geometric and resistive short-range wakefields, aperture limits, synchrotron radiation, and flexible output plotting. The code was used to design both the LCLS [2] and the SPPS [3] projects at SLAC and typically runs 10^5 particles in < 1 minute. We describe the features, show some examples, and provide free access to the code
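
    The kind of 2D (z, delta) map such a tracker composes can be sketched in a few lines: an off-crest RF section imprints an energy chirp, and a chicane's R56 converts the chirp into compression. All parameters below are hypothetical, and wakefields and synchrotron radiation, which LiTrack does model, are ignored:

```python
import math
import random
import statistics

random.seed(0)

C = 299792458.0
K_RF = 2 * math.pi * 2.856e9 / C   # S-band RF wavenumber, rad/m

def rf_chirp(particles, volt_over_e0, phase_rad):
    """RF map on (z, delta): each particle gains relative energy
    (V/E0) * cos(phase + k*z), the core element of a 2D longitudinal tracker."""
    return [(z, d + volt_over_e0 * math.cos(phase_rad + K_RF * z))
            for z, d in particles]

def chicane(particles, r56):
    """First-order bunch compressor: z -> z + R56 * delta."""
    return [(z + r56 * d, d) for z, d in particles]

# Gaussian bunch: 1 mm rms length, 1e-4 rms uncorrelated energy spread.
bunch = [(random.gauss(0, 1e-3), random.gauss(0, 1e-4)) for _ in range(5000)]
sigma_before = statistics.pstdev([z for z, _ in bunch])

bunch = rf_chirp(bunch, 0.1, math.pi / 2)   # off-crest: delta ~ -0.1 * k * z
bunch = chicane(bunch, 1 / (0.1 * K_RF))    # R56 chosen to cancel the chirp
sigma_after = statistics.pstdev([z for z, _ in bunch])
print(f"rms bunch length: {sigma_before * 1e3:.2f} mm -> {sigma_after * 1e6:.0f} um")
```

    With the linear chirp exactly cancelled by R56, the residual bunch length is set mainly by the uncorrelated energy spread times R56, a compression of well over an order of magnitude here.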

  4. LWR-WIMS, a computer code for light water reactor lattice calculations

    International Nuclear Information System (INIS)

    Halsall, M.J.

    1982-06-01

    LWR-WIMS is a comprehensive scheme of computation for studying the reactor physics aspects and burnup behaviour of typical lattices of light water reactors. This report describes the physics methods that have been incorporated in the code, and the modifications that have been made since the code was issued in 1972. (U.K.)

  5. HYDROCOIN [HYDROlogic COde INtercomparison] Level 1: Benchmarking and verification test results with CFEST [Coupled Fluid, Energy, and Solute Transport] code: Draft report

    International Nuclear Information System (INIS)

    Yabusaki, S.; Cole, C.; Monti, A.M.; Gupta, S.K.

    1987-04-01

    Part of the safety analysis is evaluating groundwater flow through the repository and the host rock to the accessible environment by developing mathematical or analytical models and numerical computer codes describing the flow mechanisms. This need led to the establishment of an international project called HYDROCOIN (HYDROlogic COde INtercomparison) organized by the Swedish Nuclear Power Inspectorate, a forum for discussing techniques and strategies in subsurface hydrologic modeling. The major objective of the present effort, HYDROCOIN Level 1, is determining the numerical accuracy of the computer codes. The definition of each case includes the input parameters, the governing equations, the output specifications, and the format. The Coupled Fluid, Energy, and Solute Transport (CFEST) code was applied to solve cases 1, 2, 4, 5, and 7; the Finite Element Three-Dimensional Groundwater (FE3DGW) Flow Model was used to solve case 6. Case 3 has been ignored because unsaturated flow is not pertinent to SRP. This report presents the Level 1 results furnished by the project teams. The numerical accuracy of the codes is determined by (1) comparing the computational results with analytical solutions for cases that have analytical solutions (namely cases 1 and 4), and (2) intercomparing results from codes for cases which do not have analytical solutions (cases 2, 5, 6, and 7). Cases 1, 2, 6, and 7 relate to flow analyses, whereas cases 4 and 5 require nonlinear solutions. 7 refs., 71 figs., 9 tabs

  6. Computer codes for level 1 probabilistic safety assessment

    International Nuclear Information System (INIS)

    1990-06-01

    Probabilistic Safety Assessment (PSA) entails several laborious tasks suitable for computer code assistance. This guide identifies these tasks, presents guidelines for selecting and utilizing computer codes in the conduct of the PSA tasks and for the use of PSA results in safety management, and provides information on available codes suggested or applied in performing PSA in nuclear power plants. The guidance is intended for use by nuclear power plant system engineers, safety and operating personnel, and regulators. Large efforts are made today to provide PC-based software systems and PSA-processed information in a way that enables their use as a safety management tool by the nuclear power plant's overall management. Guidelines on the characteristics of software needed by management, to help them prepare software that meets their specific needs, are also provided. Most of these computer codes are also applicable to PSA of other industrial facilities. The scope of this document is limited to computer codes used for the treatment of internal events; it does not address other codes available mainly for the analysis of external events (e.g. seismic analysis) or flood and fire analysis. Codes discussed in the document are those used for probabilistic rather than phenomenological modelling. It should also be appreciated that these guidelines are not intended to lead the user to the selection of one specific code; they simply provide criteria for the selection. Refs and tabs

  7. Code of a Tokamak Fusion Energy Facility ITER

    International Nuclear Information System (INIS)

    Yasuhide Asada; Kenzo Miya; Kazuhiko Hada; Eisuke Tada

    2002-01-01

The technical structural code for ITER (the International Thermonuclear Experimental Reactor) and, as a more generic application, for D-T burning fusion power facilities (hereafter, the Fusion Code) should be innovative, because the safety characteristics and mechanical components of such facilities differ substantially from those of nuclear fission reactors, and because several new fabrication and examination technologies must be introduced. The introduction of such newly developed technologies as inspection-free automatic welding into the Fusion Code is rationalized by a pilot application of a new code concept, the "system-based code for integrity". This concept means integrating the element technical items necessary for the construction, operation, and maintenance of mechanical components of fusion power facilities into a single system, so as to optimize the total margin of these components. Unique and innovative items of the Fusion Code typically include: - Use of non-metals; - Cryogenic application; - New design margins on allowable stresses, and other new design rules; - Use of inspection-free automatic welding, and other newly developed fabrication technologies; - Graded approach of the quality assurance standard to cover radiological safety-system components as well as non-safety-system components; - Consideration of replacement components. (authors)

  8. Code Verification Capabilities and Assessments in Support of ASC V&V Level 2 Milestone #6035

    Energy Technology Data Exchange (ETDEWEB)

    Doebling, Scott William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budzien, Joanne Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Harwell, Megan Louise [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hickmann, Kyle Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Israel, Daniel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Magrogan, William Richard III [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Singleton, Jr., Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Srinivasan, Gowri [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Walter, Jr, John William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Woods, Charles Nathan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-26

    This document provides a summary of the code verification activities supporting the FY17 Level 2 V&V milestone entitled “Deliver a Capability for V&V Assessments of Code Implementations of Physics Models and Numerical Algorithms in Support of Future Predictive Capability Framework Pegposts.” The physics validation activities supporting this milestone are documented separately. The objectives of this portion of the milestone are: 1) Develop software tools to support code verification analysis; 2) Document standard definitions of code verification test problems; and 3) Perform code verification assessments (focusing on error behavior of algorithms). This report and a set of additional standalone documents serve as the compilation of results demonstrating accomplishment of these objectives.
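The third objective, assessing the error behavior of algorithms, usually reduces to computing an observed order of accuracy from a grid-refinement study and comparing it with the algorithm's theoretical order. A minimal sketch of that computation (illustrative only, not the milestone's actual tooling):

```python
import numpy as np

def observed_order(h, err):
    """Observed order of accuracy from errors on a sequence of refined
    grids: p_i = log(e_i / e_{i+1}) / log(h_i / h_{i+1})."""
    h, err = np.asarray(h, dtype=float), np.asarray(err, dtype=float)
    return np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])

# Forward-difference derivative of sin(x) at x = 1 is first-order accurate,
# so the observed order should approach 1 under refinement.
hs = np.array([0.1, 0.05, 0.025, 0.0125])
errs = np.abs((np.sin(1.0 + hs) - np.sin(1.0)) / hs - np.cos(1.0))
p = observed_order(hs, errs)
```

A code verification assessment then checks that the observed orders match the scheme's design order within tolerance.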

  9. Analysis of results of AZTRAN and AZKIND codes for a BWR

    International Nuclear Information System (INIS)

    Bastida O, G. E.; Vallejo Q, J. A.; Galicia A, J.; Francois L, J. L.; Xolocostli M, J. V.; Rodriguez H, A.; Gomez T, A. M.

    2016-09-01

This paper presents an analysis of results obtained from simulations performed with the neutron transport code AZTRAN and the neutron diffusion kinetics code AZKIND, based on comparisons with models corresponding to a typical BWR, in order to verify the behavior and reliability of the values obtained with these codes at their current stage of development. For this purpose, the same geometries were also simulated with validated nuclear codes, such as CASMO, MCNP5 and Serpent. The results obtained are considered adequate, since they are comparable with those obtained and reported with other codes, mainly in terms of the neutron multiplication factor and the power distribution. (Author)

  10. The PSACOIN level 1B exercise: A probabilistic code intercomparison involving a four compartment biosphere model

    International Nuclear Information System (INIS)

    Klos, R.A.; Sinclair, J.E.; Torres, C.; Mobbs, S.F.; Galson, D.A.

    1991-01-01

The Probabilistic Systems Assessment Code (PSAC) User Group of the OECD Nuclear Energy Agency has organised a series of code intercomparison studies relevant to the performance assessment of underground repositories for radioactive wastes, known collectively by the name PSACOIN. The latest of these to be undertaken is designated PSACOIN Level 1b, and its case specification provides a complete assessment model of the behaviour of radionuclides following release into the biosphere. PSACOIN Level 1b differs from other biosphere-oriented intercomparison exercises in that individual dose is the end point of the calculations, as opposed to any intermediate quantity. The PSACOIN Level 1b case specification describes a simple source term which is used to simulate the release of activity to the biosphere from certain types of near-surface waste repository, the transport of radionuclides through the biosphere, and their eventual uptake by humankind. The biosphere sub-model comprises four compartments representing top and deep soil layers, river water and river sediment. The transport of radionuclides between the physical compartments is described by ten transfer coefficients, and doses to humankind arise from the simultaneous consumption of water, fish, meat, milk, and grain as well as from dust inhalation and external γ-irradiation. The parameters of the exposure-pathway sub-model are chosen to be representative of an individual living in a small agrarian community. (13 refs., 3 figs., 2 tabs.)

  11. Mercure IV code application to the external dose computation from low and medium level wastes

    International Nuclear Information System (INIS)

    Tomassini, T.

    1985-01-01

In the present work the external dose from low- and medium-level wastes is calculated using the MERCURE IV code. The code uses the Monte Carlo method to integrate multigroup line-of-sight attenuation kernels

  12. Toddlers' categorization of typical and scrambled dolls and cars.

    Science.gov (United States)

    Heron, Michelle; Slaughter, Virginia

    2008-09-01

    Previous research has demonstrated discrimination of scrambled from typical human body shapes at 15-18 months of age [Slaughter, V., & Heron, M. (2004). Origins and early development of human body knowledge. Monographs of the Society for Research in Child Development, 69]. In the current study 18-, 24- and 30-month-old infants were presented with four typical and four scrambled dolls in a sequential touching procedure, to assess the development of explicit categorization of human body shapes. Infants were also presented with typical and scrambled cars, allowing comparison of infants' categorization of scrambled and typical exemplars in a different domain. Spontaneous comments regarding category membership were recorded. Girls categorized dolls and cars as typical or scrambled at 30 months, whereas boys only categorized the cars. Earliest categorization was for typical and scrambled cars, at 24 months, but only for boys. Language-based knowledge, coded from infants' comments, followed the same pattern. This suggests that human body knowledge does not have privileged status in infancy. Gender differences in performance are discussed.

  13. A Very Fast and Angular Momentum Conserving Tree Code

    International Nuclear Information System (INIS)

    Marcello, Dominic C.

    2017-01-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.

  14. A Very Fast and Angular Momentum Conserving Tree Code

    Energy Technology Data Exchange (ETDEWEB)

    Marcello, Dominic C., E-mail: dmarce504@gmail.com [Department of Physics and Astronomy, and Center for Computation and Technology Louisiana State University, Baton Rouge, LA 70803 (United States)

    2017-09-01

    There are many methods used to compute the classical gravitational field in astrophysical simulation codes. With the exception of the typically impractical method of direct computation, none ensure conservation of angular momentum to machine precision. Under uniform time-stepping, the Cartesian fast multipole method of Dehnen (also known as the very fast tree code) conserves linear momentum to machine precision. We show that it is possible to modify this method in a way that conserves both angular and linear momenta.
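The momentum-conservation property attributed above to direct computation follows from accumulating each pair force once with opposite signs; because the pair force is central, the net torque cancels as well. A small illustrative sketch (G = 1, no softening; this is the direct method the abstract contrasts with, not the paper's fast multipole scheme):

```python
import numpy as np

def accelerations(pos, mass):
    """Direct O(N^2) gravitational accelerations (G = 1). Each pair term is
    computed once and applied with opposite signs, so the net rate of change
    of linear momentum cancels to machine precision; the force is central,
    so the net torque (angular-momentum change) cancels too."""
    acc = np.zeros_like(pos)
    n = len(mass)
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            f = d / np.linalg.norm(d) ** 3   # pair term with masses factored out
            acc[i] += mass[j] * f
            acc[j] -= mass[i] * f
    return acc

rng = np.random.default_rng(2)
pos = rng.normal(size=(5, 3))
mass = rng.uniform(1.0, 2.0, 5)
acc = accelerations(pos, mass)
dp = (mass[:, None] * acc).sum(axis=0)               # d(total momentum)/dt
dL = np.cross(pos, mass[:, None] * acc).sum(axis=0)  # d(total ang. momentum)/dt
```

Both `dp` and `dL` vanish to rounding error; the point of the paper is recovering this property at tree-code cost.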

  15. A modified carrier-to-code leveling method for retrieving ionospheric observables and detecting short-term temporal variability of receiver differential code biases

    Science.gov (United States)

    Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Xiao; Li, Min

    2018-03-01

    Sensing the ionosphere with the global positioning system involves two sequential tasks, namely the ionospheric observable retrieval and the ionospheric parameter estimation. A prominent source of error has long been identified as short-term variability in receiver differential code bias (rDCB). We modify the carrier-to-code leveling (CCL), a method commonly used to accomplish the first task, through assuming rDCB to be unlinked in time. Aside from the ionospheric observables, which are affected by, among others, the rDCB at one reference epoch, the Modified CCL (MCCL) can also provide the rDCB offsets with respect to the reference epoch as by-products. Two consequences arise. First, MCCL is capable of excluding the effects of time-varying rDCB from the ionospheric observables, which, in turn, improves the quality of ionospheric parameters of interest. Second, MCCL has significant potential as a means to detect between-epoch fluctuations experienced by rDCB of a single receiver.
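The unmodified CCL step that the paper modifies can be sketched in a few lines (toy geometry-free observables in arbitrary units; the arc-constant rDCB assumption implicit here is exactly what MCCL relaxes):

```python
import numpy as np

def carrier_to_code_leveling(L4, P4):
    """Standard CCL step: shift the precise but ambiguous geometry-free
    carrier arc L4 by the arc-average of (P4 - L4), where P4 is the noisy
    but unambiguous geometry-free code observable. Receiver DCB is assumed
    constant over the arc."""
    return L4 + np.mean(P4 - L4)

# toy arc: a smooth ionospheric trend, an unknown carrier ambiguity of
# 12.34, and code noise with standard deviation 0.3
t = np.linspace(0.0, 1.0, 100)
iono = 5.0 + 2.0 * t
L4 = iono + 12.34
P4 = iono + np.random.default_rng(0).normal(0.0, 0.3, t.size)
leveled = carrier_to_code_leveling(L4, P4)
```

Averaging over the arc suppresses the code noise, but any drift in the receiver DCB over the arc would leak into the single leveling constant, which motivates the epoch-wise treatment proposed in the paper.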

  16. Use of a hybrid code for global-scale plasma simulation

    International Nuclear Information System (INIS)

    Swift, D.W.

    1996-01-01

    This paper presents a demonstration of the use of a hybrid code to model the Earth's magnetosphere on a global scale. The typical hybrid code calculates the interaction of fully kinetic ions and a massless electron fluid with the magnetic field. This code also includes a fluid ion component to approximate the cold ionospheric plasma that spatially overlaps with the discrete particle component. Other innovative features of the code include a numerically generated curvilinear coordinate system and subcycling of the magnetic field update to the particle push. These innovations allow the code to accommodate disparate time and distance scales. The demonstration is a simulation of the noon meridian plane of the magnetosphere. The code exhibits the formation of fast and slow-mode shocks and tearing reconnection at the magnetopause. New results include particle acceleration in the cusp and nearly field aligned currents linking the cusp and polar ionosphere. The paper also describes a density depletion instability and measures to avoid it. 27 refs., 4 figs

  17. Automatic examination of nuclear reactor vessels with focused search units. Status and typical application to inspections performed in accordance with ASME code

    International Nuclear Information System (INIS)

    Verger, B.; Saglio, R.

    1981-05-01

The use of focused search units in nuclear reactor vessel examinations has significantly increased the capability of flaw indication detection and characterization. These search units in particular allow more accurate sizing of indications and more efficient follow-up of their history. In this respect, they are a unique tool for the safety and reliability of installations. It was this type of search unit that was adopted to perform the examinations required within the scope of in-service inspections of all P.W.R. reactors of the French nuclear program. This paper summarizes the results gathered through the 41 examinations performed over the last five years. A typical application of focused search units in automated inspections performed in accordance with ASME code requirements on P.W.R. nuclear reactor vessels is then described

  18. L-type calcium channels refine the neural population code of sound level

    Science.gov (United States)

    Grimsley, Calum Alex; Green, David Brian

    2016-01-01

    The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level. PMID:27605536

  19. Code Calibration as a Decision Problem

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1993-01-01

Calibration of partial coefficients for a class of structures where no code exists is considered. The partial coefficients are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is discussed. Results from code calibration for rubble mound breakwater designs are shown.

  20. Squares of Random Linear Codes

    DEFF Research Database (Denmark)

    Cascudo Pueyo, Ignacio; Cramer, Ronald; Mirandola, Diego

    2015-01-01

Given a linear code $C$, one can define the $d$-th power of $C$ as the span of all componentwise products of $d$ elements of $C$. A power of $C$ may quickly fill the whole space. Our purpose is to answer the following question: does the square of a code ``typically'' fill the whole space? We give a positive answer for codes of dimension $k$ and length roughly $\frac{1}{2}k^2$ or smaller. Moreover, the convergence speed is exponential if the difference $k(k+1)/2-n$ is at least linear in $k$. The proof uses random coding and combinatorial arguments, together with algebraic tools involving the precise…
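The central object here, the span of componentwise products, can be computed directly for small binary codes; the sketch below specializes to GF(2) (the paper works over general fields), where the componentwise product is bitwise AND and products of generator pairs span the square by bilinearity:

```python
import itertools

def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask of a codeword."""
    basis = []
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)  # clear b's leading bit from row if set
        if row:
            basis.append(row)
    return len(basis)

def square_dim(G):
    """Dimension of C^2, the span of all componentwise (bitwise-AND)
    products of pairs of codewords; generator pairs suffice by bilinearity."""
    prods = {a & b for a, b in itertools.combinations_with_replacement(G, 2)}
    return gf2_rank(list(prods))

# A structured (non-random) example: Reed-Muller RM(1,3) with k = 4, n = 8,
# generator rows 1, x1, x2, x3 as evaluation vectors on 8 points. Its square
# is RM(2,3), of dimension 7 < n, so this particular square does not fill
# the whole space -- unlike the typical random code in the paper's regime.
G = [0b11111111, 0b10101010, 0b11001100, 0b11110000]
```

Running `square_dim(G)` returns 7 while `gf2_rank(G)` returns 4, illustrating how quickly the square grows relative to the code itself.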

  1. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab
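The chaining of the compartments can be caricatured in a few lines; the breach and leach laws below (exponential container failure, first-order leaching) are hypothetical stand-ins for illustration, not the BREACH and LEACH models themselves:

```python
import numpy as np

def blt_source(inventory0, tau_breach, leach_rate, t_end, dt=0.1):
    """Schematic BLT-style chaining: the breached container fraction gates
    first-order leaching of the remaining inventory, and the leach rate is
    the solute source handed to the transport compartment.
    All functional forms here are hypothetical."""
    steps = int(round(t_end / dt))
    inv = inventory0
    source = np.zeros(steps)
    for k in range(steps):
        breached = 1.0 - np.exp(-(k * dt) / tau_breach)  # failed fraction
        source[k] = leach_rate * breached * inv          # release rate
        inv -= source[k] * dt                            # deplete inventory
    return source, inv

source, remaining = blt_source(1.0, tau_breach=50.0, leach_rate=0.01, t_end=300.0)
```

The explicit time-stepping keeps the mass balance closed: released activity plus remaining inventory equals the initial inventory.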

  2. Specification of a test problem for HYDROCOIN [Hydrologic Code Intercomparison] Level 3 Case 2: Sensitivity analysis for deep disposal in partially saturated, fractured tuff

    International Nuclear Information System (INIS)

    Prindle, R.W.

    1987-08-01

    The international Hydrologic Code Intercomparison Project (HYDROCOIN) was formed to evaluate hydrogeologic models and computer codes and their use in performance assessment for high-level radioactive waste repositories. Three principal activities in the HYDROCOIN Project are Level 1, verification and benchmarking of hydrologic codes; Level 2, validation of hydrologic models; and Level 3, sensitivity and uncertainty analyses of the models and codes. This report presents a test case defined for the HYDROCOIN Level 3 activity to explore the feasibility of applying various sensitivity-analysis methodologies to a highly nonlinear model of isothermal, partially saturated flow through fractured tuff, and to develop modeling approaches to implement the methodologies for sensitivity analysis. These analyses involve an idealized representation of a repository sited above the water table in a layered sequence of welded and nonwelded, fractured, volcanic tuffs. The analyses suggested here include one-dimensional, steady flow; one-dimensional, nonsteady flow; and two-dimensional, steady flow. Performance measures to be used to evaluate model sensitivities are also defined; the measures are related to regulatory criteria for containment of high-level radioactive waste. 14 refs., 5 figs., 4 tabs

  3. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (energy release rate G, energy release rate in thermo-elasto-plasticity, 3D local energy release rate, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and random dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  4. Spatial Resolution of the ECE for JET Typical Parameters

    International Nuclear Information System (INIS)

    Tribaldos, V.

    2000-01-01

The purpose of this report is to estimate the spatial resolution of electron cyclotron emission (ECE) for the typical plasmas found in the JET tokamak. The analysis of the spatial resolution of the ECE is based on the underlying physical emission process, and a working definition is presented and discussed. To make these estimates, a typical JET pulse is analysed, taking into account the magnetic configuration and the density and temperature profiles obtained with the EFIT code and from the LIDAR diagnostic. Ray-tracing simulations are performed for a Maxwellian plasma taking into account the antenna pattern. (Author) 5 refs

  5. Sensitivity analysis of a low-level waste environmental transport code

    International Nuclear Information System (INIS)

    Hiromoto, G.

    1989-01-01

Results are presented from a sensitivity analysis of a computer code designed to simulate the environmental transport of radionuclides buried at shallow land waste repositories. A sensitivity-analysis methodology, based on response-surface replacement and statistical sensitivity estimators, was developed to address the relative importance of the input parameters for the model output. A response surface replacing the model was constructed by stepwise regression, after sampling input vectors from the ranges and distributions of the input variables and running the code to generate the associated output data. Sensitivity estimators were computed using partial rank correlation coefficients and standardized rank regression coefficients. The results showed that the techniques employed in this work provide a feasible means to perform a sensitivity analysis of general nonlinear environmental radionuclide transport models. (author)
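One of the rank-based estimators named above, the standardized rank regression coefficient, can be sketched in a few lines (illustrative toy model, not the report's code):

```python
import numpy as np

def srrc(X, y):
    """Standardized rank regression coefficients: rank-transform each input
    column and the output, standardize, then fit by ordinary least squares.
    Ties are ignored for simplicity; the inputs here are continuous."""
    def ranks(v):
        r = np.empty(v.size)
        r[np.argsort(v)] = np.arange(v.size, dtype=float)
        return r
    Xr = np.column_stack([ranks(c) for c in X.T])
    yr = ranks(y)
    Xs = (Xr - Xr.mean(axis=0)) / Xr.std(axis=0)
    ys = (yr - yr.mean()) / yr.std()
    beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    return beta

# toy "transport model": strongly nonlinear in x0, weakly dependent on x1;
# the rank transform handles the monotone nonlinearity, so the estimator
# should attribute most of the output sensitivity to x0
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (500, 2))
y = np.exp(3.0 * X[:, 0]) + 0.2 * X[:, 1]
beta = srrc(X, y)
```

The rank transform is what makes such estimators robust for monotone nonlinear models, which is why they suit the nonlinear transport code studied here.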

  6. The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children

    Science.gov (United States)

    Ameel, Eef; Storms, Gert

    2016-01-01

An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality, and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided, but that both variables are often significantly correlated in older children and even in adults. PMID:27322371

  7. CFRX, a one-and-a-quarter-dimensional transport code for field-reversed configuration studies

    International Nuclear Information System (INIS)

    Hsiao Mingyuan

    1989-01-01

    A one-and-a-quarter-dimensional transport code, which includes radial as well as some two-dimensional effects for field-reversed configurations, is described. The set of transport equations is transformed to a set of new independent and dependent variables and is solved as a coupled initial-boundary value problem. The code simulation includes both the closed and open field regions. The axial effects incorporated include global axial force balance, axial losses in the open field region, and flux surface averaging over the closed field region. A typical example of the code results is also given. (orig.)

  8. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    Benyoucef M

    2007-01-01

We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix, and thus does not require the calculation of the cross-correlation matrix (which requires 2NK2 floating point operations (flops), where N is the processing gain and K is the number of users), significantly reducing the overall computational complexity. It is thus suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.

  9. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses the spreading codes directly instead of the cross-correlation matrix, and thus does not require the calculation of the cross-correlation matrix (which requires 2NK2 floating point operations (flops), where N is the processing gain and K is the number of users), significantly reducing the overall computational complexity. It is thus suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
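The convergence claim — (B)SOR iterates approach the decorrelator output R⁻¹y for over-relaxation factors in ]0, 2[ — can be illustrated with scalar (block size 1) SOR on a small random-code system; this is a sketch of the underlying iteration, not the paper's chip-level structure:

```python
import numpy as np

def sor_detect(R, y, omega=1.2, sweeps=500):
    """Successive over-relaxation for R x = y (block size 1, a special case
    of BSOR). For symmetric positive-definite R and 0 < omega < 2 the
    iterates converge to the decorrelator output R^{-1} y."""
    x = np.zeros_like(y)
    for _ in range(sweeps):
        for k in range(len(y)):
            # residual of equation k, using the freshest values of x
            x[k] += omega * (y[k] - R[k] @ x) / R[k, k]
    return x

# random +/-1 spreading codes: N = 32 chips, K = 8 users
rng = np.random.default_rng(3)
S = rng.choice([-1.0, 1.0], size=(32, 8)) / np.sqrt(32)
R = S.T @ S                        # cross-correlation matrix (unit diagonal)
b = rng.choice([-1.0, 1.0], 8)     # transmitted symbols
y = R @ b                          # noise-free matched-filter outputs
x = sor_detect(R, y)               # converges to R^{-1} y = b
```

In the noise-free case the decorrelator output equals the transmitted symbols, so the iterates recover `b` to numerical precision.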

  10. An improvement of estimation method of source term to the environment for interfacing system LOCA for typical PWR using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Ahn, Kwang Il [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

An interfacing-system loss-of-coolant accident (ISLOCA) has been identified as the most hazardous accident scenario in typical PWR plants. The present study, as an effort to improve knowledge of the source term to the environment during an ISLOCA, focuses on improving the estimation method. The improvement takes into account the effects of the broken pipeline and of the auxiliary building structures relevant to an ISLOCA. The source term to the environment was estimated for the OPR-1000 plants with the MELCOR code, version 1.8.6. The key features of the source term show that a massive amount of fission products is released from the beginning of core degradation until vessel breach. The release of fission products may be affected by the broken pipeline and the auxiliary building structures associated with the release pathway.

  11. Challenges to code status discussions for pediatric patients.

    Directory of Open Access Journals (Sweden)

    Katherine E Kruse

    Full Text Available In the context of serious or life-limiting illness, pediatric patients and their families are faced with difficult decisions surrounding appropriate resuscitation efforts in the event of a cardiopulmonary arrest. Code status orders are one way to inform end-of-life medical decision making. The objectives of this study are to evaluate the extent to which pediatric providers have knowledge of code status options and explore the association of provider role with (1 knowledge of code status options, (2 perception of timing of code status discussions, (3 perception of family receptivity to code status discussions, and (4 comfort carrying out code status discussions.Nurses, trainees (residents and fellows, and attending physicians from pediatric units where code status discussions typically occur completed a short survey questionnaire regarding their knowledge of code status options and perceptions surrounding code status discussions.Single center, quaternary care children's hospital.203 nurses, 31 trainees, and 29 attending physicians in 4 high-acuity pediatric units responded to the survey (N = 263, 90% response rate. Based on an objective knowledge measure, providers demonstrate poor understanding of available code status options, with only 22% of providers able to enumerate more than two of four available code status options. In contrast, provider groups self-report high levels of familiarity with available code status options, with attending physicians reporting significantly higher levels than nurses and trainees (p = 0.0125. Nurses and attending physicians show significantly different perception of code status discussion timing, with majority of nurses (63.4% perceiving discussions as occurring "too late" or "much too late" and majority of attending physicians (55.6% perceiving the timing as "about right" (p<0.0001. 
Attending physicians report significantly higher comfort having code status discussions with families than do nurses or trainees

  12. Power feedback effects in the LEM code

    International Nuclear Information System (INIS)

    Kromar, M.

    1999-01-01

The nodal diffusion code LEM has been extended with a power feedback option. Thermal-hydraulic and neutronic coupling is handled with the Reactivity Coefficient Method. Results of the code testing are presented. Verification is done on typical non-uprated NPP Krsko reload cycles. The results show that the code fulfills the objectives arising in the process of reactor core analysis. (author)

  13. Solutions to HYDROCOIN [Hydrologic Code Intercomparison] Level 1 problems using STOKES and PARTICLE (Cases 1,2,4,7)

    International Nuclear Information System (INIS)

    Gureghian, A.B.; Andrews, A.; Steidl, S.B.; Brandstetter, A.

    1987-10-01

    HYDROCOIN (Hydrologic Code Intercomparison) Level 1 benchmark problems are solved using the finite element ground-water flow code STOKES and the pathline generating code PARTICLE developed for the Office of Crystalline Repository Development (OCRD). The objective of the Level 1 benchmark problems is to verify the numerical accuracy of ground-water flow codes by intercomparison of their results with analytical solutions and other numerical computer codes. Seven test cases were proposed for Level 1 to the Swedish Nuclear Power Inspectorate, the managing participant of HYDROCOIN. Cases 1, 2, 4, and 7 were selected by OCRD because of their appropriateness to the nature of crystalline repository hydrologic performance. The background relevance, conceptual model, and assumptions of each case are presented. The governing equations, boundary conditions, input parameters, and the solution schemes applied to each case are discussed. The results are shown in graphic and tabular form with concluding remarks. The results demonstrate the two-dimensional verification of STOKES and PARTICLE. 5 refs., 61 figs., 30 tabs

  14. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to effect this reduced run time are described. Various test cases are detailed.

  15. Clean Code - Why you should care

    CERN Multimedia

    CERN. Geneva

    2015-01-01

- Martin Fowler Writing code is communication, not solely with the computer that executes it, but also with other developers and with oneself. Developers spend much of their working time reading and understanding code that was written by other developers or by themselves in the past. The readability of the code is an important factor in the time needed to find a bug or to add new functionality, which in turn has a big impact on productivity. Code that is difficult to understand, hard to maintain and refactor, and offers many spots for bugs to hide is not considered "clean code". But what can be considered "clean code", and what are the advantages of strictly applying its guidelines? In this presentation we will take a look at some typical "code smells" and at proposed guidelines for improving your coding skills, so that you write cleaner code that is less bug-prone and easier to maintain.

  16. Lung volumes: measurement, clinical use, and coding.

    Science.gov (United States)

    Flesch, Judd D; Dine, C Jessica

    2012-08-01

    Measurement of lung volumes is an integral part of complete pulmonary function testing. Some lung volumes can be measured during spirometry; however, measurement of the residual volume (RV), functional residual capacity (FRC), and total lung capacity (TLC) requires special techniques. FRC is typically measured by one of three methods. Body plethysmography uses Boyle's Law to determine lung volumes, whereas inert gas dilution and nitrogen washout use dilution properties of gases. After determination of FRC, expiratory reserve volume and inspiratory vital capacity are measured, which allows the calculation of the RV and TLC. Lung volumes are commonly used for the diagnosis of restriction. In obstructive lung disease, they are used to assess for hyperinflation. Changes in lung volumes can also be seen in a number of other clinical conditions. Reimbursement for measurement of lung volumes requires knowledge of current procedural terminology (CPT) codes, relevant indications, and an appropriate level of physician supervision. Because of recent efforts to eliminate payment inefficiencies, the 10 previous CPT codes for lung volumes, airway resistance, and diffusing capacity have been bundled into four new CPT codes.
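
The derived-volume arithmetic described above (RV from FRC and ERV, TLC from RV and the inspiratory vital capacity) can be sketched directly. The numeric values below are illustrative inputs, not reference values.

```python
# Derived lung volumes from measured quantities, as described in the abstract:
# RV cannot be measured directly; it follows from FRC and ERV, and TLC
# follows from RV and the inspiratory vital capacity (IVC).
# The input numbers are illustrative, not normative reference values.

def derived_volumes(frc_l, erv_l, ivc_l):
    """Return (RV, TLC) in liters given FRC, ERV, and IVC in liters."""
    rv = frc_l - erv_l   # residual volume: what remains after full expiration
    tlc = rv + ivc_l     # total lung capacity: RV plus a full inspiration
    return rv, tlc

rv, tlc = derived_volumes(frc_l=3.0, erv_l=1.0, ivc_l=4.0)
print(rv, tlc)  # 2.0 6.0
```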

  17. CESARR V.2 manual: Computer code for the evaluation of surface storage of low and medium level radioactive waste

    International Nuclear Information System (INIS)

    Moya Rivera, J.A.; Bolado Lavin, R.

    1997-01-01

CESARR (Code for the safety evaluation of low- and medium-level radioactive waste storage) was developed for probabilistic safety evaluations of low- and medium-level radioactive waste storage facilities

  18. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  19. Cold-Leg Small Break LOCA Analysis of APR1400 Plant Using a SPACE/sEM Code

    International Nuclear Information System (INIS)

    Lim, Sang Gyu; Lee, Suk Ho; Yu, Keuk Jong; Kim, Han Gon; Lee, Jae Yong

    2013-01-01

The Small Break Loss-of-Coolant Accident (SBLOCA) evaluation methodology (EM) for APR1400, called sEM, is being developed using the SPACE code. SPACE/sEM sets up a conservative evaluation methodology in accordance with Appendix K of 10 CFR 50. Major required and acceptable features of the evaluation models are as follows. - Fission product decay : 1.2 times the ANS97 decay curve - Critical flow model : Henry-Fauske and Moody two-phase critical flow models - Metal-water reaction model : Baker-Just equation - Critical Heat Flux (CHF) : B and W, Barnett and Modified Barnett correlations - Post-CHF : Groeneveld 5.7 film boiling correlation A test matrix is established to validate the SPACE/sEM code for the major SBLOCA phenomena, e.g. core level swelling and boiling, core heat transfer, critical flow, loop seal clearance and their integrated effects. The separate effect tests (SETs) and integral effect tests (IETs) are successfully performed, and the results show that the SPACE/sEM code is conservative compared with experimental data. Finally, plant calculations of SBLOCA for APR1400 are conducted as described below. - Break location sensitivity : DVI line, hot-leg, cold-leg, pump suction leg. - Break size spectrum : 0.4 ft 2 ∼0.02 ft 2 (DVI), 0.5 ft 2 ∼0.02 ft 2 (hot-leg, cold-leg, pump suction leg) This paper deals with the break size spectrum analysis of cold-leg break accidents. Based on the calculation results, the emergency core cooling system (ECCS) performance of APR1400 and typical SBLOCA phenomena can be evaluated. Cold-leg SBLOCA analysis for APR1400 is performed using the SPACE/sEM code under harsh environmental conditions. The SPACE/sEM code reproduces the typical SBLOCA behavior and predicts it reasonably. Although the SPACE/sEM code has conservative models and correlations based on Appendix K of 10 CFR 50, the PCT does not exceed the requirement (1477 K). It is concluded that the ECCS in APR1400 has sufficient performance in cold-leg SBLOCA

  20. Cold-Leg Small Break LOCA Analysis of APR1400 Plant Using a SPACE/sEM Code

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Sang Gyu; Lee, Suk Ho; Yu, Keuk Jong; Kim, Han Gon; Lee, Jae Yong [Central Research Institute, KHNP, Ltd., Daejeon (Korea, Republic of)

    2013-10-15

The Small Break Loss-of-Coolant Accident (SBLOCA) evaluation methodology (EM) for APR1400, called sEM, is being developed using the SPACE code. SPACE/sEM sets up a conservative evaluation methodology in accordance with Appendix K of 10 CFR 50. Major required and acceptable features of the evaluation models are as follows. - Fission product decay : 1.2 times the ANS97 decay curve - Critical flow model : Henry-Fauske and Moody two-phase critical flow models - Metal-water reaction model : Baker-Just equation - Critical Heat Flux (CHF) : B and W, Barnett and Modified Barnett correlations - Post-CHF : Groeneveld 5.7 film boiling correlation A test matrix is established to validate the SPACE/sEM code for the major SBLOCA phenomena, e.g. core level swelling and boiling, core heat transfer, critical flow, loop seal clearance and their integrated effects. The separate effect tests (SETs) and integral effect tests (IETs) are successfully performed, and the results show that the SPACE/sEM code is conservative compared with experimental data. Finally, plant calculations of SBLOCA for APR1400 are conducted as described below. - Break location sensitivity : DVI line, hot-leg, cold-leg, pump suction leg. - Break size spectrum : 0.4ft{sup 2}∼0.02ft{sup 2} (DVI), 0.5ft{sup 2}∼0.02ft{sup 2} (hot-leg, cold-leg, pump suction leg) This paper deals with the break size spectrum analysis of cold-leg break accidents. Based on the calculation results, the emergency core cooling system (ECCS) performance of APR1400 and typical SBLOCA phenomena can be evaluated. Cold-leg SBLOCA analysis for APR1400 is performed using the SPACE/sEM code under harsh environmental conditions. The SPACE/sEM code reproduces the typical SBLOCA behavior and predicts it reasonably. Although the SPACE/sEM code has conservative models and correlations based on Appendix K of 10 CFR 50, the PCT does not exceed the requirement (1477 K). It is concluded that the ECCS in APR1400 has sufficient performance in cold-leg SBLOCA.

  1. A Denotational Semantics for Communicating Unstructured Code

    Directory of Open Access Journals (Sweden)

    Nils Jähnig

    2015-03-01

Full Text Available An important property of programming language semantics is that they should be compositional. However, unstructured low-level code contains goto-like commands, making it hard to define a semantics that is compositional. In this paper, we follow the ideas of Saabas and Uustalu to structure low-level code. This gives us the possibility to define a compositional denotational semantics based on least fixed points to allow for the use of inductive verification methods. We capture the semantics of communication using finite traces similar to the denotations of CSP. In addition, we examine properties of this semantics and give an example that demonstrates reasoning about communication and jumps. With this semantics, we lay the foundations for a proof calculus that captures both the semantics of unstructured low-level code and communication.

  2. An overview of the geochemical code MINTEQ: Applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Peterson, S.R.; Opitz, B.E.; Graham, M.J.; Eary, L.E.

    1987-03-01

The MINTEQ geochemical computer code, developed at the Pacific Northwest Laboratory (PNL), integrates many of the capabilities of its two immediate predecessors, MINEQL and WATEQ3. The MINTEQ code will be used in the Special Waste Form Lysimeters-Arid program to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code can calculate ion speciation/solubility, adsorption, oxidation-reduction, gas-phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial, solidified low-level wastes. The wastes being evaluated include power-reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code was upgraded preparatory to performing the geochemical modeling. Thermodynamic data for solid phases and aqueous species containing Sb, Ce, Cs, or Co were added to the MINTEQ database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the waste forms predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model

  3. Anxiety Disorders in Typically Developing Youth: Autism Spectrum Symptoms as a Predictor of Cognitive-Behavioral Treatment

    Science.gov (United States)

    Puleo, Connor M.; Kendall, Philip C.

    2011-01-01

    Symptoms of autism spectrum disorder (ASD) were assessed (Social Responsiveness Scale-Parent (SRS-P); coded in-session behavior) in typically-developing, anxiety-disordered children (N = 50) treated with cognitive-behavioral therapy (CBT). "Study 1": children with moderate autistic symptomology (per SRS-P) were significantly more likely to improve…

  4. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  5. Composite Extension Finite Fields for Low Overhead Network Coding

    DEFF Research Database (Denmark)

    Heide, Janus; Roetter, Daniel Enrique Lucani

    2015-01-01

    Although Network Coding (NC) has been proven to increase throughput and reliability in communication networks, its adoption is typically hindered by the additional complexity it introduces at various nodes in the network and the overhead to signal the coding coefficients associated with each code...

  6. User's manual for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the User's Manual for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to make it easier to interpret moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, also can be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code then will interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data
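
The interpolation idea described above can be sketched as bilinear interpolation over a small benchmark grid. The grid values, variable names, and units below are invented for illustration; they are not TMAD's actual benchmark library.

```python
# Hypothetical sketch of the kind of interpolation TMAD performs over its
# benchmark library: detector response tabulated on a grid of anomaly size
# and moisture content, with bilinear interpolation between grid points.
# All grid values are invented for illustration.

def bilinear(x, y, xs, ys, table):
    """Interpolate table[i][j], where rows follow xs and columns follow ys."""
    i = max(k for k, v in enumerate(xs[:-1]) if v <= x)
    j = max(k for k, v in enumerate(ys[:-1]) if v <= y)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])   # fractional position in x cell
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])   # fractional position in y cell
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])

sizes = [1.0, 2.0]          # anomaly size (arbitrary units)
moisture = [0.0, 10.0]      # moisture content (wt%)
response = [[10.0, 30.0],   # detector response at each grid point
            [20.0, 40.0]]
print(bilinear(1.5, 5.0, sizes, moisture, response))  # 25.0
```

In TMAD the problem is inverted: measured detector responses are given, and the code searches the library for the anomaly-type/size/moisture combination whose tabulated responses best match them.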

  7. Applications of American design codes for elevated temperature environment

    International Nuclear Information System (INIS)

    Severud, L.K.

    1980-03-01

    A brief summary of the ASME Code rules of Case N-47 is presented. An overview of the typical procedure used to demonstrate Code compliance is provided. Application experience and some examples of detailed inelastic analysis and simplified-approximate methods are given. Recent developments and future trends in design criteria and ASME Code rules are also presented

  8. Underworld - Bringing a Research Code to the Classroom

    Science.gov (United States)

    Moresi, L. N.; Mansour, J.; Giordani, J.; Farrington, R.; Kaluza, O.; Quenette, S.; Woodcock, R.; Squire, G.

    2017-12-01

While there are many reasons to celebrate the passing of punch card programming and flickering green screens, the loss of the sense of wonder at the very existence of computers and the calculations they make possible should not be numbered among them. Computers have become so familiar that students are often unaware that formal and careful design of algorithms and their implementations remains a valuable and important skill that has to be learned and practiced to achieve expertise and genuine understanding. In teaching geodynamics and geophysics at undergraduate level, we aimed to be able to bring our research tools into the classroom - even when those tools are advanced, parallel research codes that we typically deploy on hundreds or thousands of processors, and we wanted to teach not just the physical concepts that are modelled by these codes but a sense of familiarity with computational modelling and the ability to discriminate a reliable model from a poor one. The underworld code (www.underworldcode.org) was developed for modelling plate-scale fluid mechanics and studying problems in lithosphere dynamics. Though specialised for this task, underworld has a straightforward python user interface that allows it to run within the environment of jupyter notebooks on a laptop (at modest resolution, of course). The python interface was developed for adaptability in addressing new research problems, but also lends itself to integration into a python-driven learning environment. To manage the heavy demands of installing and running underworld in a teaching laboratory, we have developed a workflow in which we install docker containers in the cloud which support a number of students to run their own environment independently. We share our experience blending notebooks and static webpages into a single web environment, and we explain how we designed our graphics and analysis tools to allow notebook "scripts" to be queued and run on a supercomputer.

  9. An introduction to LIME 1.0 and its use in coupling codes for multiphysics simulations.

    Energy Technology Data Exchange (ETDEWEB)

    Belcourt, Noel; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-11-01

    LIME is a small software package for creating multiphysics simulation codes. The name was formed as an acronym denoting 'Lightweight Integrating Multiphysics Environment for coupling codes.' LIME is intended to be especially useful when separate computer codes (which may be written in any standard computer language) already exist to solve different parts of a multiphysics problem. LIME provides the key high-level software (written in C++), a well defined approach (with example templates), and interface requirements to enable the assembly of multiple physics codes into a single coupled-multiphysics simulation code. In this report we introduce important software design characteristics of LIME, describe key components of a typical multiphysics application that might be created using LIME, and provide basic examples of its use - including the customized software that must be written by a user. We also describe the types of modifications that may be needed to individual physics codes in order for them to be incorporated into a LIME-based multiphysics application.
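
The coupling pattern described above can be sketched as a driver that iterates two wrapped physics codes to a self-consistent state. The interface, class names, and toy problem below are hypothetical illustrations, not LIME's actual C++ API.

```python
# Minimal sketch of the multiphysics coupling pattern LIME supports: each
# existing physics code sits behind a small wrapper, and a driver iterates
# them to a self-consistent solution. Names and the toy problem are invented.

class PhysicsWrapper:
    """Wraps an existing code: takes the other code's field, returns its own."""
    def __init__(self, update):
        self.update = update

def picard_couple(a, b, x0, tol=1e-10, max_iter=100):
    """Fixed-point (Picard) iteration between two coupled codes."""
    x = x0
    for _ in range(max_iter):
        y = a.update(x)       # e.g. thermal code: temperature from power
        x_new = b.update(y)   # e.g. neutronics code: power from temperature
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("coupling did not converge")

# Toy coupled problem: y = x/2 and x = y/2 + 1, whose solution is x = 4/3.
a = PhysicsWrapper(lambda x: 0.5 * x)
b = PhysicsWrapper(lambda y: 0.5 * y + 1.0)
print(picard_couple(a, b, x0=0.0))
```

A real LIME application would replace the lambdas with calls into compiled physics codes (in any language) and would typically offer stronger coupling algorithms than plain Picard iteration.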

  10. Detailed resonance absorption calculations with the Monte Carlo code MCNP and collision probability version of the slowing down code ROLAIDS

    International Nuclear Information System (INIS)

    Kruijf, W.J.M. de; Janssen, A.J.

    1994-01-01

Very accurate Monte Carlo calculations with the MCNP code have been performed to serve as a reference for benchmark calculations on resonance absorption by U 238 in a typical PWR pin-cell geometry. Calculations show that the energy-pointwise slowing-down code ROLAIDS calculates the resonance absorption accurately. Calculations with the multigroup discrete-ordinates code XSDRN show that accurate results can only be achieved with a very fine energy mesh. (authors). 9 refs., 5 figs., 2 tabs

  11. DATA-POOL : a direct-access data base for large-scale nuclear codes

    International Nuclear Information System (INIS)

    Yamano, Naoki; Koyama, Kinji; Naito, Yoshitaka; Minami, Kazuyoshi.

    1991-12-01

A direct-access data base, DATA-POOL, has been developed for large-scale nuclear codes. Data can be stored and retrieved by specifying simple node names, using the DATA-POOL access package written in the FORTRAN 77 language. A management utility, POOL, for the DATA-POOL is also provided. A typical application of the DATA-POOL to the RADHEAT-V4 code system, developed for performing safety analyses of radiation shielding, is shown. Many examples and error messages are also given to help apply the DATA-POOL to other code systems. This report serves as a manual for DATA-POOL. (author)

  12. Adaptable Value-Set Analysis for Low-Level Code

    OpenAIRE

    Brauer, Jörg; Hansen, René Rydhof; Kowalewski, Stefan; Larsen, Kim G.; Olesen, Mads Chr.

    2012-01-01

    This paper presents a framework for binary code analysis that uses only SAT-based algorithms. Within the framework, incremental SAT solving is used to perform a form of weakly relational value-set analysis in a novel way, connecting the expressiveness of the value sets to computational complexity. Another key feature of our framework is that it translates the semantics of binary code into an intermediate representation. This allows for a straightforward translation of the program semantics in...

  13. On the Feasibility of a Network Coded Mobile Storage Cloud

    DEFF Research Database (Denmark)

    Sipos, Marton A.; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2015-01-01

    Conventional cloud storage services offer relatively good reliability and performance in a cost-effective manner. However, they are typically structured in a centralized and highly controlled fashion. In more dynamic storage scenarios, these centralized approaches are unfeasible and developing...... to provide an effective and flexible erasure correcting code. This paper identifies and answers key questions regarding the feasibility of such a system. We show that the mobile cloud has sufficient network resources to adapt to changes in node numbers and also study the redundancy level needed to maintain...... data availability. We have found that as little as 75% redundancy is enough to offer 99.28% availability for the examined period and essentially 100% availability is achieved when using 50% redundancy along with high-availability nodes. We have leveraged traces from a popular P2P mobile application...
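
The redundancy/availability trade-off discussed above can be illustrated with a k-of-n erasure code, under which data survives if at least k of n stored fragments are on reachable nodes. The node-availability value and the (n, k) choice below are illustrative assumptions, not the paper's measured trace data.

```python
import math

# Availability of k-of-n erasure-coded data over independent nodes:
# P(at least k of n fragments reachable), each fragment available with
# probability p. The numbers below are illustrative, not the paper's traces.

def availability(n, k, p):
    """P(at least k of n independent fragments are available)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 75% redundancy in this sense: 3 extra fragments on top of k = 4, so n = 7.
print(round(availability(7, 4, 0.9), 4))  # 0.9973
```

Higher per-node availability or more redundancy pushes this figure toward 1, which matches the qualitative finding that modest redundancy combined with high-availability nodes suffices.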

  14. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can be a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these; the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks are included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer
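
The embedded-benchmark idea described above can be sketched as a small harness that re-checks stored reference results on every code release. The benchmark names, reference values, and tolerances below are invented for illustration; they are not MAAP4's actual benchmark set.

```python
# Sketch of "dynamic benchmarking": reference results for key transients are
# stored with the code and re-verified whenever the code is upgraded.
# Benchmark names, values, and tolerances are invented for illustration.

BENCHMARKS = {
    "plant_transient_peak_pressure": (15.51, 0.02),   # (reference, rel. tol.)
    "integral_test_peak_clad_temp": (1180.0, 0.02),
}

def check_benchmarks(run_case):
    """run_case maps a benchmark name to the current code's result."""
    failures = []
    for name, (ref, tol) in BENCHMARKS.items():
        value = run_case(name)
        if abs(value - ref) > tol * abs(ref):
            failures.append(name)
    return failures

# Stand-in for exercising the full code on each case; the second result has
# drifted outside its tolerance band and should be flagged.
results = {"plant_transient_peak_pressure": 15.50,
           "integral_test_peak_clad_temp": 1250.0}
print(check_benchmarks(results.get))
```

In the real code the `run_case` step exercises the entire simulation on the stored plant parameter file or experiment input, rather than looking up a canned number.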

  15. Low-level waste shallow burial assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Little, C.A.; Emerson, C.J.

    1981-01-01

PRESTO (Prediction of Radiation Exposures from Shallow Trench Operations) is a computer code developed under United States Environmental Protection Agency funding to evaluate possible health effects from radionuclide releases from shallow, radioactive-waste disposal trenches and from areas contaminated with operational spillage. The model is intended to predict radionuclide transport and the ensuing exposure and health impact to a stable, local population for a 1000-year period following closure of the burial grounds. Several classes of submodels are used in PRESTO to represent scheduled events, unit system responses, and risk evaluation processes. The code is modular to permit future expansion and refinement. Near-surface transport mechanisms considered in the PRESTO code are cap failure, cap erosion, farming or reclamation practices, human intrusion, chemical exchange within an active surface soil layer, contamination from trench overflow, and dilution by surface streams. Subsurface processes include infiltration and drainage into the trench, the ensuing solubilization of radionuclides, and chemical exchange between trench water and buried solids. Mechanisms leading to contaminated outflow include trench overflow and downward vertical percolation. If the latter outflow reaches an aquifer, radiological exposure from irrigation or domestic consumption is considered. Airborne exposure terms are evaluated using the Gaussian plume atmospheric transport formulation as implemented by Fields and Miller
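
The airborne pathway above uses the standard Gaussian plume formulation. A minimal ground-reflection form is sketched here; PRESTO's actual implementation (per Fields and Miller) and its dispersion-parameter scheme are more elaborate, so sigma_y and sigma_z are simply taken as inputs.

```python
import math

# Standard Gaussian plume air concentration with ground reflection.
# sigma_y, sigma_z would normally come from stability-class correlations;
# here they are direct inputs, and all numeric values are illustrative.

def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
    """Air concentration (e.g. Bq/m^3) at crosswind offset y, height z.

    Q: release rate (Bq/s), u: wind speed (m/s), H: release height (m).
    """
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # ground image
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level receptor and ground-level release:
c = plume_concentration(Q=1.0, u=2.0, sigma_y=10.0, sigma_z=5.0,
                        y=0.0, z=0.0, H=0.0)
print(c)
```

For a ground-level centerline receptor and ground-level release, the two reflection terms coincide, giving C = Q/(π u σy σz) as a quick sanity check.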

  16. Enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding for four-level holographic data storage systems

    Science.gov (United States)

    Kong, Gyuyeol; Choi, Sooyong

    2017-09-01

An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing the worst two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code achieves more than 1 dB of gain compared with the conventional four-ary modulation code.

  17. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

Full Text Available Network coding makes it possible to exploit the spatial diversity naturally present in mobile wireless networks and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it in a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretic analysis of this approach and of the one typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  18. A one-and-a-quarter-dimensional transport code for field-reversed configuration studies: A user's guide for CFRX

    International Nuclear Information System (INIS)

    Hsiao, Ming-Yuan; Werley, K.A.; Ling, Kuok-Mee.

    1988-05-01

    A one-and-a-quarter-dimensional transport code, which includes radial as well as some two-dimensional effects for field-reversed configurations, is described. The set of transport equations is transformed to a set of new independent and dependent variables and is solved as a coupled initial-boundary value problem. The code simulation includes both the closed and open field regions. The axial effects incorporated include global axial force balance, axial losses in the open field region, and flux surface averaging over the closed field region. Input, output, and structure of the code are described in detail. A typical example of the code results is also given. 20 refs., 21 figs., 7 tabs

  19. A PC version of the Monte Carlo criticality code OMEGA

    International Nuclear Information System (INIS)

    Seifert, E.

    1996-05-01

A description of the PC version of the Monte Carlo criticality code OMEGA is given. The report contains a general description of the code together with a detailed input description. Furthermore, some examples are given illustrating the generation of an input file. The main field of application is the calculation of the criticality of arrangements of fissionable material. Geometrically complicated arrangements that often appear inside and outside a reactor, e.g. in a fuel storage or transport container, can be considered essentially without geometrical approximations. For example, the real geometry of assemblies containing hexagonal or square lattice structures can be described in full detail. Moreover, the code can be used for special investigations in the field of reactor physics and neutron transport. Many years of practical experience and comparison with reference cases have shown that the code together with the built-in data libraries gives reliable results. OMEGA is completely independent of other widely used criticality codes (KENO, MCNP, etc.) with respect to both programming and the data base. It is good practice to run difficult criticality safety problems with different independent codes in order to mutually verify the results. In this way, OMEGA can be used as a redundant code within the family of criticality codes. An advantage of OMEGA is the short calculation time: a typical criticality safety application takes only a few minutes on a Pentium PC. Therefore, the influence of parameter variations can simply be investigated by running many variants of a problem. (orig.)
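
The Monte Carlo flavor of such criticality calculations can be conveyed with a toy estimate of k-infinity for an infinite homogeneous medium. This is only a sketch of the sampling idea; a production code such as OMEGA tracks full geometry and continuous-energy data. The cross sections and nu below are invented for illustration.

```python
import random

# Toy analog Monte Carlo estimate of k-infinity: in an infinite homogeneous
# medium, k_inf = nu * P(an absorption ends in fission). We estimate that
# probability by sampling. Cross sections and nu are invented values.

def k_infinity(sigma_f, sigma_c, nu, histories, seed=1):
    """Estimate k_inf = nu * sigma_f / (sigma_f + sigma_c) by sampling."""
    rng = random.Random(seed)                   # fixed seed: reproducible run
    p_fission = sigma_f / (sigma_f + sigma_c)   # fission share of absorptions
    fissions = sum(1 for _ in range(histories) if rng.random() < p_fission)
    return nu * fissions / histories

k = k_infinity(sigma_f=0.05, sigma_c=0.07, nu=2.43, histories=100_000)
print(round(k, 3))  # close to the analytic 2.43 * 0.05 / 0.12 ≈ 1.013
```

The statistical uncertainty shrinks as 1/sqrt(histories), which is why short run times per history, as advertised for OMEGA on a PC, matter in practice.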

  20. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    International Nuclear Information System (INIS)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, 'The improvement of level-1 PSA computer codes', is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for an over-draining event during low RCS water inventory, i.e. mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is use of the containment spray system as a backup to the shutdown cooling system, and the other is installation of two independent level indication systems. The procedure change is identified as preferable to the hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve the efficiency of performing PSA, a fast cut set generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author)
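
The cut set generation mentioned here can be illustrated with the classical MOCUS-style top-down expansion. This sketch is a generic textbook version, not KIRAP's fast algorithm, and it does not handle the logical loops discussed above:

```python
def minimal_cut_sets(gates, top):
    """MOCUS-style expansion of a fault tree into minimal cut sets.
    `gates` maps a gate name to ("AND"|"OR", [children]); any other
    name is a basic event.  OR gates split a row into several rows,
    AND gates extend a row in place."""
    rows = [[top]]
    done = False
    while not done:
        done = True
        new_rows = []
        for row in rows:
            gate = next((x for x in row if x in gates), None)
            if gate is None:
                new_rows.append(row)
                continue
            done = False
            typ, kids = gates[gate]
            rest = [x for x in row if x != gate]
            if typ == "AND":
                new_rows.append(rest + kids)
            else:  # OR: one new row per child
                new_rows.extend(rest + [k] for k in kids)
        rows = new_rows
    cuts = {frozenset(r) for r in rows}
    # Keep only minimal sets (drop any cut set containing another).
    return sorted((sorted(c) for c in cuts
                   if not any(o < c for o in cuts)),
                  key=lambda c: (len(c), c))

tree = {"TOP": ("OR", ["G1", "A"]),
        "G1": ("AND", ["B", "C"])}
# minimal_cut_sets(tree, "TOP") -> [["A"], ["B", "C"]]
```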

  1. Research on the improvement of nuclear safety -Improvement of level 1 PSA computer code package-

    Energy Technology Data Exchange (ETDEWEB)

    Park, Chang Kyoo; Kim, Tae Woon; Kim, Kil Yoo; Han, Sang Hoon; Jung, Won Dae; Jang, Seung Chul; Yang, Joon Un; Choi, Yung; Sung, Tae Yong; Son, Yung Suk; Park, Won Suk; Jung, Kwang Sub; Kang Dae Il; Park, Jin Heui; Hwang, Mi Jung; Hah, Jae Joo [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    This year is the third year of the Government-sponsored mid- and long-term nuclear power technology development project. The scope of this subproject, `The improvement of level-1 PSA computer codes`, is divided into three main activities: (1) methodology development in underdeveloped fields such as risk assessment technology for plant shutdown and low-power situations, (2) computer code package development for level-1 PSA, and (3) application of new technologies to reactor safety assessment. First, in the area of shutdown risk assessment technology development, plant outage experiences of domestic plants are reviewed and plant operating states (POS) are defined. A sample core damage frequency is estimated for an over-draining event during low RCS water inventory, i.e. mid-loop operation. Human reliability analysis and thermal-hydraulic support analysis are identified as needed to reduce uncertainty. Two design improvement alternatives are evaluated using PSA techniques for the mid-loop operation situation: one is use of the containment spray system as a backup to the shutdown cooling system, and the other is installation of two independent level indication systems. The procedure change is identified as preferable to the hardware modification from the core damage frequency point of view. Next, the level-1 PSA code KIRAP is converted to the PC Windows environment. To improve the efficiency of performing PSA, a fast cut set generation algorithm and an analytical technique for handling logical loops in fault tree modeling are developed. 48 figs, 15 tabs, 59 refs. (Author).

  2. Overview of the geochemical code MINTEQ: applications to performance assessment for low-level wastes

    International Nuclear Information System (INIS)

    Graham, M.J.; Peterson, S.R.

    1985-09-01

    The MINTEQ geochemical computer code, developed at Pacific Northwest Laboratory, integrates many of the capabilities of its two immediate predecessors, WATEQ3 and MINEQL. MINTEQ can be used to perform the calculations necessary to simulate (model) the contact of low-level waste solutions with heterogeneous sediments or the interaction of ground water with solidified low-level wastes. The code is capable of performing calculations of ion speciation/solubility, adsorption, oxidation-reduction, gas phase equilibria, and precipitation/dissolution of solid phases. Under the Special Waste Form Lysimeters-Arid program, the composition of effluents (leachates) from column and batch experiments, using laboratory-scale waste forms, will be used to develop a geochemical model of the interaction of ground water with commercial solidified low-level wastes. The wastes being evaluated include power reactor waste streams that have been solidified in cement, vinyl ester-styrene, and bitumen. The thermodynamic database for the code is being upgraded before the geochemical modeling is performed. Thermodynamic data for cobalt, antimony, cerium, and cesium solid phases and aqueous species are being added to the database. The need to add these data was identified from the characterization of the waste streams. The geochemical model developed from the laboratory data will then be applied to predict the release from a field-lysimeter facility that contains full-scale waste samples. The contaminant concentrations migrating from the wastes predicted using MINTEQ will be compared to the long-term lysimeter data. This comparison will constitute a partial field validation of the geochemical model. 28 refs

  3. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
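
The workflow described (inputs in, input file written, external code run, outputs read back) can be sketched in Python. The file names, the command template, and the one-line "external code" below are all made up for illustration; the real DLL drives this from an instructions file:

```python
import pathlib
import subprocess
import sys
import tempfile

def run_external(inputs, command_template):
    """Sketch of the DLLExternalCode pattern: write the inputs to an
    input file, run the external application, and read its outputs
    back.  File layout and command format are illustrative only."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp = pathlib.Path(tmp)
        infile, outfile = tmp / "ext.in", tmp / "ext.out"
        infile.write_text("\n".join(str(x) for x in inputs))
        cmd = [part.format(infile=infile, outfile=outfile)
               for part in command_template]
        subprocess.run(cmd, check=True)
        return [float(line) for line in outfile.read_text().split()]

# Stand-in "external code": a one-line script that doubles each input.
doubler = ("import sys,pathlib;"
           "v=[float(x) for x in pathlib.Path(sys.argv[1]).read_text().split()];"
           "pathlib.Path(sys.argv[2]).write_text('\\n'.join(str(2*x) for x in v))")
outputs = run_external([1.0, 2.5],
                       [sys.executable, "-c", doubler, "{infile}", "{outfile}"])
# outputs == [2.0, 5.0]
```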

  4. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies this hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
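
Unique decipherability of a finite code, the baseline notion this paper generalizes, can be tested with the classical Sardinas-Patterson procedure; a compact sketch:

```python
def is_uniquely_decipherable(code):
    """Sardinas-Patterson test: a finite code is uniquely decipherable
    (UD) iff no set of dangling suffixes ever contains a codeword."""
    code = set(code)
    # Dangling suffixes left when one codeword is a prefix of another.
    current = {u[len(v):] for u in code for v in code
               if u != v and u.startswith(v)}
    seen = set()
    while current:
        if current & code:   # a dangling suffix is a codeword: not UD
            return False
        seen |= current
        nxt = set()
        for d in current:
            for c in code:
                if c.startswith(d):   # d followed by w gives codeword c
                    nxt.add(c[len(d):])
                if d.startswith(c):   # codeword c followed by w gives d
                    nxt.add(d[len(c):])
        nxt.discard("")
        current = nxt - seen
    return True

# is_uniquely_decipherable(["0", "10", "11"]) -> True   (prefix code)
# is_uniquely_decipherable(["a", "ab", "ba"]) -> False  ("aba" is ambiguous)
```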

  5. Introduction of gadolinium in the library of Leopard code

    International Nuclear Information System (INIS)

    Claro, L.H.; Menezes, A.

    1989-12-01

    The materials Gd-154, Gd-155, Gd-156 and Gd-157 were included in the LEOPARD code library at the request of FURNAS Centrais Eletricas S.A. Results of a comparison between the LEOPARD and WIMSD/4 codes for a typical cell with 7 burnup steps are presented. (author) [pt

  6. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code, written in the MS Visual Basic programming language and running in the Windows environment of a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake and plume rise if necessary. A user can also easily interpret the concentration distribution on a map around the plant site as well as the output files. In addition, methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author).
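
The flat-terrain baseline such dispersion codes start from is the standard Gaussian plume formula with ground reflection; this generic textbook model (not the improved terrain-aware model developed here) reads:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration at a receptor
    (y: crosswind offset, z: height above ground), for source strength
    q (g/s), wind speed u (m/s), effective release height h (m), and
    dispersion parameters sigma_y, sigma_z (m) evaluated at the
    receptor's downwind distance."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

For a ground-level source and receptor (h = y = z = 0) this reduces to q / (pi * u * sigma_y * sigma_z); the second exponential is the image-source term that models reflection at the ground.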

  7. Research on the improvement of nuclear safety -Development of computing code system for level 3 PSA

    International Nuclear Information System (INIS)

    Jeong, Jong Tae; Kim, Dong Ha; Park, Won Seok; Hwang, Mi Jeong

    1995-07-01

    Among the various research areas of level 3 PSA, the effect of terrain on the transport of radioactive material was investigated. These results give physical insight for the development of a new dispersion model. A wind tunnel experiment with a bell-shaped hill model was performed in order to develop a new dispersion model, and an improved dispersion model was developed based on the concentration distribution data obtained from the wind tunnel experiment. This model will be added as an option to the atmospheric dispersion code. A stand-alone atmospheric code, written in the MS Visual Basic programming language and running in the Windows environment of a PC, was developed. A user can easily select a necessary data file and type input data by clicking menus, and can select calculation options such as building wake and plume rise if necessary. A user can also easily interpret the concentration distribution on a map around the plant site as well as the output files. In addition, methodologies for the estimation of radiation exposure and for the calculation of risks were established. These methodologies will be used for the development of modules for radiation exposure and risks, respectively. These modules will be developed independently and finally combined with the atmospheric dispersion code in order to develop a level 3 PSA code. 30 tabs., 56 figs., refs. (Author)

  8. Cracking the code: the accuracy of coding shoulder procedures and the repercussions.

    Science.gov (United States)

    Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M

    2013-05-01

    Coding of patients' diagnoses and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors for shoulder diagnoses and surgical procedures, and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that an entirely correct primary diagnosis had been assigned in only 54 patients (54%), and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2) and the correct procedure code (odds ratio 310.0) than the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.

  9. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near-neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth, nonconvex) problem is reformulated as minimizing the sum of a smooth loss term and a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, and is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
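
The core idea of handling the discrete constraint directly, rather than relaxing it, can be shown on a toy problem: take a gradient step on the smooth loss, then snap back onto {-1,+1}^n, which is the proximal map of the binary indicator function. This sketch mimics the spirit of DPLM on a trivial separable loss, not the paper's actual formulation:

```python
def dplm_toy(grad, b0, step=0.5, iters=50):
    """Toy discrete proximal iteration: gradient step on the smooth
    loss, then projection onto the binary set {-1, +1}^n via sign()."""
    b = list(b0)
    for _ in range(iters):
        g = grad(b)
        b = [1.0 if (bi - step * gi) >= 0 else -1.0
             for bi, gi in zip(b, g)]
    return b

# Smooth loss ||b - v||^2 has gradient 2*(b - v); the best binary
# code is simply the sign pattern of v.
v = [0.3, -1.2, 2.0, -0.1]
grad = lambda b: [2.0 * (bi - vi) for bi, vi in zip(b, v)]
b = dplm_toy(grad, [-1.0, 1.0, -1.0, 1.0])
# b == [1.0, -1.0, 1.0, -1.0]
```

With step = 0.5 each iterate lands exactly on sign(v), so the toy converges in one step; the point is only to show the discrete projection replacing a continuous relaxation.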

  10. N-acetylaspartate (NAA) levels in selected areas of the brain in patients with chronic schizophrenia treated with typical and atypical neuroleptics: a proton magnetic resonance spectroscopy (1H MRS) study.

    Science.gov (United States)

    Szulc, Agata; Galińska, Beata; Tarasów, Eugeniusz; Kubas, Bozena; Dzienis, Wojciech; Konarzewska, Beata; Poplawska, Regina; Tomczak, Anna A; Czernikiewicz, Andrzej; Walecki, Jerzy

    2007-05-01

    NAA, a marker of neuronal integrity and viability, is one of the most important brain metabolites visible in 1H MRS. In most studies of schizophrenia, a decrease of the NAA level was observed in the temporal and frontal lobes and in the thalamus. This finding was observed more often among chronic patients, which suggests an influence of disease duration or an effect of neuroleptic treatment. The aim of the present study was the comparison of NAA levels in the brains of schizophrenic patients taking typical and atypical neuroleptics. We analyzed the NAA levels in selected brain areas in 58 schizophrenic patients and 21 healthy controls. Ten patients were treated with typical neuroleptics, 10 patients with clozapine, 17 received olanzapine and 21 risperidone. 1H MRS was performed on a 1.5-T MR scanner with a PRESS sequence. Voxels of 2x2x2 cm were localized in the left frontal lobe, left temporal lobe and left thalamus. There were no differences in NAA levels between patients on typical and atypical medications, analyzed together or separately (olanzapine, clozapine and risperidone groups). We also did not find any differences between patients taking selected atypical neuroleptics and controls. The NAA level in the thalamus in the group of patients receiving typical antipsychotics was the lowest among all groups and differed significantly from healthy controls. The results of our study suggest that atypical neuroleptics may have a favorable effect on the NAA concentration in the brains of schizophrenic patients. The decrease in NAA level in patients taking typical medication may be caused by the progression of the disease or by the direct action of these drugs.

  11. Coding training for medical students: How good is diagnoses coding with ICD-10 by novices?

    Directory of Open Access Journals (Sweden)

    Stausberg, Jürgen

    2005-04-01

    Teaching of knowledge and competence in documentation and coding is an essential part of medical education. Therefore, coding training has been placed within the course of epidemiology, medical biometry, and medical informatics. From this, we can draw conclusions about the quality of coding by novices. One hundred and eighteen students coded diagnoses from 15 nephrological cases as homework. In addition to interrater reliability, validity was calculated by comparison with a reference coding. On the level of terminal codes, 59.3% of the students' results were correct. The completeness was calculated as 58.0%. The results on the chapter level increased to 91.5% and 87.7%, respectively. For the calculation of reliability a new, simple measure was developed that leads to values of 0.46 on the level of terminal codes and 0.87 on the chapter level for interrater reliability. The figures of concordance with the reference coding are quite similar. By contrast, routine data show considerably lower results, with 0.34 and 0.63 respectively. Interrater reliability and validity of coding by novices is as good as coding by experts. The missing advantage of experts could be explained by the workload of documentation and a negative attitude toward coding on the one hand. On the other hand, coding in a DRG system is handicapped by a large number of detailed coding rules, which do not yield uniform results but rather lead to wrong and random codes. In any case, students left the course well prepared for coding.
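
A standard way to chance-correct the interrater agreement discussed here is Cohen's kappa (the paper defines its own simpler measure); a minimal implementation for two coders:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders over the same
    cases: (observed - expected) / (1 - expected), where the expected
    agreement comes from each coder's marginal code frequencies."""
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    ca, cb = Counter(codes_a), Counter(codes_b)
    expected = sum(ca[k] * cb[k] for k in ca) / n**2
    return (observed - expected) / (1 - expected)
```

Plain percent agreement would be the `observed` term alone; kappa discounts the agreement two coders would reach by guessing from their marginal distributions.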

  12. On the Performance of the Cache Coding Protocol

    DEFF Research Database (Denmark)

    Maboudi, Behnaz; Sehat, Hadi; Pahlevani, Peyman

    2018-01-01

    Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these…

  13. Error-Rate Bounds for Coded PPM on a Poisson Channel

    Science.gov (United States)

    Moision, Bruce; Hamkins, Jon

    2009-01-01

    Equations for computing tight bounds on error rates for coded pulse-position modulation (PPM) on a Poisson channel at high signal-to-noise ratio have been derived. These equations and elements of the underlying theory are expected to be especially useful in designing codes for PPM optical communication systems. The equations and the underlying theory apply, more specifically, to a case in which a) At the transmitter, a linear outer code is concatenated with an inner code that includes an accumulator and a bit-to-PPM-symbol mapping (see figure) [this concatenation is known in the art as "accumulate-PPM" (abbreviated "APPM")]; b) The transmitted signal propagates on a memoryless binary-input Poisson channel; and c) At the receiver, near-maximum-likelihood (ML) decoding is effected through an iterative process. Such a coding/modulation/decoding scheme is a variation on the concept of turbo codes, which have complex structures, such that an exact analytical expression for the performance of a particular code is intractable. However, techniques for accurately estimating the performances of turbo codes have been developed. The performance of a typical turbo code includes (1) a "waterfall" region consisting of a steep decrease of error rate with increasing signal-to-noise ratio (SNR) at low to moderate SNR, and (2) an "error floor" region with a less steep decrease of error rate with increasing SNR at moderate to high SNR. The techniques used heretofore for estimating performance in the waterfall region have differed from those used for estimating performance in the error-floor region. For coded PPM, prior to the present derivations, equations for accurate prediction of the performance of coded PPM at high SNR did not exist, so that it was necessary to resort to time-consuming simulations in order to make such predictions. The present derivation makes it unnecessary to perform such time-consuming simulations.
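
Before such bounds were derived, performance at high SNR had to be estimated by simulation; the kind of Monte Carlo one would run for uncoded PPM on a Poisson channel looks like this (the photon means and trial counts are illustrative):

```python
import math
import random

def ppm_ser(m, ns, nb, trials=20_000, seed=7):
    """Monte Carlo symbol-error rate for uncoded M-ary PPM on a
    memoryless Poisson channel: the signal slot collects a Poisson
    count with mean ns + nb photons, the other m - 1 slots mean nb,
    and ML decoding picks the largest count (ties broken uniformly)."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication method; adequate for small means.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    errors = 0
    for _ in range(trials):
        counts = [poisson(ns + nb)] + [poisson(nb) for _ in range(m - 1)]
        best = max(counts)
        winners = [i for i, c in enumerate(counts) if c == best]
        if rng.choice(winners) != 0:
            errors += 1
    return errors / trials
```

With no signal (ns = 0) every slot is statistically identical and the error rate approaches (m - 1) / m; the weakness the bounds address is the opposite regime, where errors are so rare that reliable simulation becomes prohibitively slow.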

  14. What Are They Talking About? Analyzing Code Reviews in Pull-Based Development Model

    Institute of Scientific and Technical Information of China (English)

    Zhi-Xing Li; Yue Yu; Gang Yin; Tao Wang; Huai-Min Wang

    2017-01-01

    Code reviews in the pull-based model are open to community users on GitHub. Various participants take part in the review discussions, and the review topics concern not only the improvement of code contributions but also project evolution and social interaction. A comprehensive understanding of the review topics in the pull-based model would be useful to better organize the code review process and optimize review tasks such as reviewer recommendation and pull-request prioritization. In this paper, we first conduct a qualitative study on three popular open-source software projects hosted on GitHub and construct a fine-grained two-level taxonomy covering four level-1 categories (code correctness, pull-request decision-making, project management, and social interaction) and 11 level-2 subcategories (e.g., defect detecting, reviewer assigning, contribution encouraging). Second, we conduct a preliminary quantitative analysis on a large set of review comments that were labeled by TSHC (a two-stage hybrid classification algorithm), which is able to automatically classify review comments by combining rule-based and machine-learning techniques. Through the quantitative study, we explore the typical review patterns. We find that the three projects present a similar distribution of comments across the subcategories. Pull-requests submitted by inexperienced contributors tend to contain potential issues even though they have passed the tests. Furthermore, external contributors are more likely to break project conventions in their early contributions.

  15. Does a code make a difference – assessing the English code of practice on international recruitment

    Directory of Open Access Journals (Sweden)

    Mensah Kwadwo

    2009-04-01

    Background: This paper draws on research completed in 2007 to assess the effect of the Department of Health, England, Code of Practice for the international recruitment of health professionals. The Department of Health in England introduced a Code of Practice for international recruitment for National Health Service employers in 2001. The Code required National Health Service employers not to actively recruit from low-income countries unless there was a government-to-government agreement. The Code was updated in 2004. Methods: The paper examines trends in the inflow of health professionals to the United Kingdom from other countries, using professional registration data and data on applications for work permits. The paper also provides more detailed information from two country case studies, Ghana and Kenya. Results: Available data show a considerable reduction in the inflow of health professionals from the peak years up to 2002 (for nurses) and 2004 (for doctors). There are multiple causes for this decline, including declining demand in the United Kingdom. In Ghana and Kenya, active recruitment from the United Kingdom was perceived to have reduced significantly, but the extent to which the Code was influential in this, as opposed to other factors such as a lack of vacancies in the United Kingdom, is not clear. Conclusion: Active international recruitment of health professionals was an explicit policy intervention by the Department of Health in England, as one key element in achieving rapid staffing growth, particularly in the period 2000 to 2005, but the level of international recruitment has dropped significantly since early 2006. Regulatory and education changes in the United Kingdom in recent years have also made international entry more difficult. The potential to assess the effect of the Code in England is constrained by the limitations in available databases. This is a crucial lesson for those considering a

  16. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself from among the upper- and lower-level codes of the selected entry, which were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data-processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology
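
The two-stage lookup described (an organ table first, then a pathology table selected by the organ code's leading digit) can be sketched with dictionaries. The table entries below are hypothetical placeholders, not real ACR definitions:

```python
# Hypothetical dictionary fragments; the real ACR tables are far
# larger and these entries are placeholders, not ACR definitions.
ORGANS = {"131": "(organ entry for code 131)"}
PATHOLOGY = {"1": {"3661": "(pathology entry for code 3661)"}}

def acr_code(organ, pathology):
    """Compose an ACR-style 'organ.pathology' code, mimicking the
    described flow: validate the organ code, pick the pathology table
    by the organ code's first digit, then validate the pathology."""
    if organ not in ORGANS:
        raise KeyError(f"unknown organ code {organ!r}")
    table = PATHOLOGY.get(organ[0], {})
    if pathology not in table:
        raise KeyError(f"unknown pathology code {pathology!r}")
    return f"{organ}.{pathology}"

# acr_code("131", "3661") -> "131.3661"
```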

  17. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    Energy Technology Data Exchange (ETDEWEB)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J. [Purdue Univ., West Lafayette, IN (United States). Dept. of Nuclear Engineering; Wang, W. [SCIENTECH, Inc., Rockville, MD (United States); Mousseau, V.A.; Ebert, D.D. [Nuclear Regulatory Commission, Washington, DC (United States). Office of Nuclear Regulatory Research

    1999-03-01

    A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model.
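
The coupling pattern itself (kinetics supplies power, thermal-hydraulics returns temperature, the interface relays the fields each time step) can be mimicked with plain function calls; in the real module the two codes run as separate processes exchanging these fields over PVM. The feedback coefficients below are arbitrary toy numbers, not reactor data:

```python
def couple(kinetics, thermal_hydraulics, t_end, dt, state):
    """Fixed-point coupling loop: each step the kinetics solver sees
    the latest fuel temperature and returns power; thermal-hydraulics
    sees that power and returns the updated temperature."""
    power, temp = state
    for _ in range(int(round(t_end / dt))):
        power = kinetics(temp, dt)
        temp = thermal_hydraulics(power, dt)
    return power, temp

# Toy solvers with negative temperature feedback (arbitrary numbers).
kin = lambda temp, dt: 1000.0 - 0.5 * temp
th = lambda power, dt: 0.8 * power
p, T = couple(kin, th, t_end=2.0, dt=0.05, state=(1000.0, 800.0))
# The loop contracts toward the fixed point p = 1000 - 0.4 p,
# i.e. p -> 1000 / 1.4 and T -> 800 / 1.4.
```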

  18. A generalized interface module for the coupling of spatial kinetics and thermal-hydraulics codes

    International Nuclear Information System (INIS)

    Barber, D.A.; Miller, R.M.; Joo, H.G.; Downar, T.J.; Mousseau, V.A.; Ebert, D.D.

    1999-01-01

    A generalized interface module has been developed for the coupling of any thermal-hydraulics code to any spatial kinetics code. The coupling scheme was designed and implemented with emphasis placed on maximizing flexibility while minimizing modifications to the respective codes. In this design, the thermal-hydraulics, general interface, and spatial kinetics codes function independently and utilize the Parallel Virtual Machine software to manage cross-process communication. Using this interface, the USNRC version of the 3D neutron kinetics code, PARCS, has been coupled to the USNRC system analysis codes RELAP5 and TRAC-M. RELAP5/PARCS assessment results are presented for two NEACRP rod ejection benchmark problems and an NEA/OECD main steam line break benchmark problem. The assessment of TRAC-M/PARCS has only recently been initiated; nonetheless, the capabilities of the coupled code are presented for a typical PWR system/core model

  19. Design considerations for view interpolation in a 3D video coding framework

    NARCIS (Netherlands)

    Morvan, Y.; Farin, D.S.; With, de P.H.N.; Lagendijk, R.L.; Weber, Jos H.; Berg, van den A.F.M.

    2006-01-01

    A 3D video stream typically consists of a set of views simultaneously capturing the same scene. For efficient transmission of the 3D video, a compression technique is required. In this paper, we describe a coding architecture and appropriate algorithms that enable the compression and

  20. Maternal regulation of child affect in externalizing and typically-developing children.

    Science.gov (United States)

    Lougheed, Jessica P; Hollenstein, Tom; Lichtwarck-Aschoff, Anna; Granic, Isabela

    2015-02-01

    Temporal contingencies between children's affect and maternal behavior play a role in the development of children's externalizing problems. The goal of the current study was to use a microsocial approach to compare dyads with externalizing dysregulation (N = 191) to healthy controls (N = 54) on maternal supportive regulation of children's negative and positive affect. Children were between the ages of 8 and 12 years. Mother-child dyads participated in conflict and positive discussions, and child affect and maternal supportive affect regulation were coded in real time. First, no group differences in overall levels of maternal supportive regulation or child affect were found. Second, three event history analyses in a 2-level Cox hazard regression framework were used to predict the hazard rate of (a) maternal supportiveness, and of children's transitions (b) out of negative affect and (c) into positive affect. The hazard rate of maternal supportiveness, regardless of child affect, was not different between groups. However, as expected, the likelihood of mothers' supportive responses to children's negative affect was lower in externalizing than comparison dyads. In addition, children with externalizing problems were significantly less likely than typically developing children to transition out of negative affect in response to maternal supportiveness. The likelihood of either typically developing children or children with externalizing problems transitioning into positive affect was not related to specific occurrences of maternal supportiveness. Results of the current study show the importance of temporal dynamics in mother-child interactions in the emergence of children's externalizing problems. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  1. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection with lower computational complexity than the tTN code.
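Tornado-style codes build burst-erasure protection out of layered parity constraints. As a hedged illustration of the basic building block only (not the tTN or cTN construction itself), the sketch below shows how a single XOR parity packet lets a receiver recover any one erased data packet:

```python
# Minimal XOR-parity erasure recovery: the elementary constraint from
# which tornado-style codes are layered. Purely illustrative.

def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_packets(parity, p)
    return parity

def recover(received, parity):
    """received: list of packets with exactly one entry erased (None)."""
    missing = received.index(None)
    acc = parity
    for i, p in enumerate(received):
        if i != missing:
            acc = xor_packets(acc, p)
    return missing, acc

data = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(data)
idx, rec = recover([data[0], None, data[2]], parity)
assert (idx, rec) == (1, b"pkt1")
```

Real tornado codes layer many such sparse constraints so that decoding proceeds iteratively, always solving a constraint with a single remaining unknown.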

  2. Plutonium-239 production rate study using a typical fusion reactor

    International Nuclear Information System (INIS)

    Faghihi, F.; Havasi, H.; Amin-Mozafari, M.

    2008-01-01

The purpose of the present paper is to compute the fissile 239Pu produced by a supposed typical fusion reactor operation, to meet fuel requirements for other purposes (e.g. MOX fissile fuel, etc.). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with a neutron wall load of 10 MW/m2. Moreover, the reactor core is surrounded by six suggested blankets to make the best use of the physical conditions described herein. We determined the neutron flux in each considered blanket as well as tritium self-sufficiency using a two-group neutron energy treatment, with the computations performed by the MCNP-4C code. Finally, material depletion according to a set of coupled differential equations is solved to estimate the 239Pu production rate. The produced 239Pu is compared with that of two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by our suggested fusion reactor with a thermal power of 3000 MW. For comparison, the 239Pu yield of a suggested large-scale PWR is about 0.65%, and for an LMFBR it is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR fission reactor.
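The depletion step can be pictured with a minimal sketch of the kind of coupled rate equations involved: U-238 captures a neutron and, lumping the short-lived U-239/Np-239 intermediates, becomes Pu-239, which is itself consumed by neutron absorption. The flux and cross-section values below are illustrative placeholders, not data from the study:

```python
# Explicit Euler integration of a lumped two-nuclide depletion chain.
# All numbers are generic placeholders, not the paper's blanket data.

def deplete(n_u238, phi, sig_c_u238, sig_a_pu239, t_end, dt):
    """Integrate dN_U/dt = -sig_c*phi*N_U and
    dN_Pu/dt = sig_c*phi*N_U - sig_a*phi*N_Pu."""
    n_pu, t = 0.0, 0.0
    while t < t_end:
        prod = sig_c_u238 * phi * n_u238      # U-238 capture rate
        loss = sig_a_pu239 * phi * n_pu       # Pu-239 absorption rate
        n_u238 -= prod * dt
        n_pu += (prod - loss) * dt
        t += dt
    return n_u238, n_pu

barn = 1e-24                                  # cm^2
# one year at a placeholder thermal flux of 3e14 n/cm^2/s
u, pu = deplete(1.0, phi=3e14, sig_c_u238=2.7 * barn,
                sig_a_pu239=1000 * barn, t_end=3.15e7, dt=1e4)
print(f"U-238 remaining: {u:.4f}, Pu-239 inventory fraction: {pu:.4f}")
```

With these placeholder values the Pu-239 inventory reaches a quasi-equilibrium set by the ratio of its production and absorption rates.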

  3. Present status of transport code development based on Monte Carlo method

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki

    1985-01-01

The present status of development of Monte Carlo codes is briefly reviewed. The main items are the following: application fields; methods used in Monte Carlo codes (geometry specification, nuclear data, estimators and variance reduction techniques) and unfinished work; typical Monte Carlo codes; and the merits of continuous-energy Monte Carlo codes. (author)

  4. Application of the ASME-code-case N 47 to a typical thickwalled HTR-component made of Incoloy 800

    International Nuclear Information System (INIS)

    Kemter, F.; Schmidt, A.

Several components of the HTR plant are exposed to temperatures beyond 500 °C, i.e. within the high-temperature range. The service life of those components is limited not only by fatigue damage but mainly by creep damage and accumulated inelastic strain. These can be conservatively estimated according to the ASME Code (high-temperature part CC N47) by means of the results of elastic calculations, yet this simplified method of providing evidence often leads to calculated overloads, as in the present case of the live steam collector of the steam generator of an HTR. For providing the evidence that the actual loads of the component are within permissible limits, comprehensive inelastic analyses have to be referred to in such a case. The two-dimensional inelastic analysis which is reported here in detail shows that the creep and fatigue damage as well as the inelastic strains of the live steam collectors accumulated during the service time are below the permissible limits stated in the ASME Code, and failure of those components while used in the reactor can thus be excluded. (orig.) [de

  5. A GASFLOW analysis of a steam explosion accident in a typical light-water reactor confinement building

    International Nuclear Information System (INIS)

    Travis, J.R.; Wilson, T.L.; Spore, J.W.; Lam, K.L.; Rao, D.V.

    1994-01-01

    Steam over-pressurization resulting from ex-vessel steam explosion (fuel-coolant interaction) may pose a serious challenge to the integrity of a typical light-water reactor confinement building. If the steam generation rate exceeds the removal capacity of the Airborne Activity Confinement System, confinement overpressurization occurs. Thus, there is a large potential for an uncontrolled and unfiltered release of fission products from the confinement atmosphere to the environment at the time of the steam explosion. The GASFLOW computer code was used to analyze the effects of a hypothetical steam explosion and the transport of steam and hydrogen throughout a typical light-water reactor confinement building. The effects of rapid pressurization and the resulting forces on the internal structures and the heat exchanger service bay hatch covers were calculated. Pressurization of the ventilation system and the potential damage to the ventilation fans and high-efficiency particulate air filters were assessed. Because of buoyancy forces and the calculated confinement velocity field, the hydrogen diffuses and mixes in the confinement atmosphere but tends to be transported to its upper region. (author). 2 refs., 14 figs

  7. Coupling Legacy and Contemporary Deterministic Codes to Goldsim for Probabilistic Assessments of Potential Low-Level Waste Repository Sites

    Science.gov (United States)

    Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.

    2006-12-01

Sandia National Laboratories (Sandia), a U.S. Department of Energy National Laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system that utilizes a combination of commercially available codes and existing legacy codes for probabilistic safety assessment modeling that facilitates the technology transfer and maximizes limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after addressing import/export control and copyright requirements. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software which meets the rigors of both domestic regulatory requirements and international peer review. Therefore, re-vitalization of deterministic legacy codes, as well as an adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in Goldsim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer

  8. Four-D propagation code for high-energy laser beams: a user's manual

    Energy Technology Data Exchange (ETDEWEB)

    Morris, J.R.

    1976-08-05

    This manual describes the use and structure of the June 30, 1976 version of the Four-D propagation code for high energy laser beams. It provides selected sample output from a typical run and from several debug runs. The Four-D code now includes the important noncoplanar scenario feature. Many problems that required excessive computer time can now be meaningfully simulated as steady-state noncoplanar problems with short run times.

  9. State of art in FE-based fuel performance codes

    International Nuclear Information System (INIS)

    Kim, Hyo Chan; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2013-01-01

Fuel performance codes approximate this complex behavior using an axisymmetric, axially-stacked, one-dimensional radial representation to save computation cost. However, the need for improved modeling of PCMI and, particularly, the importance of multidimensional capability for accurate fuel performance simulation have been identified as safety margins decrease. The finite element (FE) method, a reliable and proven solution in the mechanical field, has been introduced into fuel performance codes for multidimensional analysis. The present state of the art in FE-based fuel performance simulation predominantly involves 2-D axisymmetric models and 3-D volumetric models. FRAPCON and FRAPTRAN include 1.5-D and 2-D FE models to simulate PCMI and cladding ballooning. In 2-D simulation, the FALCON code, developed by EPRI, is a 2-D (R-Z and R-θ) fully thermal-mechanically coupled steady-state and transient FE-based fuel behavior code. The French codes TOUTATIS and ALCYONE are 3-D and are typically used to investigate localized behavior. Since 2008, the Idaho National Laboratory (INL) has been developing a multidimensional (2-D and 3-D) nuclear fuel performance code called BISON. In this paper, the current state of FE-based fuel performance codes and their models is presented. Based on this investigation and a comparison of the models in each code, the requirements and direction of development for a new FE-based fuel performance code can be discussed. A new FE-based fuel performance code should include the typical pellet and cladding models that all codes share. In particular, specialized pellet and cladding models such as gaseous swelling and high burnup structure (HBS) models should be developed to improve the accuracy of the code as well as to consider AC conditions. To reduce computation cost, an approximated gap model and an optimized contact model should also be developed

  10. NRC model simulations in support of the hydrologic code intercomparison study (HYDROCOIN): Level 1-code verification

    International Nuclear Information System (INIS)

    1988-03-01

HYDROCOIN is an international study for examining ground-water flow modeling strategies and their influence on safety assessments of geologic repositories for nuclear waste. This report summarizes only the combined NRC project teams' simulation efforts on the computer code bench-marking problems. The codes used to simulate these seven problems were SWIFT II, FEMWATER, UNSAT2M, USGS-3D, and TOUGH. In general, linear problems involving scalars such as hydraulic head were accurately simulated by both finite-difference and finite-element solution algorithms. Both types of codes produced accurate results even for complex geometries such as intersecting fractures. Difficulties were encountered in solving problems that involved nonlinear effects such as density-driven flow and unsaturated flow. In order to fully evaluate the accuracy of these codes, post-processing of results using particle tracking algorithms and flux calculations was examined. This proved very valuable by uncovering disagreements among code results even though the hydraulic-head solutions had been in agreement. 9 refs., 111 figs., 6 tabs

  11. A bar coding system for environmental projects

    International Nuclear Information System (INIS)

    Barber, R.B.; Hunt, B.J.; Burgess, G.M.

    1988-01-01

This paper presents BeCode systems, a bar coding system which provides both nuclear and commercial clients with a data capture and custody management program that is accurate, timely, and beneficial to all levels of project operations. Using bar code identifiers is an essentially paperless and error-free method that delivers data more efficiently through its menu-card-driven structure, which speeds the collection of essential data for uploading to a compatible device. The effects of this sequence include real-time information for operator analysis, management review, audits, planning, scheduling, and cost control

  12. Creep strain accumulation in a typical LMFBR piperun

    International Nuclear Information System (INIS)

    Johnstone, T.L.

    1975-01-01

The analysis described allows the strain concentrations in typical LMFBR two-anchor-point uniplanar piperuns to be calculated. Account is taken of the effect of pipe elbows in attracting creep strain to themselves, as well as of possible movements of the thrust line due to strain redistribution. The influence of the initial load conditions is also examined. The stress relaxation analysis is facilitated by assuming that a cross-sectional stress distribution determined by the asymptotic fully developed state of creep exists at all times. Use is then made of Hoff's analogy between materials with a creep law of the Norton type and those with a corresponding non-linear elastic stress-strain law, to determine complementary strain energy rates for straight pipes and bends. Ovalisation of the latter produces an increased strain energy rate which can be simply calculated by comparison with an equal length of straight pipe through employing a creep flexibility factor due to Spence. Deflection rates at any location in the pipework can then be evaluated in terms of the thermal restraint forces at that location by an application of Castigliano's principle. In particular, for an anchor point the deflection rates are identically zero, and this leads to the generation of 3 simultaneous differential equations determining the relaxation of the anchor reactions. Indicative results are presented for the continuous relaxation at 570 °C of the thermally induced stress in a planar approximation to a typical LMFBR pipe run chosen to have peak elbow stresses close to the code maximum. The results indicate a ratio, after 10^5 hours, of 3 for creep strain concentration relative to initial peak strain (calculated on the assumption of fully elastic behavior) in the most severely affected elbow, when either austenitic 316 or 321 creep properties are employed
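The relaxation mechanism exploited here can be sketched for the simplest case of a single bar held at fixed total strain: elastic strain converts to creep strain at the Norton rate, giving dσ/dt = −EBσⁿ. The constants below are placeholders, not the 316/321 properties used in the paper:

```python
# Stress relaxation under a Norton creep law, integrated by forward
# Euler and checked against the closed-form solution. Illustrative
# constants only.

def relax(sigma0, E, B, n, t_end, steps=200_000):
    """Integrate d(sigma)/dt = -E*B*sigma**n from sigma0 over t_end."""
    dt = t_end / steps
    s = sigma0
    for _ in range(steps):
        s -= E * B * s**n * dt
    return s

E, B, n = 170e3, 1e-18, 5.0       # MPa, MPa^-n per hour (made up)
s0, t_end = 150.0, 1e5            # MPa, hours
s_num = relax(s0, E, B, n, t_end)
# closed form for n > 1: sigma(t) = (sigma0**(1-n) + (n-1)*E*B*t)**(1/(1-n))
s_exact = (s0**(1 - n) + (n - 1) * E * B * t_end) ** (1 / (1 - n))
assert abs(s_num - s_exact) / s_exact < 1e-2
```

The piperun analysis generalizes this scalar picture to a redundant structure, where the three anchor reactions relax according to coupled equations of the same character.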

  13. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Science.gov (United States)

    Zhang, Ai-bing; Feng, Jie; Ward, Robert D; Wan, Ping; Gao, Qiang; Wu, Jun; Zhao, Wei-zhong

    2012-01-01

    Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.
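As a hedged toy version of the general idea (not the authors' DV-RBF or FJ-RBF algorithms), one can represent each sequence by k-mer frequencies, which sidesteps alignment, and assign a query to the species whose reference sequences give the highest mean RBF similarity:

```python
# Alignment-free species assignment sketch: k-mer frequency vectors
# compared with a Gaussian (RBF) kernel. Sequences below are invented.

from collections import Counter
from math import exp

def kmer_vec(seq, k=2):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {m: n / total for m, n in counts.items()}

def rbf_sim(u, v, gamma=5.0):
    keys = set(u) | set(v)
    d2 = sum((u.get(m, 0.0) - v.get(m, 0.0)) ** 2 for m in keys)
    return exp(-gamma * d2)

def classify(query, refs):
    """refs: {species: [reference sequences]} -> best-scoring species."""
    q = kmer_vec(query)
    scores = {sp: sum(rbf_sim(q, kmer_vec(s)) for s in seqs) / len(seqs)
              for sp, seqs in refs.items()}
    return max(scores, key=scores.get)

refs = {
    "sp_A": ["ACGTACGTAC", "ACGTACGAAC"],
    "sp_B": ["TTGGTTGGTT", "TTGGTTGCTT"],
}
assert classify("ACGTACGTTC", refs) == "sp_A"
```

Because no alignment is needed, the same pipeline applies unchanged to non-coding barcodes such as ITS, which is the situation where alignment-based NJ/ML methods struggle.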

  14. A new method for species identification via protein-coding and non-coding DNA barcodes by combining machine learning with bioinformatic methods.

    Directory of Open Access Journals (Sweden)

    Ai-bing Zhang

Full Text Available Species identification via DNA barcodes is contributing greatly to current bioinventory efforts. The initial, and widely accepted, proposal was to use the protein-coding cytochrome c oxidase subunit I (COI) region as the standard barcode for animals, but recently non-coding internal transcribed spacer (ITS) genes have been proposed as candidate barcodes for both animals and plants. However, achieving a robust alignment for non-coding regions can be problematic. Here we propose two new methods (DV-RBF and FJ-RBF) to address this issue for species assignment by both coding and non-coding sequences that take advantage of the power of machine learning and bioinformatics. We demonstrate the value of the new methods with four empirical datasets, two representing typical protein-coding COI barcode datasets (neotropical bats and marine fish) and two representing non-coding ITS barcodes (rust fungi and brown algae). Using two random sub-sampling approaches, we demonstrate that the new methods significantly outperformed existing Neighbor-joining (NJ) and Maximum likelihood (ML) methods for both coding and non-coding barcodes when there was complete species coverage in the reference dataset. The new methods also out-performed NJ and ML methods for non-coding sequences in circumstances of potentially incomplete species coverage, although then the NJ and ML methods performed slightly better than the new methods for protein-coding barcodes. A 100% success rate of species identification was achieved with the two new methods for 4,122 bat queries and 5,134 fish queries using COI barcodes, with 95% confidence intervals (CI) of 99.75-100%. The new methods also obtained a 96.29% success rate (95%CI: 91.62-98.40%) for 484 rust fungi queries and a 98.50% success rate (95%CI: 96.60-99.37%) for 1094 brown algae queries, both using ITS barcodes.

  15. High efficiency video coding (HEVC) algorithms and architectures

    CERN Document Server

    Budagavi, Madhukar; Sullivan, Gary

    2014-01-01

    This book provides developers, engineers, researchers and students with detailed knowledge about the High Efficiency Video Coding (HEVC) standard. HEVC is the successor to the widely successful H.264/AVC video compression standard, and it provides around twice as much compression as H.264/AVC for the same level of quality. The applications for HEVC will not only cover the space of the well-known current uses and capabilities of digital video – they will also include the deployment of new services and the delivery of enhanced video quality, such as ultra-high-definition television (UHDTV) and video with higher dynamic range, wider range of representable color, and greater representation precision than what is typically found today. HEVC is the next major generation of video coding design – a flexible, reliable and robust solution that will support the next decade of video applications and ease the burden of video on world-wide network traffic. This book provides a detailed explanation of the various parts ...

  16. Uniform Circular Antenna Array Applications in Coded DS-CDMA Mobile Communication Systems

    National Research Council Canada - National Science Library

    Seow, Tian

    2003-01-01

    ...) has greatly increased. This thesis examines the use of an equally spaced circular adaptive antenna array at the mobile station for a typical coded direct sequence code division multiple access (DS-CDMA...

  17. SOMM: A new service oriented middleware for generic wireless multimedia sensor networks based on code mobility.

    Science.gov (United States)

    Faghih, Mohammad Mehdi; Moghaddam, Mohsen Ebrahimi

    2011-01-01

Although much research in the area of Wireless Multimedia Sensor Networks (WMSNs) has been done in recent years, the programming of sensor nodes is still time-consuming and tedious. It requires expertise in low-level programming, mainly because of the use of resource-constrained hardware and the low-level APIs provided by current operating systems. The code of the resulting systems typically has no clear separation between application and system logic. This minimizes the possibility of reusing code and often leads to the necessity of major changes when the underlying platform is changed. In this paper, we present a service oriented middleware named SOMM to support application development for WMSNs. The main goal of SOMM is to enable the development of modifiable and scalable WMSN applications. A network which uses SOMM is capable of providing multiple services to multiple clients at the same time with the specified Quality of Service (QoS). SOMM uses a virtual machine with the ability to support mobile agents. Services in SOMM are provided by mobile agents, and SOMM also provides a tuple space on each node which agents can use to communicate with each other.
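The tuple-space style of agent communication can be sketched as follows; the class and its method names are invented for illustration and are not SOMM's actual interface:

```python
# Minimal tuple space: agents write tuples and read/take them by
# pattern (None = wildcard), so senders and receivers stay decoupled.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def write(self, tup):
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == f for f, p in zip(tup, pattern))

    def read(self, pattern):
        """Return the first matching tuple without removing it."""
        for t in self._tuples:
            if self._match(t, pattern):
                return t
        return None

    def take(self, pattern):
        """Return and remove the first matching tuple."""
        t = self.read(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.write(("temperature", "node7", 23.5))
assert ts.read(("temperature", None, None)) == ("temperature", "node7", 23.5)
assert ts.take(("temperature", "node7", None)) is not None
assert ts.read(("temperature", None, None)) is None
```

The decoupling is the point: a sensing agent and a consuming agent never hold references to each other, which is what makes agent migration between nodes practical.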

  18. Hierarchical surface code for network quantum computing with modules of arbitrary size

    Science.gov (United States)

    Li, Ying; Benjamin, Simon C.

    2016-10-01

The network paradigm for quantum computing involves interconnecting many modules to form a scalable machine. Typically it is assumed that the links between modules are prone to noise while operations within modules have a significantly higher fidelity. To optimize fault tolerance in such architectures we introduce a hierarchical generalization of the surface code: a small "patch" of the code exists within each module and constitutes a single effective qubit of the logic-level surface code. Errors primarily occur in a two-dimensional subspace, i.e., patch perimeters extruded over time, and the resulting noise threshold for intermodule links can exceed ~10% even in the absence of purification. Increasing the number of qubits within each module decreases the number of qubits necessary for encoding a logical qubit. But this advantage is relatively modest, and broadly speaking, a "fine-grained" network of small modules containing only about eight qubits is competitive in total qubit count versus a "coarse" network with modules containing many hundreds of qubits.

  19. Pictorial AR Tag with Hidden Multi-Level Bar-Code and Its Potential Applications

    Directory of Open Access Journals (Sweden)

    Huy Le

    2017-09-01

Full Text Available For decades, researchers have been trying to create intuitive virtual environments by blending reality and virtual reality, thus enabling general users to interact with the digital domain as easily as with the real world. The result is "augmented reality" (AR). AR seamlessly superimposes virtual objects onto a real environment in three dimensions (3D) and in real time. One of the most important parts that helps close the gap between virtuality and reality is the marker used in the AR system. While the pictorial marker and the bar-code marker are the two most commonly used marker types in the market, they have some disadvantages in visual and processing performance. In this paper, we present a novel method that combines the bar-code with the original features of a colour picture (e.g., photos, trading cards, an advertisement's figure). Our method decorates the original pictorial image with an additional single stereogram image that optically conceals a multi-level (3D) bar-code. It thus has a larger data-storage capability than the general 1D barcode. This new type of marker has the potential to address the issues that current marker types are facing. It not only keeps the original information of the picture but also contains encoded numeric information. In our limited evaluation, this pictorial bar-code shows relatively robust performance under various conditions and scalings; thus, it provides a promising AR approach for use in many applications such as trading card games, education, and advertisements.
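The capacity argument for a multi-level bar-code is simple: with L distinguishable levels, n bars encode L**n values instead of 2**n. The base-L conversion below illustrates only this counting argument, not the stereogram concealment scheme itself:

```python
# Encode/decode an integer as a fixed number of multi-level bars.
# Pure base-L positional arithmetic, for the capacity argument only.

def encode(value, levels=3, bars=8):
    """Return `bars` digits in [0, levels), most significant first."""
    digits = []
    for _ in range(bars):
        digits.append(value % levels)
        value //= levels
    return digits[::-1]

def decode(digits, levels=3):
    v = 0
    for d in digits:
        v = v * levels + d
    return v

assert decode(encode(2017)) == 2017
assert 3**8 > 2**8   # 6561 distinguishable codes vs 256 for binary bars
```

A practical marker would add a checksum and error-correction digits on top of this raw encoding; those layers are omitted here.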

  20. Testing efficiency transfer codes for equivalence

    International Nuclear Information System (INIS)

    Vidmar, T.; Celik, N.; Cornejo Diaz, N.; Dlabac, A.; Ewa, I.O.B.; Carrazana Gonzalez, J.A.; Hult, M.; Jovanovic, S.; Lepy, M.-C.; Mihaljevic, N.; Sima, O.; Tzika, F.; Jurado Vargas, M.; Vasilopoulou, T.; Vidmar, G.

    2010-01-01

    Four general Monte Carlo codes (GEANT3, PENELOPE, MCNP and EGS4) and five dedicated packages for efficiency determination in gamma-ray spectrometry (ANGLE, DETEFF, GESPECOR, ETNA and EFFTRAN) were checked for equivalence by applying them to the calculation of efficiency transfer (ET) factors for a set of well-defined sample parameters, detector parameters and energies typically encountered in environmental radioactivity measurements. The differences between the results of the different codes never exceeded a few percent and were lower than 2% in the majority of cases.
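The efficiency-transfer idea being tested can be sketched with a toy Monte Carlo: the efficiency for a new source geometry is estimated as the reference efficiency times a ratio of computed detection probabilities. Here the "detector" is a bare disc and only the source-detector geometry is sampled, so the numbers are purely illustrative of the method, not of any code above:

```python
# Toy efficiency transfer: MC estimate of the geometric hit fraction
# for two source heights, with the ratio applied to a made-up
# reference efficiency. Attenuation and detector response are ignored.

import random
from math import sqrt

def hit_fraction(h, R=3.0, n=200_000, seed=1):
    """Fraction of isotropic emissions from an on-axis point source at
    height h that strike a disc of radius R in the z=0 plane."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)   # cosine of polar angle vs +z
        # downward rays hit the disc if their radius at z=0 is within R
        if mu < 0 and h * sqrt(1.0 - mu * mu) / -mu <= R:
            hits += 1
    return hits / n

eff_ref = 0.05                        # hypothetical measured reference efficiency
f_ref, f_tgt = hit_fraction(2.0), hit_fraction(5.0)
et = f_tgt / f_ref                    # efficiency transfer factor
eff_tgt = eff_ref * et
# analytic check: solid-angle fraction is (1 - h/sqrt(h^2 + R^2))/2
exact = (1 - 5 / sqrt(25 + 9)) / (1 - 2 / sqrt(4 + 9))
assert abs(et - exact) / exact < 0.05
```

The dedicated packages compared in the study compute this ratio with full photon transport (attenuation, scattering, detector response), which is why their agreement to within a few percent is a meaningful result.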

  1. Development of a detailed core flow analysis code for prismatic fuel reactors

    International Nuclear Information System (INIS)

    Bennett, R.G.

    1990-01-01

The detailed analysis of the core flow distribution in prismatic fuel reactors is of interest for modular high-temperature gas-cooled reactor (MHTGR) design and safety analyses. Such analyses involve the steady-state flow of helium through highly cross-connected flow paths in and around the prismatic fuel elements. Several computer codes have been developed for this purpose. However, since they are proprietary codes, they are not generally available for independent MHTGR design confirmation. The previously developed codes do not consider the exchange or diversion of flow between individual bypass gaps in much detail. Such a capability could be important in the analysis of potential fuel block motion, such as occurred in the Fort St. Vrain reactor, or for the analysis of the conditions around a flow blockage or a misloaded fuel block. This work develops a computer code with fairly general-purpose capabilities for modeling the flow in regions of prismatic fuel cores. The code, called BYPASS, solves a finite-difference control-volume formulation of the compressible, steady-state fluid flow in the highly cross-connected flow paths typical of the MHTGR

  2. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent (and therefore parallel) development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
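The levelization property described above can be sketched directly: dependencies must form a directed acyclic graph, and each package's level is then 1 plus the maximum level of the packages it uses. The package names below are invented for illustration:

```python
# Assign levelization levels to a package dependency graph, raising an
# error if the graph contains a cycle (i.e., is not levelizable).

def levelize(deps):
    """deps: {package: set of packages it uses}.
    Returns {package: level}, or raises ValueError on a cycle."""
    levels = {}
    visiting = set()

    def level(p):
        if p in levels:
            return levels[p]
        if p in visiting:
            raise ValueError(f"dependency cycle through {p!r}")
        visiting.add(p)
        levels[p] = 1 + max((level(d) for d in deps.get(p, ())), default=0)
        visiting.remove(p)
        return levels[p]

    for p in deps:
        level(p)
    return levels

deps = {"app": {"physics", "io"},
        "physics": {"mesh"},
        "io": {"mesh"},
        "mesh": set()}
assert levelize(deps) == {"mesh": 1, "physics": 2, "io": 2, "app": 3}
```

Packages at the same level ("physics" and "io" here) have no mutual dependency and can therefore be built, tested, and developed in parallel, which is exactly the property the disentanglement plan is after.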

  3. The LEONAR code: a new tool for PSA Level 2 analyses

    International Nuclear Information System (INIS)

    Tourniaire, B; Spindler, B.; Ratel, G.; Seiler, J.M.; Iooss, B.; Marques, M.; Gaudier, F.; Greffier, G.

    2011-01-01

The LEONAR code, complementary to integral codes such as MAAP or ASTEC, is a new severe accident simulation tool which can easily calculate 1000 late-phase reactor situations within a few hours and provide a statistical evaluation of the situations. LEONAR can be used to analyze the impact on failure probabilities of specific Severe Accident Management measures (for instance, water injection) or design modifications (for instance, pressure vessel flooding or dedicated reactor pit flooding), or to focus the research effort on key phenomena. The starting conditions for LEONAR are a set of core melting situations that are separately calculated with a core degradation code (such as MAAP, which is used by EDF). LEONAR describes the core melt evolution after flooding in the core; the corium relocation in the lower head (under dry and wet conditions); the evolution of corium in the lower head, including the effect of flooding; the vessel failure; corium relocation in the reactor cavity; interaction between corium and basemat concrete; and possible corium spreading in the neighbouring rooms and on the containment floor. Scenario events as well as specific physical model parameters are characterised by a probability density distribution. The probabilistic evaluation is performed by URANIE, which is coupled to the physical calculations. The calculation results are treated statistically in order to provide easily usable information. This tool can be used to identify the main parameters that influence corium coolability in severe accident late phases. It is intended to replace PIRT exercises efficiently. An important impact of such a tool is that it can be used to demonstrate that the probability of basemat failure can be significantly reduced by coupling a number of separate severe accident management measures or design modifications, even though each separate measure is not sufficient by itself to avoid the failure. (authors)

  4. SIMULATE-3 K coupled code applications

    Energy Technology Data Exchange (ETDEWEB)

    Joensson, Christian [Studsvik Scandpower AB, Vaesteraas (Sweden); Grandi, Gerardo; Judd, Jerry [Studsvik Scandpower Inc., Idaho Falls, ID (United States)

    2017-07-15

This paper describes the coupled code system TRACE/SIMULATE-3 K/VIPRE and its application to the OECD PWR Main Steam Line Break. A short description is given of the application of the coupled system to DNBR analysis and of the flexibility the system offers the user, including the possibility to compare and evaluate the results with the TRACE/SIMULATE-3K (S3K) coupled code and the S3K standalone code (core calculation), as well as to perform single-channel calculations with S3K and VIPRE. These are the typical separate-effect analyses required for advanced calculations when developing methodologies for safety analyses in general. The models and methods of the code systems are presented. The outline follows the analysis approach, starting with the coupled code system and the reactor and core model calculation (TRACE/S3K), followed by a more detailed core evaluation (S3K standalone) and finally a very detailed thermal-hydraulic investigation of the hot-pin condition (VIPRE).

  5. Preliminary structural integrity evaluations for the elevated-temperature piping of the SFR IHTS against typical Level A service events

    International Nuclear Information System (INIS)

    Park, Chang-Gyu; Kim, Jong-Bum; Lee, Jae-Han

    2009-01-01

The SFR adopts an IHTS (Intermediate Heat Transport System) to prevent interaction between radioactive primary sodium and SG (Steam Generator) water. The object of this study is the IHTS hot leg piping connecting the IHX (Intermediate Heat eXchanger) to the SG of a 1200 MWe pool-type SFR. The ASME Boiler and Pressure Vessel Code, Subsection NB, provides rules for the design and analysis of Class 1 components. For elevated-temperature service, ASME Subsection NH provides rules for the design and analysis of Class 1 components, but it does not yet provide special rules for piping components. Therefore, the design and analysis of the IHTS hot leg piping shall comply with the design-by-analysis requirements of Subsection NH. In this study, a piping layout is proposed that takes into account the reactor component layout and the reactor building space, and the structural integrity is evaluated for two typical types of operating events. Cycle type 1 (CT-1) is the refueling cycle event, with a temperature history running from the refueling temperature to the normal operating temperature via the hot standby temperature. Cycle type 2 (CT-2) is a daily load-follow operation. The structural integrity is evaluated for the enveloped CT-1 and CT-2 operating events per the ASME Subsection NH procedures. The SIE ASME-NH computer program, developed to implement the ASME Subsection NH rules, is used for the structural integrity evaluation, utilizing the finite element analysis results. (author)

  6. A CABAC codec for H.264/AVC with secure arithmetic coding

    Science.gov (United States)

    Neji, Nihel; Jridi, Maher; Alfalou, Ayman; Masmoudi, Nouri

    2013-02-01

This paper presents an optimized H.264/AVC coding system for HDTV displays, based on a typical flow with high coding efficiency and statistics-adaptivity features. For high-quality streaming, the codec uses a high-complexity binary arithmetic encoding/decoding algorithm together with a JVCE (Joint Video Compression and Encryption) scheme. In fact, particular attention is given to simultaneous compression and encryption applications, to gain security without compromising the speed of transactions [1]. The proposed design encrypts the information using a pseudo-random number generator (PRNG), so the two operations (compression and encryption) are achieved simultaneously and in a dependent manner, which is a novelty in this kind of architecture. Moreover, we investigated a hardware implementation of the CABAC (Context-based Adaptive Binary Arithmetic Coding) codec. The proposed architecture is based on an optimized binarizer/de-binarizer that handles high pixel-rate video at low cost and with high performance for the most frequent syntax elements (SEs). This was verified using HD video frames. The synthesis results obtained with an FPGA flow (Xilinx ISE) show that the design is suitable for coding Main-profile video streams.
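A minimal sketch of the interval-swapping idea behind PRNG-based secure arithmetic coding follows. It uses exact fractions for clarity rather than the renormalized integer arithmetic of a real CABAC engine, and the fixed symbol probability `p0` is an illustrative assumption:

```python
from fractions import Fraction
import random

def keystream(seed, n):
    # PRNG keystream; a real JVCE system would use a cryptographic generator.
    rng = random.Random(seed)
    return [rng.randrange(2) for _ in range(n)]

def intervals(low, width, p0, k):
    # Split [low, low+width) between symbols 0 and 1; the key bit k swaps
    # the order of the two subintervals, entangling encryption with coding.
    w0 = width * p0
    if k == 0:
        return low, w0, low + w0, width - w0          # lo0, w0, lo1, w1
    return low + (width - w0), w0, low, width - w0

def encode(bits, p0, key):
    low, width = Fraction(0), Fraction(1)
    for b, k in zip(bits, key):
        lo0, w0, lo1, w1 = intervals(low, width, p0, k)
        low, width = (lo0, w0) if b == 0 else (lo1, w1)
    return low + width / 2  # any point inside the final interval

def decode(x, n, p0, key):
    low, width = Fraction(0), Fraction(1)
    out = []
    for k in key[:n]:
        lo0, w0, lo1, w1 = intervals(low, width, p0, k)
        if lo0 <= x < lo0 + w0:
            out.append(0); low, width = lo0, w0
        else:
            out.append(1); low, width = lo1, w1
    return out
```

A decoder without the matching keystream reconstructs the wrong interval splits, which is the security mechanism this sketch illustrates.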

  7. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2012-06-01

The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue operating until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide the necessary remote-handled LLW disposal capability and will ensure continuity of the operations that generate remote-handled LLW. This report documents the Code of Record for design of the new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  8. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad, P.E.; L.E. Guillen, P.E.; C. W. McKnight, P.E.; D. S. Ferguson, P.E.

    2014-06-01

The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue operating until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide the necessary remote-handled LLW disposal capability and will ensure continuity of the operations that generate remote-handled LLW. This report documents the Code of Record for design of the new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  9. Remote-Handled Low-Level Waste Disposal Project Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    Austad, S. L. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Guillen, L. E. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McKnight, C. W. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Ferguson, D. S. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-04-01

The Remote-Handled Low-Level Waste (LLW) Disposal Project addresses an anticipated shortfall in remote-handled LLW disposal capability following cessation of operations at the existing facility, which will continue operating until it is full or until it must be closed in preparation for final remediation of the Subsurface Disposal Area (approximately at the end of Fiscal Year 2017). Development of a new onsite disposal facility will provide the necessary remote-handled LLW disposal capability and will ensure continuity of the operations that generate remote-handled LLW. This report documents the Code of Record for design of the new LLW disposal capability. The report is owned by the Design Authority, who can authorize revisions and exceptions. This report will be retained for the lifetime of the facility.

  10. A one-dimensional transport code for the simulation of D-T burning tokamak plasma

    International Nuclear Information System (INIS)

    Tone, Tatsuzo; Maki, Koichi; Kasai, Masao; Nishida, Hidetsugu

    1980-11-01

A one-dimensional transport code for D-T burning tokamak plasmas has been developed. It simulates the spatial behavior of fuel ions (D, T), alpha particles, impurities, ion and electron temperatures, plasma current, and neutrals, as well as heating by alpha particles and injected beam particles. The basic transport equations are represented by one generalized equation, so that models can be improved and new equations added easily. A burn-control model using a variable toroidal field ripple is employed. This report describes in detail the simulation model, the numerical method, and the usage of the code. Some typical examples to which the code has been applied are presented. (author)
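The kind of equation such a code advances can be illustrated with a single diffusion-type field in slab geometry; the grid, coefficients, and uniform source below are hypothetical stand-ins, not the report's model:

```python
def step(u, dx, dt, D, S):
    # One explicit time step of du/dt = D d2u/dx2 + S, with u = 0 at both walls.
    n = len(u)
    new = [0.0] * n
    for i in range(1, n - 1):
        lap = (u[i + 1] - 2.0 * u[i] + u[i - 1]) / dx ** 2
        new[i] = u[i] + dt * (D * lap + S[i])
    return new

def evolve(n=41, steps=2000, D=1.0, dx=0.05, dt=0.0005):
    # Advance from a zero initial profile toward the parabolic steady state
    # S*x*(L - x)/(2*D); dt respects the explicit stability limit dx^2/(2*D).
    u = [0.0] * n
    S = [1.0] * n  # uniform volumetric source (e.g. heating), illustrative
    for _ in range(steps):
        u = step(u, dx, dt, D, S)
    return u
```

A real transport code applies the same advance to each coupled field (densities, temperatures, current) within one generalized equation form.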

  11. Weight Distribution for Non-binary Cluster LDPC Code Ensemble

    Science.gov (United States)

    Nozaki, Takayuki; Maehara, Masaki; Kasai, Kenta; Sakaniwa, Kohichi

In this paper, we derive the average weight distributions for irregular non-binary cluster low-density parity-check (LDPC) code ensembles. Moreover, we give the exponential growth rate of the average weight distribution in the limit of large code length. We show that there exist $(2,d_c)$-regular non-binary cluster LDPC code ensembles whose normalized typical minimum distances are strictly positive.
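For a small concrete code, the weight distribution discussed above can be obtained by brute force; the sketch below does this for the (7,4) Hamming code (a classical binary example, not a non-binary cluster LDPC ensemble):

```python
from itertools import product

# Parity-check matrix of the (7,4) Hamming code: columns are 1..7 in binary.
H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def weight_distribution(H, n):
    # Count codewords (words with H*c = 0 mod 2) of each Hamming weight.
    dist = [0] * (n + 1)
    for word in product((0, 1), repeat=n):
        if all(sum(h * c for h, c in zip(row, word)) % 2 == 0 for row in H):
            dist[sum(word)] += 1
    return dist
```

For the (7,4) Hamming code this recovers the well-known weight enumerator 1 + 7x^3 + 7x^4 + x^7; the ensemble-average analysis in the paper replaces this enumeration with generating-function techniques.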

  12. Ex-plant consequence assessment for NUREG-1150: models, typical results, uncertainties

    International Nuclear Information System (INIS)

    Sprung, J.L.

    1988-01-01

    The assessment of ex-plant consequences for NUREG-1150 source terms was performed using the MELCOR Accident Consequence Code System (MACCS). This paper briefly discusses the following elements of MACCS consequence calculations: input data, phenomena modeled, computational framework, typical results, controlling phenomena, and uncertainties. Wherever possible, NUREG-1150 results will be used to illustrate the discussion. 28 references

  13. Plutonium-239 production rate study using a typical fusion reactor

    Energy Technology Data Exchange (ETDEWEB)

    Faghihi, F. [Research Center for Radiation Protection, Shiraz University, Shiraz (Iran, Islamic Republic of)], E-mail: faghihif@shirazu.ac.ir; Havasi, H.; Amin-Mozafari, M. [Department of Nuclear Engineering, School of Engineering, Shiraz University, 71348-51154 Shiraz (Iran, Islamic Republic of)

    2008-05-15

The purpose of the present paper is to compute the fissile Pu-239 produced by operation of a typical fusion reactor, to meet fuel requirements for other purposes (e.g. MOX fissile fuel). The fusion reactor is assumed to have a cylindrical geometry and to use uniformly distributed deuterium-tritium fuel, with the neutron wall load taken as 10 MW/m². Moreover, the reactor core is surrounded by six proposed blankets to obtain the best performance under the physical conditions described herein. We determined the neutron flux in each blanket, as well as tritium self-sufficiency, using a two-group neutron energy treatment, and the computation was then carried out with the MCNP-4C code. Finally, a set of coupled depletion differential equations is solved to estimate the Pu-239 production rate. The produced Pu-239 is compared with that of two typical fission reactors to assess the plutonium breeding performance of the fusion reactor. We found that 0.92% of the initial U is converted into fissile Pu by the proposed fusion reactor at a thermal power of 3000 MW. For comparison, the Pu-239 yield of a large-scale PWR is about 0.65%, and that of an LMFBR is close to 1.7%. The results show that the fusion reactor has an acceptable efficiency for Pu production compared with a large-scale PWR.
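The depletion step can be illustrated with a drastically simplified two-nuclide chain (direct U-to-Pu conversion; intermediates such as Np-239 and all other reactions are ignored, and the rate constant is hypothetical), solved with forward Euler and checked against the analytic exponential:

```python
import math

def deplete(n_u0, rate, t, steps):
    # Forward-Euler solution of the toy chain
    #   dN_U/dt  = -rate * N_U
    #   dN_Pu/dt = +rate * N_U
    # where rate stands in for sigma*phi (capture cross section times flux).
    dt = t / steps
    n_u, n_pu = n_u0, 0.0
    for _ in range(steps):
        dn = rate * n_u * dt
        n_u -= dn
        n_pu += dn
    return n_u, n_pu
```

The analytic solution N_U(t) = N_U(0) exp(-rate*t) provides a check on the step size; a real depletion solver couples many such equations for the full chain.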

  14. Coding of Stimuli by Animals: Retrospection, Prospection, Episodic Memory and Future Planning

    Science.gov (United States)

    Zentall, Thomas R.

    2010-01-01

When animals code stimuli for later retrieval, they can either code them in terms of the stimulus presented (as a retrospective memory) or in terms of the response or outcome anticipated (as a prospective memory). Although retrospective memory is typically assumed (as in the form of a memory trace), evidence of prospective coding has been found…

  15. A strategy for the application of steam explosion codes to reactor analysis

    International Nuclear Information System (INIS)

    Moriyama, Kiyofumi; Nakamura, Hideo

    2006-01-01

A technical view of the strategy for applying steam explosion codes to plant-scale analysis is described. It includes the assumption of triggering at the time of peak premixed melt mass, tuning of the explosion model on typical alumina steam explosion data, consideration of void and solidification effects as the primary mechanisms limiting the premixed mass and explosion energetics, and the choice of simple heat-partition models affecting evaporation. The view was developed through experience in the development, verification and application of a steam explosion simulation code, JASMINE, at the Japan Atomic Energy Agency (JAEA), as well as through participation in the OECD SERENA Phase-1 program. (author)

  16. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  17. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

Radiography, like other techniques, needs standards. These standards are widely used, and the methods for applying them are well established; radiographic testing is therefore carried out only according to the documented regulations, which are set down in codes, standards and specifications. In Malaysia, Level 1 and basic radiographers perform radiographic work based on instructions given by a Level 2 or Level 3 radiographer. These instructions are produced from the guidelines mentioned in the documents, and the Level 2 radiographer must follow the specifications in the standard when writing them. This makes clear that radiography is a type of work in which everything must follow the rules. As for codes, radiography follows the code of the American Society of Mechanical Engineers (ASME); the only code available in Malaysia at this time is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiography must follow the regulated rules and standards.

  18. Food Image Recognition via Superpixel Based Low-Level and Mid-Level Distance Coding for Smart Home Applications

    Directory of Open Access Journals (Sweden)

    Jiannan Zheng

    2017-05-01

Food image recognition is a key enabler for many smart home applications, such as the smart kitchen and smart personal nutrition logging. In order to improve living experience and life quality, smart home systems collect valuable insights into users' preferences, nutrition intake and health conditions via accurate and robust food image recognition. In addition, efficiency is a major concern, since many smart home applications are deployed on mobile devices where high-end GPUs are not available. In this paper, we investigate compact and efficient food image recognition methods, namely low-level and mid-level approaches. Considering the real application scenario, where only limited and noisy data are available, we first propose a superpixel-based Linear Distance Coding (LDC) framework in which distinctive low-level food image features are extracted to improve performance. On a challenging small food image dataset with only 12 training images per category, our framework shows superior performance in both accuracy and robustness. In addition, to better model deformable food part distributions, we extend LDC's feature-to-class distance idea and propose a mid-level superpixel food parts-to-class distance mining framework. The proposed framework shows superior performance on benchmark food image datasets compared with other low-level and mid-level approaches in the literature.
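The feature-to-class distance idea can be illustrated with a toy nearest-class-mean distance code; the 2-D "features" and class names below are invented for illustration and bear no relation to the paper's superpixel features:

```python
def class_means(train):
    # train: {label: [feature vectors]} -> per-class mean feature vector.
    means = {}
    for label, vecs in train.items():
        d = len(vecs[0])
        means[label] = [sum(v[i] for v in vecs) / len(vecs) for i in range(d)]
    return means

def distance_code(x, means):
    # Represent a sample by its Euclidean distance to every class centre:
    # small entries indicate likely classes (a crude feature-to-class distance).
    return {label: sum((a - b) ** 2 for a, b in zip(x, m)) ** 0.5
            for label, m in means.items()}

def classify(x, means):
    # Predict the class whose centre is nearest in the distance code.
    code = distance_code(x, means)
    return min(code, key=code.get)
```

The distance vector itself, rather than the raw features, becomes the representation, which is the essence of distance coding.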

  19. Lack of effects of typical and atypical antipsychotics in DARPP-32 and NCS-1 levels in PC12 cells overexpressing NCS-1

    Directory of Open Access Journals (Sweden)

    Reis Helton J

    2010-06-01

Background: Schizophrenia is a major psychiatric disorder whose exact cause remains unknown. It is well known, however, that an imbalance in dopamine-mediated neurotransmission is associated with this pathology, and the main target of antipsychotics is the dopamine D2 receptor. Recently, altered levels of two dopamine-signaling-related proteins were described in the schizophrenic prefrontal cortex (PFC): Neuronal Calcium Sensor-1 (NCS-1) and DARPP-32. NCS-1, which is upregulated in the PFC of schizophrenics, inhibits D2 internalization; DARPP-32, which is decreased in the PFC of schizophrenics, is a key downstream effector in transducing dopamine signaling. We previously demonstrated that antipsychotics do not change the levels of either protein in the rat brain. However, since NCS-1 and DARPP-32 levels are not altered in wild-type rats, we treated wild-type PC12 cells (PC12 WT) and PC12 cells stably overexpressing NCS-1 (PC12 Clone) with antipsychotics to investigate whether NCS-1 upregulation modulates DARPP-32 expression in response to antipsychotic treatment. Results: We chronically treated both PC12 WT and PC12 Clone cells with a typical (haloperidol) or atypical (clozapine, risperidone) antipsychotic for 14 days. Western blotting showed no change in NCS-1 and DARPP-32 protein levels in either PC12 WT or PC12 Clone cells after typical or atypical antipsychotic treatment. Conclusions: Because we observed no alteration in NCS-1 and DARPP-32 levels in either cell line treated with typical or atypical antipsychotics, we suggest that the altered levels of both proteins in the schizophrenic PFC are related to the psychopathology rather than to antipsychotic treatment.

  20. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2017-04-15

This paper presents the radiation shielding model of a typical PWR (CNPP-II) at Chashma, Pakistan. The model was developed using the Monte Carlo N-Particle (MCNP) code [2], equipped with ENDF/B-VI continuous-energy cross-section libraries, and was applied to calculate the neutron and gamma fluxes and dose rates in the radial direction at the core mid-plane. The simulated results were compared with the reference results of the Shanghai Nuclear Engineering Research and Design Institute (SNERDI).
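The flavour of such a Monte Carlo shielding calculation can be conveyed by a toy problem: estimating the uncollided transmission through a single slab and comparing it with the analytic attenuation law. The attenuation coefficient and thickness are arbitrary illustrative values, not CNPP-II data:

```python
import math
import random

def transmission(mu, thickness, n, seed=1):
    # Monte Carlo estimate of the uncollided fraction behind a slab:
    # free paths are sampled from the exponential distribution s = -ln(U)/mu,
    # and a particle is transmitted if its free path exceeds the thickness.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if -math.log(1.0 - rng.random()) / mu > thickness)
    return hits / n

# Compare with the analytic attenuation law exp(-mu * t); mu*t = 2 here.
est = transmission(mu=1.0, thickness=2.0, n=200_000)
```

A production code like MCNP adds scattering physics, tallies, and evaluated cross-section libraries on top of this same sampling principle.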

  1. Reflux episodes and esophageal impedance levels in patients with typical and atypical symptoms of gastroesophageal reflux disease

    Science.gov (United States)

    Ye, Bi Xing; Jiang, Liu Qin; Lin, Lin; Wang, Ying; Wang, Meifeng

    2017-01-01

Abstract To determine the relationship between baseline impedance levels and gastroesophageal reflux, we retrospectively enrolled 110 patients (54 men and 56 women; mean age, 51 ± 14 years) with suspected gastroesophageal reflux disease (GERD) who underwent 24-h multichannel intraluminal impedance and pH monitoring. Patients were stratified according to symptom type (typical or atypical) and reflux type (acid reflux, nonacid reflux [NAR], or no abnormal reflux). Mean nocturnal baseline impedance (MNBI) was measured 3 cm (distal esophagus) and 17 cm (proximal esophagus) above the lower esophageal sphincter. Median distal esophageal MNBI was lower in the acid reflux group (1244 Ω; 647–1969 Ω) than in the NAR (2586 Ω; 1368–3666 Ω) or no abnormal reflux groups (3082 Ω; 2495–4472 Ω; all P < .05). Distal MNBI was negatively correlated with DeMeester score and acid exposure time. Atypical symptoms were more frequently associated with NAR than typical symptoms (P < .01). Among patients with a positive symptom-association probability (SAP) for NAR, median proximal MNBI tended to be lower in patients with typical symptoms (median, 3013 Ω; IQR, 2535–3410 Ω) than in those with atypical symptoms (median, 3386 Ω; IQR, 3044–3730 Ω; P = .05). Thus, atypical GERD symptoms were more likely to be associated with NAR. The mucosal integrity of the proximal esophagus might be relatively impaired in GERD patients with typical symptoms for NAR. PMID:28906377

  2. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Executive summary

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

This document provides guidance to managers and site operators on how ground-water transport codes should be selected for assessing burial site performance. A formal approach is needed for selecting appropriate codes from the multitude of potentially useful ground-water transport codes currently available; code selection is a problem that requires more than merely considering mathematical equation-solving methods. The guidelines are general and flexible, and are also meant for developing the systems simulation models used to assess the environmental safety of low-level waste burial facilities; code selection is only a single aspect of the overall objective of developing a systems simulation model for a burial site. The guidance given here is mainly directed toward applications-oriented users, but managers and site operators need to be familiar with this information to direct the development of scientifically credible and defensible transport assessment models. Specific advice for managers and site operators on how to direct a modeling exercise is based on the following five steps: identify specific questions and study objectives; establish costs and schedules for achieving answers; enlist the aid of a professional model-applications group; decide on an approach with the applications group and guide code selection; and facilitate the availability of site-specific data. These five steps are discussed in detail following an explanation of the nine systems-model development steps, which are presented first to clarify what code selection entails.

  3. Different Stimuli, Different Spatial Codes: A Visual Map and an Auditory Rate Code for Oculomotor Space in the Primate Superior Colliculus

    Science.gov (United States)

    Lee, Jungah; Groh, Jennifer M.

    2014-01-01

Maps are a mainstay of visual, somatosensory, and motor coding in many species. However, auditory maps of space have not been reported in the primate brain. Instead, recent studies have suggested that sound location may be encoded via broadly responsive neurons whose firing rates vary roughly proportionately with sound azimuth. Within frontal space, maps and such rate codes involve different response patterns at the level of individual neurons. Maps consist of neurons exhibiting circumscribed receptive fields, whereas rate codes involve open-ended response patterns that peak in the periphery. This coding-format discrepancy therefore poses a potential problem for brain regions responsible for representing both visual and auditory information. Here, we investigated the coding of auditory space in the primate superior colliculus (SC), a structure known to contain visual and oculomotor maps for guiding saccades. We report that, for visual stimuli, neurons showed circumscribed receptive fields consistent with a map, but for auditory stimuli, they had open-ended response patterns consistent with a rate or level-of-activity code for location. The discrepant response patterns were not segregated into different neural populations but occurred in the same neurons. We show that a read-out algorithm in which the site and level of SC activity both contribute to the computation of stimulus location is successful at evaluating the discrepant visual and auditory codes, and can account for subtle but systematic differences in the accuracy of auditory compared with visual saccades. This suggests that a given population of neurons can use different codes to support appropriate multimodal behavior. PMID:24454779
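The contrast between the two coding formats can be sketched with toy neurons; the tuning curves, centres, and read-out rules below are illustrative assumptions, not the paper's read-out algorithm:

```python
import math

CENTERS = [-40, -20, 0, 20, 40]  # preferred locations of "map" neurons (deg)

def map_responses(loc, width=15.0):
    # Circumscribed receptive fields: Gaussian tuning around each centre.
    return [math.exp(-0.5 * ((loc - c) / width) ** 2) for c in CENTERS]

def rate_responses(loc, n=5, span=40.0):
    # Open-ended rate code: every neuron's firing grows monotonically
    # with azimuth across the whole range [-span, +span].
    return [max(0.0, 0.5 + loc / (2 * span)) for _ in range(n)]

def decode_map(resp):
    # Map read-out: the SITE of peak activity gives the location.
    return CENTERS[max(range(len(resp)), key=resp.__getitem__)]

def decode_rate(resp, span=40.0):
    # Rate read-out: the overall LEVEL of activity gives the location.
    level = sum(resp) / len(resp)
    return (level - 0.5) * 2 * span
```

A combined read-out, as the paper proposes, would weight both the site and the level of population activity.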

  4. Achieving 95% probability level using best estimate codes and the code scaling, applicability and uncertainty (CSAU) [Code Scaling, Applicability and Uncertainty] methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

Issuance of a revised rule for loss-of-coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best-estimate (BE) computer codes in safety analysis, with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost-effective, auditable, rational and practical manner. 8 figs., 2 tabs

  5. Building codes as barriers to solar heating and cooling of buildings

    Energy Technology Data Exchange (ETDEWEB)

    Meeker, F.O. III

    1978-04-01

The application of building codes to solar energy systems for heating and cooling of buildings is discussed, using as typical codes the three model building codes most widely adopted by states and localities. Some potential barriers to solar energy systems are identified, federal and state programs to deal with these barriers are discussed, and alternatives are suggested. To remedy the barriers, a federal program is needed to encourage state adoption of standards and acceptance of certification of solar systems for code approval, and to encourage revisions to codes based on model legislation prepared for the federal government by the model codes groups.

  6. Converter of a continuous code into the Gray code

    International Nuclear Information System (INIS)

Gonchar, A.I.; Trubnikov, V.R.

    1979-01-01

A converter of a continuous code into the Gray code is described; it is used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured band. To convert a continuous code corresponding to the input signal amplitude into the Gray code, the converter exploits the regular alternation of ones and zeros in each bit of the Gray code as the count of the continuous code changes continuously. The converter is built from 155-series logic elements; the continuous-code pulse rate at the converter input is 25 MHz.
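The binary-to-Gray mapping that underlies such converters (each increment of the input changes exactly one output bit) can be written compactly; the 12-bit range below matches the converter described in the record:

```python
def binary_to_gray(b):
    # Standard reflected binary (Gray) code: g = b XOR (b >> 1).
    return b ^ (b >> 1)

def gray_to_binary(g):
    # Inverse mapping: prefix-XOR of all right shifts of g.
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

The single-bit-change property is what reduces the digital contribution to differential nonlinearity in such ADC applications.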

  7. Nexus: A modular workflow management system for quantum simulation codes

    Science.gov (United States)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
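A generic sketch of what such a workflow manager does (run each job once, after its prerequisites) is shown below; this is an illustrative dependency resolver, not Nexus's actual interface, and the job names are hypothetical:

```python
def run_workflow(jobs, deps):
    # jobs: {name: callable}; deps: {name: [prerequisite names]}.
    # Runs each job exactly once, after all of its prerequisites
    # (depth-first topological order), and detects dependency cycles.
    done, order = set(), []

    def visit(name, stack=()):
        if name in done:
            return
        if name in stack:
            raise ValueError(f"dependency cycle at {name}")
        for d in deps.get(name, []):
            visit(d, stack + (name,))
        jobs[name]()
        done.add(name)
        order.append(name)

    for name in jobs:
        visit(name)
    return order
```

A real workflow manager adds job submission, monitoring, and restart handling on top of this ordering logic.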

  8. A long and abundant non-coding RNA in Lactobacillus salivarius.

    Science.gov (United States)

    Cousin, Fabien J; Lynch, Denise B; Chuat, Victoria; Bourin, Maxence J B; Casey, Pat G; Dalmasso, Marion; Harris, Hugh M B; McCann, Angela; O'Toole, Paul W

    2017-09-01

Lactobacillus salivarius, found in the intestinal microbiota of humans and animals, is studied as an example of the sub-dominant intestinal commensals that may impart benefits upon their host. Strains typically harbour at least one megaplasmid that encodes functions contributing to contingency metabolism and environmental adaptation. RNA sequencing (RNA-seq) transcriptomic analysis of L. salivarius strain UCC118 identified a novel, unusually abundant long non-coding RNA (lncRNA) encoded by the megaplasmid, which represented more than 75 % of the total RNA-seq reads after depletion of rRNA species. The expression level of this 520 nt lncRNA in L. salivarius UCC118 exceeded that of the 16S rRNA; it accumulated during growth, was very stable over time, and was also expressed during intestinal transit in a mouse. The lncRNA sequence is specific to the L. salivarius species; however, of 45 L. salivarius genomes analysed, only 34 harboured the sequence. This lncRNA was produced in 27 tested L. salivarius strains, but at strain-specific expression levels, and high-level lncRNA expression correlated with high megaplasmid copy number. Transcriptome analysis of a deletion mutant lacking this lncRNA identified altered expression levels of genes in a number of pathways, but a definitive function of this new lncRNA was not identified. This lncRNA presents distinctive and unique properties, and suggests potential for basic and applied scientific development of this phenomenon.

  9. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks; each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," which are typical pseudorandom sequences that we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. The analysis thus suggested that local codes of approximately constant shape propagate and convey information across the network; hence, the codes can serve as visible and trackable marks of propagating spike waves, as well as a means of evaluating information flow in the neuronal network.
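The maximum cross-correlation measure can be illustrated for two spike trains with a circular lag search; the toy trains in the test are invented, and this is a simplified stand-in for the analysis described above:

```python
def max_circular_xcorr(a, b):
    # Return (lag, value): the circular lag at which spike train b best
    # matches spike train a, and the normalized cross-correlation there
    # (1.0 when b is an exact circular shift of a).
    n = len(a)
    norm = (sum(x * x for x in a) * sum(x * x for x in b)) ** 0.5
    corr = [sum(a[i] * b[(i + lag) % n] for i in range(n)) for lag in range(n)]
    lag = max(range(n), key=corr.__getitem__)
    return lag, corr[lag] / norm
```

Applied to neighboring electrodes, the lag of the maximum indicates the direction and delay of code flow.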

  10. Uncertainty and sensitivity analysis applied to coupled code calculations for a VVER plant transient

    International Nuclear Information System (INIS)

    Langenbuch, S.; Krzykacz-Hausmann, B.; Schmidt, K. D.

    2004-01-01

    The development of coupled codes, combining thermal-hydraulic system codes and 3D neutron kinetics, is an important step toward best-estimate plant transient calculations. It is generally agreed that the application of best-estimate methods should be supplemented by an uncertainty and sensitivity analysis to quantify the uncertainty of the results. The paper presents results from the application of the GRS uncertainty and sensitivity method to a VVER-440 plant transient, which was studied earlier for the validation of coupled codes. For this application, the main steps of the uncertainty method are described. Typical results of the method, applied to the analysis of the plant transient by several working groups using different coupled codes, are presented and discussed. The results demonstrate the capability of an uncertainty and sensitivity analysis. (authors)

  11. Elevated levels of selenium in the typical diet of Amazonian riverside populations

    Energy Technology Data Exchange (ETDEWEB)

    Lemire, Melanie, E-mail: lemire.melanie@courrier.uqam.ca [Centre de recherche interdisciplinaire sur la biologie, la sante, la societe et l' environnement (CINBIOSE), Universite du Quebec a Montreal, Montreal (Canada); Fillion, Myriam [Centre de recherche interdisciplinaire sur la biologie, la sante, la societe et l' environnement (CINBIOSE), Universite du Quebec a Montreal, Montreal (Canada); Barbosa, Fernando [Depto. de Analises Clinicas, Toxicologicas e Bromatologicas, Universidade de Sao Paulo, Ribeirao Preto (Brazil); Guimaraes, Jean Remy Davee [Instituto de Biofisica, Universidade Federal do Rio de Janeiro, Rio de Janeiro (Brazil); Mergler, Donna [Centre de recherche interdisciplinaire sur la biologie, la sante, la societe et l' environnement (CINBIOSE), Universite du Quebec a Montreal, Montreal (Canada)

    2010-09-01

    Selenium (Se) intake is generally from food, whose Se content depends on soil Se and plant accumulation. For humans, adequate Se intake is essential for several selenoenzymes. In the Lower Tapajos region of the Brazilian Amazon, Se status is elevated with large inter-community variability. Se intake in this region, where Hg exposure is among the highest in the world, may be important to counteract mercury (Hg) toxicity. The present study was conducted in 2006 with 155 persons from four communities of the Lower Tapajos. The objectives were: i) to evaluate Se content in their typical diet and drinking water; ii) to compare food Se concentrations with respect to geographic location; and iii) to examine the contribution of consumption of different food items to blood Se. More than 400 local foods and 40 drinking water samples were collected. Participants responded to an interview-administered food frequency questionnaire and provided blood samples. Food, water and blood Se levels were assessed by ICP-MS. Since Brazil nuts may also contain significant levels of barium (Ba) and strontium (Sr), these elements were likewise analyzed in nuts. The highest Se concentrations were found in Brazil nuts, but concentrations were highly variable (median: 13.9 μg/g; range: 0.4-158.4 μg/g). Chicken, game meat, eggs and beef also contained considerable levels of Se, with median concentrations from 0.3 to 1.4 μg/g. There was no particular geographic distribution of food Se. Se concentration in drinking water was very low (< 1.4 μg/L). Blood Se covered a wide range (103-1500 μg/L), and was positively related to regular consumption of Brazil nuts, domestic chicken and game meat. Brazil nuts were found to contain highly variable and often very high concentrations of Ba (median: 88.0 μg/g; range: 1.9-1437 μg/g) and Sr (median: 38.7 μg/g; range: 3.3-173 μg/g). Further studies should address multiple nutrient/toxic interactions in the diet and related effects on health.

  12. Elevated levels of selenium in the typical diet of Amazonian riverside populations

    International Nuclear Information System (INIS)

    Lemire, Melanie; Fillion, Myriam; Barbosa, Fernando; Guimaraes, Jean Remy Davee; Mergler, Donna

    2010-01-01

    Selenium (Se) intake is generally from food, whose Se content depends on soil Se and plant accumulation. For humans, adequate Se intake is essential for several selenoenzymes. In the Lower Tapajos region of the Brazilian Amazon, Se status is elevated with large inter-community variability. Se intake in this region, where Hg exposure is among the highest in the world, may be important to counteract mercury (Hg) toxicity. The present study was conducted in 2006 with 155 persons from four communities of the Lower Tapajos. The objectives were: i) to evaluate Se content in their typical diet and drinking water; ii) to compare food Se concentrations with respect to geographic location; and iii) to examine the contribution of consumption of different food items to blood Se. More than 400 local foods and 40 drinking water samples were collected. Participants responded to an interview-administered food frequency questionnaire and provided blood samples. Food, water and blood Se levels were assessed by ICP-MS. Since Brazil nuts may also contain significant levels of barium (Ba) and strontium (Sr), these elements were likewise analyzed in nuts. The highest Se concentrations were found in Brazil nuts, but concentrations were highly variable (median: 13.9 μg/g; range: 0.4-158.4 μg/g). Chicken, game meat, eggs and beef also contained considerable levels of Se, with median concentrations from 0.3 to 1.4 μg/g. There was no particular geographic distribution of food Se. Se concentration in drinking water was very low (< 1.4 μg/L). Blood Se covered a wide range (103-1500 μg/L), and was positively related to regular consumption of Brazil nuts, domestic chicken and game meat. Brazil nuts were found to contain highly variable and often very high concentrations of Ba (median: 88.0 μg/g; range: 1.9-1437 μg/g) and Sr (median: 38.7 μg/g; range: 3.3-173 μg/g). Further studies should address multiple nutrient/toxic interactions in the diet and related effects on health.

  13. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W. L.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and control of the output. The output is now tabulated to fit the typical 80-column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data, one with the original 69 energy group structure and the second with a 172 group structure. The common source code can be used on PCs running Windows 95 and NT, on Linux-based operating systems and on UNIX-based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries.

  14. Upgrades to the WIMS-ANL code

    International Nuclear Information System (INIS)

    Woodruff, W.L.; Leopando, L.S.

    1998-01-01

    The dusty old source code in WIMS-D4M has been completely rewritten to conform more closely with current FORTRAN coding practices. The revised code contains many improvements in appearance, error checking and control of the output. The output is now tabulated to fit the typical 80-column window or terminal screen. The Segev method for resonance integral interpolation is now an option. Most of the dimension limitations have been removed and replaced with variable dimensions within a compile-time fixed container. The library is no longer restricted to the 69 energy group structure, and two new libraries have been generated for use with the code. The new libraries are both based on ENDF/B-VI data, one with the original 69 energy group structure and the second with a 172 group structure. The common source code can be used on PCs running Windows 95 and NT, on Linux-based operating systems and on UNIX-based workstations. Comparisons of this version of the code to earlier evaluations with ENDF/B-V are provided, as well as comparisons with the new libraries. (author)

  15. Treatment of isomers in nucleosynthesis codes

    Science.gov (United States)

    Reifarth, René; Fiebiger, Stefan; Göbel, Kathrin; Heftrich, Tanja; Kausch, Tanja; Köppchen, Christoph; Kurtulgil, Deniz; Langer, Christoph; Thomas, Benedikt; Weigand, Mario

    2018-03-01

    The decay properties of long-lived excited states (isomers) can have a significant impact on the destruction channels of isotopes under stellar conditions. In sufficiently hot environments, the population of isomers can be altered via thermal excitation or de-excitation. If the corresponding lifetimes are of the same order of magnitude as the typical time scales of the environment, the isomers have to be treated explicitly. We present a general approach to the treatment of isomers in stellar nucleosynthesis codes and discuss a few illustrative examples. The corresponding code is available online at http://exp-astro.de/isomers/.
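
    The criterion above can be illustrated with a short sketch; the Boltzmann population ratio is standard physics, while the two-orders-of-magnitude window for "comparable" lifetimes is a hypothetical rule of thumb, not taken from the paper:

```python
import math

K_B_KEV_PER_K = 8.617e-8  # Boltzmann constant in keV/K

def isomer_population_ratio(e_kev, g_iso, g_gs, temp_k):
    """Thermal-equilibrium population ratio n_iso/n_gs of an isomer at
    excitation energy e_kev, from the Boltzmann factor."""
    return (g_iso / g_gs) * math.exp(-e_kev / (K_B_KEV_PER_K * temp_k))

def needs_explicit_treatment(lifetime_s, env_timescale_s, window=100.0):
    """Flag an isomer for explicit treatment when its lifetime is within
    roughly two orders of magnitude of the environment's typical
    timescale (hypothetical rule of thumb)."""
    ratio = lifetime_s / env_timescale_s
    return 1.0 / window <= ratio <= window
```

    At 1 GK, kT is about 86 keV, so isomers at excitation energies of order 100 keV acquire appreciable thermal populations; much shorter- or longer-lived states can be lumped with the ground state or treated as separate species.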

  16. Code cases for implementing risk-based inservice testing in the ASME OM code

    International Nuclear Information System (INIS)

    Rowley, C.W.

    1996-01-01

    Historically, inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly conservative scope for IST components is to use the PRA and plant expert panels to create a two-tier IST component categorization scheme. The PRA provides the quantitative risk information, and the plant expert panel blends the quantitative and deterministic information to place each IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique to the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for plant use. One Code Case will cover Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategies for each type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.
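
    The two-tier screening described above might be sketched as follows; the Fussell-Vesely and RAW importance measures are standard PRA quantities, but the thresholds here are illustrative placeholders for what an expert panel would actually decide:

```python
def categorize_component(fussell_vesely, raw, fv_cut=0.005, raw_cut=2.0):
    """Two-tier IST screening sketch: PRA importance measures feed a
    rule whose thresholds (fv_cut, raw_cut) are illustrative stand-ins
    for an expert panel's blended quantitative/deterministic judgment."""
    if fussell_vesely >= fv_cut or raw >= raw_cut:
        return "MSSC"  # More Safety Significant Component
    return "LSSC"      # Less Safety Significant Component
```

    In practice the panel can override the screening result in either direction, which is why the abstract describes the categorization as a blend rather than a pure PRA cutoff.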

  17. Code cases for implementing risk-based inservice testing in the ASME OM code

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    Historically, inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly conservative scope for IST components is to use the PRA and plant expert panels to create a two-tier IST component categorization scheme. The PRA provides the quantitative risk information, and the plant expert panel blends the quantitative and deterministic information to place each IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique to the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for plant use. One Code Case will cover Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategies for each type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.

  18. Impacts of Model Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Athalye, Rahul A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sivaraman, Deepak [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Elliott, Douglas B. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Bartlett, Rosemarie [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-10-31

    The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S., because it excludes several states whose codes are fundamentally different from the national model energy codes or which lack state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.

  19. Towards Qualifiable Code Generation from a Clocked Synchronous Subset of Modelica

    Directory of Open Access Journals (Sweden)

    Bernhard Thiele

    2015-01-01

    So far no qualifiable automatic code generators (ACGs) are available for Modelica. Hence, digital control applications can be modeled and simulated in Modelica, but producing qualifiable target-system production code requires tedious additional effort (e.g., manual reprogramming). In order to more fully leverage the potential of a model-based development (MBD) process in Modelica, a qualifiable automatic code generator is needed. Typical Modelica code generation is a fairly complex process, which imposes a huge development burden on any effort at tool qualification. This work aims at mapping a Modelica subset for digital control function development to a well-understood synchronous data-flow kernel language. This kernel language allows resorting to established compilation techniques for data-flow languages, which are understood well enough to be accepted by certification authorities. The mapping is established by providing a translational semantics from the Modelica subset to the synchronous data-flow kernel language. However, this translation turned out to be more intricate than initially expected and has given rise to several interesting issues that require suitable design decisions regarding the mapping and the language subset.

  20. LDPC Code Design for Nonuniform Power-Line Channels

    Directory of Open Access Journals (Sweden)

    Sanaei Ali

    2007-01-01

    We investigate low-density parity-check code design for discrete multitone channels over power lines. Discrete multitone channels are well modeled as nonuniform channels, that is, different bits experience different channel parameters. We propose a coding system for discrete multitone channels that allows a single code to be used over a nonuniform channel. The number of code parameters for the proposed system is much greater than in a conventional channel; therefore, search-based optimization methods are impractical. We first formulate the problem of optimizing the rate of an irregular low-density parity-check code, with guaranteed convergence over a general nonuniform channel, as an iterative linear program, which is significantly more efficient than search-based methods. Then we use this technique for a typical power-line channel. The methodology of this paper is directly applicable to all decoding algorithms for which a density evolution analysis is possible.
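
    The convergence condition that such a rate-optimizing linear program enforces can be illustrated with plain density evolution; the sketch below uses the binary erasure channel (a simpler stand-in for the nonuniform DMT channel) with edge-perspective degree polynomials:

```python
def bec_converges(eps, lam, rho, iters=1000, tol=1e-9):
    """Density evolution for an irregular LDPC ensemble on a binary
    erasure channel with erasure probability eps: iterate
    x <- eps * lambda(1 - rho(1 - x)) and report whether the erasure
    fraction x dies out. lam and rho hold edge-perspective polynomial
    coefficients, coeffs[i] multiplying x**i."""
    def poly(coeffs, x):
        return sum(c * x ** i for i, c in enumerate(coeffs))
    x = eps
    for _ in range(iters):
        x = eps * poly(lam, 1.0 - poly(rho, 1.0 - x))
        if x < tol:
            return True
    return False
```

    For the (3,6)-regular ensemble (lambda(x) = x^2, rho(x) = x^5) the threshold is about eps = 0.43, so eps = 0.3 converges while eps = 0.5 does not; the paper's LP searches for degree distributions that maximize rate while keeping such a condition satisfied on every subchannel.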

  1. An object-oriented scripting interface to a legacy electronic structure code

    DEFF Research Database (Denmark)

    Bahn, Sune Rastad; Jacobsen, Karsten Wedel

    2002-01-01

    The authors have created an object-oriented scripting interface to a mature density functional theory code. The interface gives users a high-level, flexible handle on the code without rewriting the underlying number-crunching code. The authors also discuss design issues and the advantages...

  2. A user input manual for single fuel rod behaviour analysis code FEMAXI-III

    International Nuclear Information System (INIS)

    Saito, Hiroaki; Yanagisawa, Kazuaki; Fujita, Misao.

    1983-03-01

    The principal objectives of safety-related research on light water reactor fuel rods under normal operating conditions are mainly 1) to assess fuel integrity under steady-state conditions and 2) to generate initial conditions for hypothetical accidents. These assessments rely principally on a steady-state fuel behaviour code that can calculate the various conditions a fuel rod may experience. To achieve these objectives, efforts have been made to develop an analytical computer code that calculates in-reactor fuel rod behaviour in a best-estimate manner. The computer code developed for predicting the long-term burnup response of a single fuel rod under light water reactor conditions is the third in a series of code versions: FEMAXI-III. The code calculates temperature, rod internal gas pressure, fission gas release and pellet-cladding-interaction-related rod deformation as a function of time-dependent fuel rod power and coolant boundary conditions. This document serves as a user input manual for the FEMAXI-III code, which was opened to the public in 1982. A general description of the code input and output is included, together with typical examples of input data. A detailed description of the structures, analytical submodels and solution schemes in the code will be given in a separate document to be published. (author)

  3. Purifying selection acts on coding and non-coding sequences of paralogous genes in Arabidopsis thaliana.

    Science.gov (United States)

    Hoffmann, Robert D; Palmgren, Michael

    2016-06-13

    Whole-genome duplications in the ancestors of many diverse species provided the genetic material for evolutionary novelty. Several models explain the retention of paralogous genes. However, how these models are reflected in the evolution of coding and non-coding sequences of paralogous genes is unknown. Here, we analyzed the coding and non-coding sequences of paralogous genes in Arabidopsis thaliana and compared these sequences with those of orthologous genes in Arabidopsis lyrata. Paralogs with lower expression than their duplicate had more nonsynonymous substitutions, were more likely to fractionate, and exhibited less similar expression patterns with their orthologs in the other species. Also, lower-expressed genes had greater tissue specificity. Orthologous conserved non-coding sequences in the promoters, introns, and 3' untranslated regions were less abundant at lower-expressed genes compared to their higher-expressed paralogs. A gene ontology (GO) term enrichment analysis showed that paralogs with similar expression levels were enriched in GO terms related to ribosomes, whereas paralogs with different expression levels were enriched in terms associated with stress responses. Loss of conserved non-coding sequences in one gene of a paralogous gene pair correlates with reduced expression levels that are more tissue specific. Together with increased mutation rates in the coding sequences, this suggests that similar forces of purifying selection act on coding and non-coding sequences. We propose that coding and non-coding sequences evolve concurrently following gene duplication.

  4. Comparison of WDM/Pulse-Position-Modulation (WDM/PPM) with Code/Pulse-Position-Swapping (C/PPS) Based on Wavelength/Time Codes

    Energy Technology Data Exchange (ETDEWEB)

    Mendez, A J; Hernandez, V J; Gagliardi, R M; Bennett, C V

    2009-06-19

    Pulse position modulation (PPM) signaling is favored in intensity modulated/direct detection (IM/DD) systems that have average power limitations. Combining PPM with WDM over a fiber link (WDM/PPM) enables multiple accessing and increases the link's throughput. Electronic bandwidth and synchronization advantages are further gained by mapping the time slots of PPM onto a code space, or code/pulse-position-swapping (C/PPS). The property of multiple bits per symbol typical of PPM can be combined with multiple accessing by using wavelength/time [W/T] codes in C/PPS. This paper compares the performance of WDM/PPM and C/PPS for equal wavelengths and bandwidth.
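
    Plain M-ary PPM, the starting point for the C/PPS mapping, can be sketched as follows; this shows only the generic bits-to-slot mapping (log2 M bits per symbol), not the paper's wavelength/time code construction:

```python
import math

def ppm_encode(bits, num_slots):
    """Map a bit string to M-ary PPM frames: each group of log2(M) bits
    selects the single 'on' slot in a frame of M slots, which keeps the
    duty cycle low for average-power-limited IM/DD links."""
    k = int(math.log2(num_slots))
    assert num_slots == 2 ** k and len(bits) % k == 0
    frames = []
    for i in range(0, len(bits), k):
        slot = int(bits[i:i + k], 2)  # bit group -> pulse position
        frame = [0] * num_slots
        frame[slot] = 1               # one pulse per frame
        frames.append(frame)
    return frames
```

    For example, ppm_encode("1101", 4) produces two 4-slot frames with pulses in slots 3 and 1. C/PPS then swaps these time-slot positions onto a wavelength/time code space instead of literal slots.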

  5. Evaluation of two-fluid and drift flux thermohydraulics in APROS code environment

    International Nuclear Information System (INIS)

    Miettinen, J.; Karppinen, I.; Haenninen, M.; Ylijoki, J.

    1999-01-01

    The characteristics of the thermohydraulic solutions in APROS are considered for nuclear power plant modelling. The thermohydraulic model of the APROS plant analyzer includes three levels of solution: a homogeneous 3-equation model, a 5-equation drift flux model and a 6-equation two-fluid model. In practical modelling of versatile process systems, different approaches are selected for different types of power plant sections. The 3-equation model is used for turbines and auxiliary systems. The 5-equation and 6-equation models are alternative models for the main process sections of the primary and secondary sides; the 5-equation model has typically been selected for real-time applications and the 6-equation model for safety analysis applications. The validation needs for both approaches are the same. Because changing the solution mode is an easy task in APROS, validation tasks are typically performed in parallel for the 5-equation and 6-equation models; by calculating with both models in parallel, systematic errors in the solutions can be pointed out. Testing against both separate effects tests and integral tests is an essential part of the thermohydraulics. Different physical features are important in different plant applications, and the analysis requirements vary from one application to another. As nodalizations grow finer with increasing computer speed, the earlier validation cases may become insufficient. That is why the content of the code has to be known in detail, and such expertise has to be gained in the code development that the properties of the code relative to other thermohydraulic codes are known. (author)

  6. Application of the integral code MELCOR for German NPPs and use within accident management and PSA projects

    International Nuclear Information System (INIS)

    Sonnenkalb, Martin

    2006-01-01

    The paper summarizes the application of MELCOR to German NPPs with PWR and BWR. Different code systems, such as ATHLET/ATHLET-CD, COCOSYS and ASTEC, are also developed at GRS, but they are not discussed in this paper. GRS has been using MELCOR since 1990 for real plant calculations. The results of MELCOR analyses are used mainly in PSA level 2 studies and in accident management projects for both types of NPPs, and MELCOR has been a very useful and robust tool for these analyses. The calculations performed within the PSA level 2 studies for both types of German NPPs have shown that typical severe accident scenarios are characterized by several phases and that the consideration of plant specifics is important, not only for realistic source term calculations. An overview of typical severe accident phases, together with the main accident management measures installed in German NPPs, is presented in the paper. Several severe accident sequences have been calculated for both reactor types, and some detailed nodalisation studies and code-to-code comparisons have been prepared in the past to prove the developed core, reactor circuit and containment/building nodalisation schemes. Together with the compilation of the MELCOR data set, the qualification of the nodalisation schemes has been pursued with comparative calculations against detailed GRS codes for selected phases of severe accidents. The results of these comparative analyses showed, in most areas, good agreement of the essential parameters and of the general description of the plant behaviour during the accident progression. The generally detailed German plant nodalisation schemes developed for MELCOR contribute significantly to this good agreement between integral and detailed code results. The implementation of MELCOR into the GRS simulator ATLAS was very important for the assessment of the results, not only due to the great detail of the nodalisation schemes used. It is used for training of severe accident

  7. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)
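
    The homogenisation procedure mentioned above is conventionally a flux-volume weighted collapse of region-wise cross sections; a minimal sketch:

```python
def homogenize(flux, volume, sigma):
    """Collapse region-wise cross sections to one homogenised value by
    flux-volume weighting, the standard lattice-code procedure: the
    weighted average preserves the total reaction rate of the cell."""
    num = sum(f * v * s for f, v, s in zip(flux, volume, sigma))
    den = sum(f * v for f, v in zip(flux, volume))
    return num / den
```

    Because the weights are the computed region fluxes, the homogenised value depends on the library data used in the transport solution, which is one source of the multiplication-factor discrepancies the lecture's numerical examples illustrate.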

  8. Streamlining of the RELAP5-3D Code

    International Nuclear Information System (INIS)

    Mesina, George L; Hykes, Joshua; Guillen, Donna Post

    2007-01-01

    The methodology employed follows Dijkstra's structured programming paradigm, which is based on splitting programs into sub-sections, each with single points of entry and exit, and in which control is passed downward through the structure with no unconditional branches to higher levels. GO TO commands are typically avoided, since they alter the flow and control of a program's execution by allowing a jump from one place in the routine to another. The restructuring of RELAP5-3D subroutines is complicated by several issues. The first is the use of code other than standard FORTRAN77. The second is the restructuring limitations of FOR( ) STRUCT. The third is the existence of pre-compiler directives and the complication of nested directives. Techniques were developed to overcome all these difficulties and more, and these are reported. By implementing these developments, all subroutines of RELAP were restructured. Measures of code improvement relative to maintenance and development are presented

  9. Studies of fuel loading pattern optimization for a typical pressurized water reactor (PWR) using improved pivot particle swarm method

    International Nuclear Information System (INIS)

    Liu, Shichang; Cai, Jiejin

    2012-01-01

    Highlights: ► The mathematical model of loading pattern problems for PWRs has been established. ► IPPSO was integrated with ‘donjon’ and ‘dragon’ into a fuel arrangement optimization code. ► The novel method showed high efficiency for LP problems. ► The core effective multiplication factor increases by about 10% in the simulated cases. ► The power peaking factor decreases by about 0.6% in the simulated cases. -- Abstract: An in-core fuel reload design tool using the improved pivot particle swarm method was developed for loading pattern optimization problems in a typical PWR, such as the Daya Bay Nuclear Power Plant. The discrete, multi-objective improved pivot particle swarm optimization was integrated with the in-core physics calculation code ‘donjon’, based on the finite element method, and the assembly group-constant calculation code ‘dragon’, composing the optimization code for fuel arrangement. Both ‘donjon’ and ‘dragon’ were programmed by the Institute of Nuclear Engineering of Polytechnique Montréal, Canada. The optimization code aims to maximize the core effective multiplication factor (Keff) while keeping the local power peaking factor (Ppf) lower than a predetermined value to maintain fuel integrity. Finally, the code was applied to the first cycle loading of the Daya Bay Nuclear Power Plant. The results showed that, compared with the reference loading pattern design, the core effective multiplication factor increased by 9.6%, while the power peaking factor decreased by 0.6%, meeting the safety requirement.
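
    The constraint handling described (maximize Keff subject to a Ppf limit) can be illustrated with a penalized objective and a simple discrete swap search standing in for IPPSO; the evaluator, limit and penalty weight below are hypothetical, and a real run would call the ‘donjon’/‘dragon’ physics codes instead:

```python
import random

def loading_fitness(keff, ppf, ppf_limit=1.8, penalty=10.0):
    """Penalized objective: reward keff, punish peaking-factor
    violations (limit and weight are illustrative placeholders)."""
    return keff - penalty * max(0.0, ppf - ppf_limit)

def swap_search(pattern, evaluate, n_iter=200, seed=0):
    """Discrete local search over assembly swaps, a simple stand-in for
    the paper's IPPSO; `evaluate` maps a pattern to (keff, ppf) and is
    user-supplied (here a mock of the physics evaluation)."""
    rng = random.Random(seed)
    best = list(pattern)
    best_f = loading_fitness(*evaluate(best))
    for _ in range(n_iter):
        cand = list(best)
        i, j = rng.sample(range(len(cand)), 2)
        cand[i], cand[j] = cand[j], cand[i]  # swap two assemblies
        f = loading_fitness(*evaluate(cand))
        if f > best_f:
            best, best_f = cand, f
    return best, best_f
```

    The penalty term is what lets a single scalar search trade off reactivity against the safety constraint, mirroring how the paper keeps Ppf below its predetermined value while pushing Keff up.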

  10. Monte Carlo codes and Monte Carlo simulator program

    International Nuclear Information System (INIS)

    Higuchi, Kenji; Asai, Kiyoshi; Suganuma, Masayuki.

    1990-03-01

    Four typical Monte Carlo codes, KENO-IV, MORSE, MCNP and VIM, have been vectorized on the VP-100 at the Computing Center, JAERI. The problems in vector processing of Monte Carlo codes on vector processors have become clear through this work; as a result, it is recognized that there are difficulties in obtaining good performance when vectorizing Monte Carlo codes. A Monte Carlo computing machine, which processes Monte Carlo codes with high performance, has been under development at our Computing Center since 1987. The concept of the Monte Carlo computing machine and its performance have been investigated and estimated by using a software simulator. In this report, the problems in the vectorization of Monte Carlo codes, the Monte Carlo pipelines proposed to mitigate these difficulties, and the results of the performance estimation of the Monte Carlo computing machine by the simulator are described. (author)

  11. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  12. A parallelization study of the general purpose Monte Carlo code MCNP4 on a distributed memory highly parallel computer

    International Nuclear Information System (INIS)

    Yamazaki, Takao; Fujisaki, Masahide; Okuda, Motoi; Takano, Makoto; Masukawa, Fumihiro; Naito, Yoshitaka

    1993-01-01

    The general purpose Monte Carlo code MCNP4 has been implemented on the Fujitsu AP1000 distributed memory highly parallel computer, and the parallelization techniques developed and studied are reported. A shielding analysis function of the MCNP4 code is parallelized in this study. A technique was applied that maps histories to processors dynamically and maps the control process to a designated processor. The efficiency of the parallelized code reaches 80% for a typical practical problem with 512 processors. These results demonstrate the advantages of a highly parallel computer over conventional computers in the field of shielding analysis by the Monte Carlo method. (orig.)
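
    The dynamic history-to-processor mapping described above can be illustrated with a toy shielding problem. This is a hedged sketch, not MCNP4 or AP1000 code: the random-walk "history", the slab geometry, and all function names are invented for illustration.

```python
import random
from multiprocessing import Pool

def run_history(seed):
    # Toy particle history: a forward-biased 1D random walk through a
    # slab shield of thickness 5; returns 1 if the particle leaks through.
    rng = random.Random(seed)
    x = 0.0
    while 0.0 <= x < 5.0:
        if rng.random() < 0.05:       # absorbed inside the shield
            return 0
        x += rng.uniform(-0.5, 1.0)   # forward-biased scattering step
    return 1 if x >= 5.0 else 0

def leakage_fraction(n_histories, processes=4):
    # Each worker pulls the next chunk of histories as soon as it is free,
    # mimicking the dynamic history-to-processor mapping described above.
    with Pool(processes) as pool:
        tallies = pool.imap_unordered(run_history, range(n_histories),
                                      chunksize=64)
        return sum(tallies) / n_histories
```

    On platforms that spawn worker processes, `leakage_fraction` must be called under an `if __name__ == "__main__":` guard.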

  13. Benchmark problems for radiological assessment codes. Final report

    International Nuclear Information System (INIS)

    Mills, M.; Vogt, D.; Mann, B.

    1983-09-01

    This report describes benchmark problems to test computer codes used in the radiological assessment of high-level waste repositories. The problems presented in this report test two types of codes. The first type calculates the time-dependent heat generation and radionuclide inventory associated with a high-level waste package; five problems have been specified for this code type. The second code type addressed in this report involves the calculation of radionuclide transport and dose-to-man. For these codes, a comprehensive problem and two subproblems have been designed to test the relevant capabilities for assessing a high-level waste repository setting.
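
    The first code type, time-dependent heat generation and radionuclide inventory, reduces to solving the Bateman equations. Below is a minimal sketch for a two-member decay chain; it is illustrative only (the benchmark problems themselves involve full decay chains and tabulated nuclear data), and the function names are invented.

```python
import math

def bateman_two_chain(n1_0, lam1, lam2, t):
    # Inventory of a two-member chain A -> B -> (stable), starting from
    # pure A: the classic Bateman solution with decay constants lam1, lam2.
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

def decay_heat(inventories, lambdas, q_per_decay):
    # Total heat generation P = sum_i lambda_i * N_i * Q_i.
    return sum(l * n * q for n, l, q in zip(inventories, lambdas, q_per_decay))
```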

  14. Selection of a computer code for Hanford low-level waste engineered-system performance assessment. Revision 1

    International Nuclear Information System (INIS)

    McGrail, B.P.; Bacon, D.H.

    1998-02-01

    Planned performance assessments for the proposed disposal of low-activity waste (LAW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington, will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. The available computer codes with suitable capabilities at the time Revision 0 of this document was prepared were ranked in terms of the feature sets implemented in each code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LAW glass corrosion and the mobility of radionuclides. This analysis was repeated in this report, updated to include additional processes found to be important since Revision 0 was issued and additional codes that have been released. The highest ranked computer code was found to be the STORM code, developed at PNNL for the US Department of Energy for evaluation of arid land disposal sites.

  15. SWAAM-LT: The long-term, sodium/water reaction analysis method computer code

    International Nuclear Information System (INIS)

    Shin, Y.W.; Chung, H.H.; Wiedermann, A.H.; Tanabe, H.

    1993-01-01

    The SWAAM-LT code, developed for analysis of the long-term effects of sodium/water reactions, is discussed. The theoretical formulation of the code is described, including the introduction of system matrices for ease of computer programming as a general system code. Some typical code predictions for available large scale tests are also presented. Test data for steam generator designs with and without the cover-gas feature are available and analyzed. The capabilities and limitations of the code are then discussed in light of the comparison between the code predictions and the test data.

  16. Potential of coded excitation in medical ultrasound imaging

    DEFF Research Database (Denmark)

    Misaridis, Athanasios; Gammelmark, Kim; Jørgensen, C. H.

    2000-01-01

    Improvement in SNR and/or penetration depth can be achieved in medical ultrasound by using long coded waveforms, in a similar manner as in radars or sonars. However, the time-bandwidth product (TB) improvement, and thereby SNR improvement, is considerably lower in medical ultrasound, due...... codes have a larger bandwidth than the transducer in a typical medical ultrasound system can drive, a more careful code design has been proven essential. Simulation results are also presented for comparison. This paper presents an improved non-linear FM signal appropriate for ultrasonic applications. The new...... coded waveform exhibits distinct features that make it very attractive in the implementation of coded ultrasound systems. The range resolution that can be achieved is comparable to that of a conventional system, depending on the transducer's bandwidth, and can even be better for broad-band transducers...
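
    The SNR gain of coded excitation comes from pulse compression of a long, large time-bandwidth-product waveform with a matched filter. The sketch below uses a linear FM chirp for simplicity (the paper's improved waveform is a non-linear FM design); all parameters and names are invented for illustration.

```python
import numpy as np

def lfm_chirp(f0, f1, duration, fs):
    # Linear FM (chirp) excitation sweeping f0 -> f1 Hz over `duration` s.
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration                  # sweep rate, Hz/s
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

def compress(rx, code):
    # Matched-filter (pulse-compression) of the received trace.
    return np.correlate(rx, code, mode="same")

fs = 40e6
code = lfm_chirp(2e6, 8e6, 20e-6, fs)         # TB = 6 MHz * 20 us = 120
rx = np.zeros(4000)
rx[1000:1000 + code.size] += code             # echo starting at sample 1000
out = np.abs(compress(rx, code))              # compressed peak at echo center
```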

  17. Thermal-hydraulic calculations for a fuel assembly in a European Pressurized Reactor using the RELAP5 code

    Directory of Open Access Journals (Sweden)

    Skrzypek Maciej

    2015-09-01

    Full Text Available The main object of interest was a typical fuel assembly, which constitutes the core of a nuclear reactor. The aim of the paper is to describe the phenomena and calculate the characteristic thermal-hydraulic parameters of a fuel assembly of a European Pressurized Reactor (EPR). The RELAP5 (Reactor Excursion and Leak Analysis Program) code was used to perform the thermal-hydraulic calculations. This code allows the simulation of steady and transient states for reactor applications, and is also an appropriate calculation tool in the event of a loss-of-coolant accident in light water reactors. The fuel assembly model with its nodalization in the RELAP5 code is presented. Calculations were performed for two steady states of the fuel assembly: nominal steady-state conditions, and coolant flow rate decreased to 60% of the nominal EPR flow rate. One transient, with a linearly decreasing coolant flow rate, was simulated until a new level stabilized and SCRAM occurred. To check the correctness of the obtained results, the authors compared them against the reactor technical documentation available in the bibliography. The results for the steady states nearly match the design data. The hypothetical transient showed the importance of correct cooling in the reactor during occurrences exceeding normal operation. The performed analysis indicated the consequences of coolant flow rate limitations during reactor operation.

  18. Radiological analyses of intermediate and low level supercompacted waste drums by VQAD code

    International Nuclear Information System (INIS)

    Bace, M.; Trontl, K.; Gergeta, K.

    2004-01-01

    In order to increase the possibilities of the QAD-CGGP code, as well as to make the code more user friendly, modifications of the code have been performed. A general multisource option has been introduced into the code and a user friendly environment has been created through a Graphical User Interface. The improved version of the code has been used to calculate gamma dose rates of a single supercompacted waste drum and a pair of supercompacted waste drums. The results of the calculation were compared with the standard QAD-CGGP results. (author)

  19. Job coding (PCS 2003): feedback from a study conducted in an Occupational Health Service

    Science.gov (United States)

    Henrotin, Jean-Bernard; Vaissière, Monique; Etaix, Maryline; Malard, Stéphane; Dziurla, Mathieu; Lafon, Dominique

    2016-10-19

    Aim: To examine the quality of manual job coding carried out by occupational health teams with access to a software application that provides assistance in job and business sector coding (CAPS). Methods: Data from a study conducted in an Occupational Health Service were used to examine the first-level coding of 1,495 jobs by occupational health teams according to the French job classification entitled “PCS - Professions and socio-professional categories” (INSEE, 2003 version). A second level of coding was also performed by an experienced coder, and the first and second level codes were compared. Agreement between the two coding systems was studied using the kappa coefficient (κ) and frequencies were compared by chi-squared tests. Results: Missing data or incorrect codes were observed for 14.5% of social groups (1 digit) and 25.7% of job codes (4 digits). While agreement between the first two levels of PCS 2003 appeared to be satisfactory (κ=0.73 and κ=0.75), imbalances in reassignment flows were noted. The divergent job code rate was 48.2%. Variation in the frequency of socio-occupational variables was as high as 8.6% after correcting for missing data and divergent codes. Conclusions: Compared with other studies, the use of the CAPS tool appeared to provide effective coding assistance. However, our results indicate that job coding based on PCS 2003 should be conducted using ancillary data by personnel trained in the use of this tool.
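
    The kappa coefficient used above to quantify agreement between the two coding levels can be computed as follows. This is a standard Cohen's kappa sketch, not the study's actual analysis code.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    # Chance-corrected agreement between two coders' label sequences:
    # kappa = (p_obs - p_exp) / (1 - p_exp).
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    fa, fb = Counter(codes_a), Counter(codes_b)
    # Expected agreement by chance, from each coder's marginal frequencies.
    p_exp = sum(fa[c] * fb.get(c, 0) for c in fa) / n**2
    return (p_obs - p_exp) / (1 - p_exp)
```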

  20. Mean of the typical decoding rates: a new translation efficiency index based on the analysis of ribosome profiling data.

    Science.gov (United States)

    Dana, Alexandra; Tuller, Tamir

    2014-12-01

    Gene translation modeling and prediction is a fundamental problem that has numerous biomedical implementations. In this work we present a novel, user-friendly tool/index for calculating the mean of the typical decoding rates that enables predicting translation elongation efficiency of protein coding genes for different tissue types, developmental stages, and experimental conditions. The suggested translation efficiency index is based on the analysis of the organism's ribosome profiling data. This index could be used for example to predict changes in translation elongation efficiency of lowly expressed genes that usually have relatively low and/or biased ribosomal densities and protein levels measurements, or can be used for example for predicting translation efficiency of new genetically engineered genes. We demonstrate the usability of this index via the analysis of six organisms in different tissues and developmental stages. Distributable cross platform application and guideline are available for download at: http://www.cs.tau.ac.il/~tamirtul/MTDR/MTDR_Install.html. Copyright © 2015 Dana and Tuller.
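
    As a rough illustration of such an index, per-codon decoding rates can be taken as reciprocals of normalized footprint densities and averaged. This sketch replaces the paper's mixture-model extraction of "typical" dwell times with simple trimming of extreme rates; the function name and trimming fraction are invented, and this is not the published MTDR implementation.

```python
import math

def mtdr(footprint_counts, trim=0.05):
    # Illustrative "mean typical decoding rate": rates are reciprocals of
    # normalized ribosome-footprint densities; the most extreme rates are
    # trimmed (a crude stand-in for the mixture-model separation of
    # typical codons) and the geometric mean of the rest is returned.
    mean_count = sum(footprint_counts) / len(footprint_counts)
    densities = [c / mean_count for c in footprint_counts if c > 0]
    rates = sorted(1.0 / d for d in densities)
    k = int(len(rates) * trim)
    kept = rates[k:len(rates) - k] if k else rates
    return math.exp(sum(math.log(r) for r in kept) / len(kept))
```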

  1. A Novel Real-coded Quantum-inspired Genetic Algorithm and Its Application in Data Reconciliation

    Directory of Open Access Journals (Sweden)

    Gao Lin

    2012-06-01

    Full Text Available Traditional quantum-inspired genetic algorithm (QGA has drawbacks such as premature convergence, heavy computational cost, complicated coding and decoding process etc. In this paper, a novel real-coded quantum-inspired genetic algorithm is proposed based on interval division thinking. Detailed comparisons with some similar approaches for some standard benchmark functions test validity of the proposed algorithm. Besides, the proposed algorithm is used in two typical nonlinear data reconciliation problems (distilling process and extraction process and simulation results show its efficiency in nonlinear data reconciliation problems.

  2. Duals of Affine Grassmann Codes and Their Relatives

    DEFF Research Database (Denmark)

    Beelen, P.; Ghorpade, S. R.; Hoholdt, T.

    2012-01-01

    Affine Grassmann codes are a variant of generalized Reed-Muller codes and are closely related to Grassmann codes. These codes were introduced in a recent work by Beelen et al. Here, we consider, more generally, affine Grassmann codes of a given level. We explicitly determine the dual of an affine...... Grassmann code of any level and compute its minimum distance. Further, we improve upon the results of Beelen et al. concerning the automorphism group of affine Grassmann codes. Finally, we prove that affine Grassmann codes and their duals have the property that they are linear codes generated by their minimum......-weight codewords. This provides a clean analogue of a corresponding result for generalized Reed-Muller codes....

  3. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of “1101” and “1011,” which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, they became significantly small. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as evaluating information flow in the neuronal network.
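
    A normalized maximum cross-correlation between binned spike trains of neighboring electrodes can be sketched as follows. This is an illustrative reimplementation, not the authors' code; inputs are assumed to be numpy float arrays of equal length, and the function name and lag window are invented.

```python
import numpy as np

def normalized_max_xcorr(train_a, train_b, max_lag=20):
    # Maximum of the normalized cross-correlation between two binned
    # spike trains over lags up to max_lag bins; the argmax lag suggests
    # the direction of code flow between neighboring electrodes.
    a = (train_a - train_a.mean()) / (train_a.std() * len(train_a))
    b = (train_b - train_b.mean()) / train_b.std()
    full = np.correlate(a, b, mode="full")
    mid = len(full) // 2                      # zero-lag index
    window = full[mid - max_lag: mid + max_lag + 1]
    best_lag = int(np.argmax(window)) - max_lag
    return float(window.max()), best_lag
```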

  4. An approach for coupled-code multiphysics core simulations from a common input

    International Nuclear Information System (INIS)

    Schmidt, Rodney; Belcourt, Kenneth; Hooper, Russell; Pawlowski, Roger; Clarno, Kevin; Simunovic, Srdjan; Slattery, Stuart; Turner, John; Palmtag, Scott

    2015-01-01

    Highlights: • We describe an approach for coupled-code multiphysics reactor core simulations. • The approach can enable tight coupling of distinct physics codes with a common input. • Multi-code multiphysics coupling and parallel data transfer issues are explained. • The common input approach and how the information is processed is described. • Capabilities are demonstrated on an eigenvalue and power distribution calculation. - Abstract: This paper describes an approach for coupled-code multiphysics reactor core simulations that is being developed by the Virtual Environment for Reactor Applications (VERA) project in the Consortium for Advanced Simulation of Light-Water Reactors (CASL). In this approach a user creates a single problem description, called the “VERAIn” common input file, to define and setup the desired coupled-code reactor core simulation. A preprocessing step accepts the VERAIn file and generates a set of fully consistent input files for the different physics codes being coupled. The problem is then solved using a single-executable coupled-code simulation tool applicable to the problem, which is built using VERA infrastructure software tools and the set of physics codes required for the problem of interest. The approach is demonstrated by performing an eigenvalue and power distribution calculation of a typical three-dimensional 17 × 17 assembly with thermal–hydraulic and fuel temperature feedback. All neutronics aspects of the problem (cross-section calculation, neutron transport, power release) are solved using the Insilico code suite and are fully coupled to a thermal–hydraulic analysis calculated by the Cobra-TF (CTF) code. The single-executable coupled-code (Insilico-CTF) simulation tool is created using several VERA tools, including LIME (Lightweight Integrating Multiphysics Environment for coupling codes), DTK (Data Transfer Kit), Trilinos, and TriBITS. Parallel calculations are performed on the Titan supercomputer at Oak Ridge National Laboratory.

  5. Psacoin level S intercomparison: An International code intercomparison exercise on a hypothetical safety assessment case study for radioactive waste disposal systems

    International Nuclear Information System (INIS)

    1993-06-01

    This report documents the Level S exercise of the Probabilistic System Assessment Group (PSAG). Level S is the fifth in a series of Probabilistic Code Intercomparison (PSACOIN) exercises designed to contribute to the verification of probabilistic codes and methodologies that may be used in assessing the safety of radioactive waste disposal systems and concepts. The focus of the Level S exercise lies on sensitivity analysis. Given a common data set of model output and input values the participants were asked to identify both the underlying model's most important parameters (deterministic sensitivity analysis) and the link between the distributions of the input and output values (distribution sensitivity analysis). Agreement was generally found where it was expected and the exercise has achieved its objectives in acting as a focus for testing and discussing sensitivity analysis issues. Among the outstanding issues that have been identified are: (i) that techniques for distribution sensitivity analysis are needed that avoid the problem of statistical noise; (ii) that further investigations are warranted on the most appropriate way of handling large numbers of effectively zero results generated by Monte Carlo sampling; and (iii) that methods need to be developed for demonstrating that the results of sensitivity analysis are indeed correct
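
    A common deterministic sensitivity measure of the kind compared in Level S is the rank correlation between each sampled input parameter and the model output. A minimal numpy sketch follows; it is illustrative only, not any participant's code, and the function names are invented.

```python
import numpy as np

def rank(x):
    # Ranks 0..n-1 of the values in x (no tie handling, for simplicity).
    order = np.argsort(x)
    r = np.empty(len(x))
    r[order] = np.arange(len(x))
    return r

def spearman_sensitivity(samples, output):
    # Rank (Spearman) correlation between each input column and the
    # output: a standard deterministic sensitivity screen for Monte
    # Carlo assessment codes. samples: (n, p) array; output: (n,) array.
    ro = rank(output)
    ro = (ro - ro.mean()) / ro.std()
    sens = []
    for j in range(samples.shape[1]):
        rj = rank(samples[:, j])
        rj = (rj - rj.mean()) / rj.std()
        sens.append(float(np.mean(rj * ro)))
    return sens
```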

  6. OPR1000 RCP Flow Coastdown Analysis using SPACE Code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong-Hyuk; Kim, Seyun [KHNP CRI, Daejeon (Korea, Republic of)

    2016-10-15

    The Korean nuclear industry developed a thermal-hydraulic analysis code for the safety analysis of PWRs, named SPACE (Safety and Performance Analysis Code for Nuclear Power Plant). The current loss of flow transient analysis of OPR1000 uses the COAST code to calculate the transient RCS (Reactor Coolant System) flow. The COAST code calculates RCS loop flow using pump performance curves and RCP (Reactor Coolant Pump) inertia. In this paper, the SPACE code is used to reproduce the RCS flowrates calculated by the COAST code. A loss of flow transient is a transient initiated by reduction of forced reactor coolant circulation; typical loss of flow transients are complete loss of flow (CLOF) and locked rotor (LR). OPR1000 RCP flow coastdown analysis was performed with SPACE using a simplified nodalization, and a complete loss of flow (4 RCP trip) was analyzed. The results show good agreement with those from the COAST code, the CE code for calculating RCS flow during loss of flow transients. Through this study, we confirmed that the SPACE code can be used instead of the COAST code for RCP flow coastdown analysis.
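
    Pump coastdown of the kind computed here is governed by the inertia equation I·dω/dt = −T(ω), with hydraulic torque roughly proportional to ω², and loop flow tracking pump speed. The Euler-integration sketch below is illustrative only (not the COAST or SPACE models); the analytic solution is ω(t) = ω0/(1 + t/τ) with τ = I/(k·ω0).

```python
def coastdown(flow0, inertia, torque_coeff, dt=0.01, t_end=20.0):
    # Euler integration of I dw/dt = -k*w**2; loop flow is taken
    # proportional to pump speed w, so flow0 doubles as initial speed.
    w = flow0
    history = [w]
    for _ in range(int(t_end / dt)):
        w += dt * (-torque_coeff * w * w / inertia)
        history.append(w)
    return history
```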

  7. A method for modeling co-occurrence propensity of clinical codes with application to ICD-10-PCS auto-coding.

    Science.gov (United States)

    Subotin, Michael; Davis, Anthony R

    2016-09-01

    Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
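
    The iterative rescoring step can be sketched as repeatedly blending each code's auto-coder confidence with co-occurrence support from the other currently likely codes. The blending rule, default, and weight below are invented for illustration; this is not the authors' algorithm.

```python
def rescore(scores, cond_prob, rounds=3, weight=0.5):
    # scores:    {code: confidence} from the primary auto-coder.
    # cond_prob: {(code, other): P(code assigned | other assigned)}.
    current = dict(scores)
    for _ in range(rounds):
        updated = {}
        for code, p in current.items():
            partners = [(o, q) for o, q in current.items() if o != code]
            if partners:
                # Co-occurrence support, weighted by current belief in
                # each partner code; unknown pairs default to p itself.
                support = sum(q * cond_prob.get((code, o), p)
                              for o, q in partners)
                support /= sum(q for _, q in partners)
            else:
                support = p
            updated[code] = (1 - weight) * p + weight * support
        current = updated
    return current
```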

  8. Towers of generalized divisible quantum codes

    Science.gov (United States)

    Haah, Jeongwan

    2018-04-01

    A divisible binary classical code is one in which every code word has weight divisible by a fixed integer. If the divisor is 2^ν for a positive integer ν, then one can construct a Calderbank-Shor-Steane (CSS) code, where the X-stabilizer space is the divisible classical code, that admits a transversal gate in the νth level of the Clifford hierarchy. We consider a generalization of the divisibility by allowing a coefficient vector of odd integers with which every code word has zero dot product modulo the divisor. In this generalized sense, we construct a CSS code with divisor 2^(ν+1) and code distance d from any CSS code of code distance d and divisor 2^ν where the transversal X is a nontrivial logical operator. The encoding rate of the new code is approximately d times smaller than that of the old code. In particular, for large d and ν ≥ 2, our construction yields a CSS code of parameters [[O(d^(ν−1)), Ω(d), d]] admitting a transversal gate at the νth level of the Clifford hierarchy. For our construction we introduce a conversion from magic state distillation protocols based on Clifford measurements to those based on codes with transversal T gates. Our tower contains, as a subclass, generalized triply even CSS codes that have appeared in so-called gauge fixing or code switching methods.
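
    Ordinary (non-generalized) divisibility is easy to verify by brute force for small classical codes. For example, the [8,4] Reed-Muller code RM(1,3) has all nonzero codeword weights divisible by 2² = 4. The sketch below is illustrative; the function names are invented.

```python
from itertools import product

def codewords(gen_rows):
    # All codewords of the binary linear code spanned by gen_rows
    # (each row a tuple of 0/1), generated by brute-force combination.
    n = len(gen_rows[0])
    words = set()
    for coeffs in product((0, 1), repeat=len(gen_rows)):
        w = tuple(sum(c * r[i] for c, r in zip(coeffs, gen_rows)) % 2
                  for i in range(n))
        words.add(w)
    return words

def divisor_exponent(gen_rows):
    # Largest nu such that every nonzero codeword weight is divisible
    # by 2**nu -- the quantity controlling the Clifford-hierarchy level
    # of the transversal gate in the abstract above.
    weights = [sum(w) for w in codewords(gen_rows) if any(w)]
    nu = 0
    while all(wt % 2 ** (nu + 1) == 0 for wt in weights):
        nu += 1
    return nu
```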

  9. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
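
    The dose rates quoted at various distances from sealed point sources follow the inverse-square law, D = Γ·A/r². A minimal sketch of that calculation follows; the function name and example numbers are illustrative, not values taken from the Code of Practice.

```python
def dose_rate(gamma_constant, activity_mbq, distance_m):
    # Point-source dose rate from the inverse-square law:
    #   D = Gamma * A / r**2,
    # with Gamma in uSv*h^-1*MBq^-1*m^2, A in MBq, r in metres.
    return gamma_constant * activity_mbq / distance_m ** 2
```

    Doubling the distance from the source quarters the dose rate, which is the shielding-free part of the simplified calculations the Code demonstrates.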

  10. Development of a thermal-hydraulic code for reflood analysis in a PWR experimental loop

    International Nuclear Information System (INIS)

    Alves, Sabrina P.; Mesquita, Amir Z.; Rezende, Hugo C.; Palma, Daniel A.P.

    2017-01-01

    A process of fundamental importance in the event of a Loss of Coolant Accident (LOCA) in Pressurized Water Reactors (PWR) is the reflood of the core, or rewetting of the nuclear fuel. Since the 1970s, the Nuclear Technology Development Center (CDTN) has been developing programs to allow Brazil to become independent in the field of reactor safety analysis. To that end, in the 1980s a Rewetting Test Facility (ITR in Portuguese) was designed, assembled and commissioned. This facility aims to investigate the thermal-hydraulic phenomena involved in the reflood phase of a Loss of Coolant Accident in a PWR. The aim of this work is the analysis of the physical and mathematical models governing the rewetting phenomenon, and the development of a thermo-hydraulic simulation code for an experimental circuit representative of PWR core cooling channels. A code called REWET was elaborated and developed. The results obtained with REWET were compared with the experimental results of the ITR and with the results of the Hydroflut code, the program previously used. The evolution of the wall temperature of the test section, as well as the evolution of the rewetting front, was analyzed for two typical tests using both calculation codes and the experimental results. The rewetting time simulated by the REWET code also came closer to the experimental results than that calculated by the Hydroflut code. (author)

  11. Development of a thermal-hydraulic code for reflood analysis in a PWR experimental loop

    Energy Technology Data Exchange (ETDEWEB)

    Alves, Sabrina P.; Mesquita, Amir Z.; Rezende, Hugo C., E-mail: sabrinapral@gmail.com, E-mail: amir@cdtn.brm, E-mail: hcr@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil); Palma, Daniel A.P., E-mail: dapalma@cnen.gov.br [Comissão Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    A process of fundamental importance in the event of a Loss of Coolant Accident (LOCA) in Pressurized Water Reactors (PWR) is the reflood of the core, or rewetting of the nuclear fuel. Since the 1970s, the Nuclear Technology Development Center (CDTN) has been developing programs to allow Brazil to become independent in the field of reactor safety analysis. To that end, in the 1980s a Rewetting Test Facility (ITR in Portuguese) was designed, assembled and commissioned. This facility aims to investigate the thermal-hydraulic phenomena involved in the reflood phase of a Loss of Coolant Accident in a PWR. The aim of this work is the analysis of the physical and mathematical models governing the rewetting phenomenon, and the development of a thermo-hydraulic simulation code for an experimental circuit representative of PWR core cooling channels. A code called REWET was elaborated and developed. The results obtained with REWET were compared with the experimental results of the ITR and with the results of the Hydroflut code, the program previously used. The evolution of the wall temperature of the test section, as well as the evolution of the rewetting front, was analyzed for two typical tests using both calculation codes and the experimental results. The rewetting time simulated by the REWET code also came closer to the experimental results than that calculated by the Hydroflut code. (author)

  12. Theory of Mind experience sampling in typical adults.

    Science.gov (United States)

    Bryant, Lauren; Coffey, Anna; Povinelli, Daniel J; Pruett, John R

    2013-09-01

    We explored the frequency with which typical adults make Theory of Mind (ToM) attributions, and under what circumstances these attributions occur. We used an experience sampling method to query 30 typical adults about their everyday thoughts. Participants carried a Personal Data Assistant (PDA) that prompted them to categorize their thoughts as Action, Mental State, or Miscellaneous at approximately 30 pseudo-random times during a continuous 10-h period. Additionally, participants noted the direction of their thought (self versus other) and degree of socializing (with people versus alone) at the time of inquiry. We were interested in the relative frequency of ToM (mental state attributions) and how prominent they were in immediate social exchanges. Analyses of multiple choice answers suggest that typical adults: (1) spend more time thinking about actions than mental states and miscellaneous things, (2) exhibit a higher degree of own- versus other-directed thought when alone, and (3) make mental state attributions more frequently when not interacting (offline) than while interacting with others (online). A significant 3-way interaction between thought type, direction of thought, and socializing emerged because action but not mental state thoughts about others occurred more frequently when participants were interacting with people versus when alone; whereas there was an increase in the frequency of both action and mental state attributions about the self when participants were alone as opposed to socializing. A secondary analysis of coded free text responses supports findings 1-3. The results of this study help to create a more naturalistic picture of ToM use in everyday life and the method shows promise for future study of typical and atypical thought processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Theoretical Atomic Physics code development IV: LINES, A code for computing atomic line spectra

    International Nuclear Information System (INIS)

    Abdallah, J. Jr.; Clark, R.E.H.

    1988-12-01

    A new computer program, LINES, has been developed for simulating atomic line emission and absorption spectra using the accurate fine structure energy levels and transition strengths calculated by the (CATS) Cowan Atomic Structure code. Population distributions for the ion stages are obtained in LINES by using the Local Thermodynamic Equilibrium (LTE) model. LINES is also useful for displaying the pertinent atomic data generated by CATS. This report describes the use of LINES. Both CATS and LINES are part of the Theoretical Atomic PhysicS (TAPS) code development effort at Los Alamos. 11 refs., 9 figs., 1 tab

  14. Comparing Levels of Mastery Motivation in Children with Cerebral Palsy (CP) and Typically Developing Children.

    Science.gov (United States)

    Salavati, Mahyar; Vameghi, Roshanak; Hosseini, Seyed Ali; Saeedi, Ahmad; Gharib, Masoud

    2018-02-01

    The present study aimed to compare mastery motivation in school-age children with CP and typically developing children. 229 parents of children with cerebral palsy and 212 parents of typically developing children participated in the present cross-sectional study and completed demographic and DMQ18 forms. The remaining information was measured by an occupational therapist. The average age was 127.12±24.56 months for children with cerebral palsy (CP) and 128.08±15.90 months for typically developing children. An independent t-test was used to compare the two groups, and Pearson correlation coefficients, computed with SPSS software, were used to study correlations with other factors. There were differences between the CP and typically developing groups on all DMQ subscales. Mastery motivation correlated negatively with Manual Ability Classification System level (r=-0.782, P<0.001) and with cognitive impairment (r=-0.161, P<0.05). Children with CP had lower mastery motivation than typically developing children. Rehabilitation efforts should be made to enhance motivation, so that children feel empowered to perform tasks or practice.

  15. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off......-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel...... coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...
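
    The simplest instance of adding coded redundancy across servers is a single XOR parity frame, which lets the share held by any one failed server be rebuilt from the others. This toy sketch illustrates the diversity-via-coding idea only; it is not the letter's algebraic scheme for in-network uplink channel decoding, and the function names are invented.

```python
def encode_with_parity(frames):
    # Append one XOR parity frame so that any single lost frame is
    # recoverable; frames are equal-length byte strings, one per server.
    parity = bytes(frames[0])
    for f in frames[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, f))
    return list(frames) + [parity]

def recover(frames_with_one_none):
    # Rebuild the single missing frame (marked None) as the XOR of all
    # surviving frames, parity included.
    missing = frames_with_one_none.index(None)
    acc = None
    for i, f in enumerate(frames_with_one_none):
        if i == missing:
            continue
        acc = f if acc is None else bytes(a ^ b for a, b in zip(acc, f))
    return acc
```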

  16. Material characteristics and construction methods for a typical research reactor concrete containment in Iran

    International Nuclear Information System (INIS)

    Ebrahimia, Mahsa; Suha, Kune Y.; Eghbalic, Rahman; Jahan, Farzaneh Asadi malek

    2012-01-01

    Selecting an appropriate material and construction method for a concrete containment, given its function and special geometry, plays an important role in the applicability and in reducing the construction cost and duration of a research reactor (RR) project. The reactor containment enclosing the reactor vessel comprises physical barriers reflecting the safety design and construction codes, regulations and standards, so as to protect the community and the environment from uncontrolled release of radioactive materials. It is the third and last barrier against radioactivity release. It also protects the reactor vessel from external events such as earthquake and aircraft crash. Thus, it should be designed and constructed to withstand dead and live loads, ground and seismic loads, missile and aircraft loads, and thermal and shrinkage loads. This study aims to present a construction method for the concrete containment of a typical RR in Iran. The work also presents acceptable characteristics for the concrete and reinforcing rebar of a typical concrete containment. The study evaluated the various types of RR containments, and the most suitable type was selected in accordance with the current knowledge and technology of Iran.

  17. Material characteristics and construction methods for a typical research reactor concrete containment in Iran

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimia, Mahsa; Suha, Kune Y. [Seoul National Univ., Seoul (Korea, Republic of); Eghbalic, Rahman; Jahan, Farzaneh Asadi malek [School of Architecture and Urbanism, Qazvin (Iran, Islamic Republic of)

    2012-10-15

    Selecting an appropriate material and construction method for a concrete containment, given its function and special geometry, plays an important role in the applicability and in reducing the construction cost and duration of a research reactor (RR) project. The reactor containment enclosing the reactor vessel comprises physical barriers reflecting the safety design and construction codes, regulations and standards, so as to protect the community and the environment from uncontrolled release of radioactive materials. It is the third and last barrier against radioactivity release. It also protects the reactor vessel from external events such as earthquake and aircraft crash. Thus, it should be designed and constructed to withstand dead and live loads, ground and seismic loads, missile and aircraft loads, and thermal and shrinkage loads. This study aims to present a construction method for the concrete containment of a typical RR in Iran. The work also presents acceptable characteristics for the concrete and reinforcing rebar of a typical concrete containment. The study evaluated the various types of RR containments, and the most suitable type was selected in accordance with the current knowledge and technology of Iran.

  18. MARS 1.3 system analysis code coupling with CONTEMPT4/MOD5/PCCS containment analysis code using dynamic link library

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Lee, Won Jae

    1998-01-01

    The two independent codes MARS 1.3 and CONTEMPT4/MOD5/PCCS have been coupled using the dynamic link library (DLL) technique. The overall configuration of the code system is designed so that MARS is the main driver program, which uses CONTEMPT as associated routines. Using the Digital Visual Fortran compiler, a DLL was generated from the CONTEMPT source code with the interfacing routine names and arguments. Coupling of MARS with CONTEMPT was realized by calling the DLL routines at the appropriate steps in the MARS code. Verification of the coupling was carried out for an LBLOCA transient of a typical plant design. It was found that the DLL technique is much more convenient than UNIX process control techniques and is effective for the Windows operating system. Since a DLL can be used by more than one application and an application program can use many DLLs simultaneously, this technique enables existing codes to be used more broadly by linking them with others.
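    The driver/DLL coupling pattern described above can be sketched in a few lines. In the real system the containment step lives in a Fortran DLL loaded at run time (e.g. via `LoadLibrary`, or `ctypes.CDLL` from Python); here a plain Python function stands in for the DLL routine so the control flow is runnable. The names and the toy physics are hypothetical, not the actual MARS/CONTEMPT interface.

    ```python
    def contempt_step(p_containment, m_dot, dt):
        """Stand-in for the containment DLL routine: returns updated pressure."""
        # toy model: containment pressure rises in proportion to break mass flow
        return p_containment + 50.0 * m_dot * dt

    def mars_driver(n_steps, dt):
        """System-code driver: computes break flow, calls the 'DLL' each step."""
        p = 101325.0                                  # initial containment pressure [Pa]
        history = []
        for _ in range(n_steps):
            m_dot = max(0.0, (7.0e6 - p) * 1.0e-6)    # toy break mass flow [kg/s]
            p = contempt_step(p, m_dot, dt)           # call into the "DLL"
            history.append(p)
        return history

    hist = mars_driver(n_steps=10, dt=0.1)
    assert hist[-1] > hist[0]    # containment pressure rises during the blowdown
    ```

    The point of the pattern is that the driver owns the time loop while the library owns one physics step, so either side can be swapped without relinking the other.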

  19. MARS 1.3 system analysis code coupling with CONTEMPT4/MOD5/PCCS containment analysis code using dynamic link library

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bub Dong; Jeong, Jae Jun; Lee, Won Jae [KAERI, Taejon (Korea, Republic of)

    1998-10-01

    The two independent codes MARS 1.3 and CONTEMPT4/MOD5/PCCS have been coupled using the dynamic link library (DLL) technique. The overall configuration of the code system is designed so that MARS is the main driver program, which uses CONTEMPT as associated routines. Using the Digital Visual Fortran compiler, a DLL was generated from the CONTEMPT source code with the interfacing routine names and arguments. Coupling of MARS with CONTEMPT was realized by calling the DLL routines at the appropriate steps in the MARS code. Verification of the coupling was carried out for an LBLOCA transient of a typical plant design. It was found that the DLL technique is much more convenient than UNIX process control techniques and is effective for the Windows operating system. Since a DLL can be used by more than one application and an application program can use many DLLs simultaneously, this technique enables existing codes to be used more broadly by linking them with others.

  20. CALMAR: A New Versatile Code Library for Adjustment from Measurements

    Directory of Open Access Journals (Sweden)

    Grégoire G.

    2016-01-01

    CALMAR, a new library for adjustment, has been developed. This code performs simultaneous shape and level adjustment of an initial prior spectrum from measured reaction rates of activation foils. It is written in C++ using the ROOT data analysis framework, with all its linear algebra classes. The STAYSL code has also been reimplemented in this library. Use of the code is very flexible: stand-alone, inside a C++ code, or driven by scripts. Validation and test cases are in progress; these cases will be included in the code package that will be made available to the community. Future developments are discussed. The code should support the new Generalized Nuclear Data (GND) format, which has many advantages compared to ENDF.
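    The adjustment described here is typically a STAYSL-type generalized least-squares update, in which the prior group spectrum is pulled toward the measured foil reaction rates according to the prior and measurement covariances. The sketch below assumes that formulation (the abstract does not spell out CALMAR's equations) and uses two energy groups, two foils, and illustrative numbers.

    ```python
    # Generalized least-squares spectrum adjustment:
    #   phi' = phi + C_phi R^T (R C_phi R^T + C_m)^(-1) (m - R phi)
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]

    def matvec(m, v):
        return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

    def transpose(m):
        return [list(col) for col in zip(*m)]

    def inv2(m):  # explicit 2x2 inverse keeps the example dependency-free
        (a, b), (c, d) = m
        det = a * d - b * c
        return [[d / det, -b / det], [-c / det, a / det]]

    def gls_adjust(phi, cov_phi, response, rates, cov_rates):
        resid = [m - p for m, p in zip(rates, matvec(response, phi))]
        rt = transpose(response)
        s = matmul(matmul(response, cov_phi), rt)
        s = [[s[i][j] + cov_rates[i][j] for j in range(len(s))]
             for i in range(len(s))]
        gain = matmul(matmul(cov_phi, rt), inv2(s))
        return [p + g for p, g in zip(phi, matvec(gain, resid))]

    phi0 = [1.0, 1.0]                       # prior two-group spectrum
    cov_phi = [[0.04, 0.0], [0.0, 0.04]]    # 20% prior standard deviation
    response = [[1.0, 0.2], [0.3, 1.0]]     # foil responses per group
    rates = [1.38, 1.26]                    # measured reaction rates
    cov_rates = [[1e-4, 0.0], [0.0, 1e-4]]  # small measurement uncertainty
    phi1 = gls_adjust(phi0, cov_phi, response, rates, cov_rates)
    ```

    With precise measurements the adjusted spectrum lands close to the [1.2, 0.9] spectrum that generated the illustrative rates, which is the "simultaneous shape and level" adjustment the abstract refers to.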

  1. Decision-making in schizophrenia: A predictive-coding perspective.

    Science.gov (United States)

    Sterzer, Philipp; Voss, Martin; Schlagenhauf, Florian; Heinz, Andreas

    2018-05-31

    Dysfunctional decision-making has been implicated in the positive and negative symptoms of schizophrenia. Decision-making can be conceptualized within the framework of hierarchical predictive coding as the result of a Bayesian inference process that uses prior beliefs to infer states of the world. According to this idea, prior beliefs encoded at higher levels in the brain are fed back as predictive signals to lower levels. Whenever these predictions are violated by the incoming sensory data, a prediction error is generated and fed forward to update beliefs encoded at higher levels. Well-documented impairments in cognitive decision-making support the view that these neural inference mechanisms are altered in schizophrenia. There is also extensive evidence relating the symptoms of schizophrenia to aberrant signaling of prediction errors, especially in the domain of reward and value-based decision-making. Moreover, the idea of altered predictive coding is supported by evidence for impaired low-level sensory mechanisms and motor processes. We review behavioral and neural findings from these research areas and provide an integrated view suggesting that schizophrenia may be related to a pervasive alteration in predictive coding at multiple hierarchical levels, including cognitive and value-based decision-making processes as well as sensory and motor systems. We relate these findings to decision-making processes and propose that varying degrees of impairment in the implicated brain areas contribute to the variety of psychotic experiences. Copyright © 2018 Elsevier Inc. All rights reserved.
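    The inference scheme sketched above can be caricatured as a single update rule: a higher level holds a belief that predicts the incoming signal, the mismatch is fed forward as a prediction error, and the belief moves by a precision-weighted step. This is an illustrative toy, not a model from the paper.

    ```python
    def predictive_coding_step(mu, sensory_input, precision):
        """One belief update: feed the prediction error forward, weighted by precision."""
        prediction_error = sensory_input - mu       # feedforward signal
        return mu + precision * prediction_error    # update the higher-level belief

    mu = 0.0
    for _ in range(50):
        mu = predictive_coding_step(mu, sensory_input=1.0, precision=0.2)
    assert abs(mu - 1.0) < 1e-4   # the belief converges onto the input
    ```

    In this caricature, the aberrant prediction-error signaling discussed in the abstract corresponds to a mis-set precision weight, which makes the belief either ignore or chase the sensory data.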

  2. A Typical Verification Challenge for the GRID

    NARCIS (Netherlands)

    van de Pol, Jan Cornelis; Bal, H. E.; Brim, L.; Leucker, M.

    2008-01-01

    A typical verification challenge for the GRID community is presented. The concrete challenge is to implement a simple recursive algorithm for finding the strongly connected components in a graph. The graph is typically stored in the collective memory of a number of computers, so a distributed algorithm is required.

  3. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code in a manner close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials related to the artwork's production, I explore a materiality of code that goes beyond the technical.

  4. Edge-preserving Intra Depth Coding based on Context-coding and H.264/AVC

    DEFF Research Database (Denmark)

    Zamarin, Marco; Salmistraro, Matteo; Forchhammer, Søren

    2013-01-01

    Depth map coding plays a crucial role in 3D video communication systems based on the "Multi-view Video plus Depth" representation, as view synthesis performance is strongly affected by the accuracy of depth information, especially at edges in the depth map image. In this paper an efficient algorithm for edge-preserving intra depth compression based on H.264/AVC is presented. The proposed method introduces a new intra mode specifically targeted to depth macroblocks with arbitrarily shaped edges, which are typically not efficiently represented by the DCT. Edge macroblocks are partitioned into two regions, each approximated by a flat surface. Edge information is encoded by means of context-coding with an adaptive template. As a novel element, the proposed method exploits the edge structure of previously encoded edge macroblocks during the context-coding step to further increase compression efficiency.

  5. The WIMS family of codes

    International Nuclear Information System (INIS)

    Askew, J.

    1981-01-01

    WIMS-D4 is the latest version of the original form of the Winfrith Improved Multigroup Scheme, developed in 1963-5 for lattice calculations on all types of thermal reactor, whether moderated by graphite, heavy or light water. The code, in earlier versions, has been available from the NEA code centre for a number of years in both IBM and CDC dialects of FORTRAN. An important feature of this code was its rapid, accurate deterministic system for treating resonance capture in heavy nuclides, capable of dealing with both regular pin lattices and with cluster geometries typical of pressure tube and gas cooled reactors. WIMS-E is a compatible code scheme in which each calculation step is bounded by standard interfaces on disc or tape. The interfaces contain files of information in a standard form, restricted to numbers representing physically meaningful quantities such as cross-sections and fluxes. Restricting code intercommunication to this channel limits the possible propagation of errors. A module is capable of transforming WIMS-D output into the standard interface form, and hence the two schemes can be linked if required. LWR-WIMS was developed in 1970 as a method of calculating LWR reloads for the fuel fabricators BNFL/GUNF. It uses the WIMS-E library and a number of the same modules.

  6. Breathing (and Coding?) a Bit Easier: Changes to International Classification of Disease Coding for Pulmonary Hypertension.

    Science.gov (United States)

    Mathai, Stephen C; Mathew, Sherin

    2018-04-20

    The International Classification of Disease (ICD) coding system is broadly utilized by healthcare providers, hospitals, healthcare payers, and governments to track health trends and statistics at the global, national, and local levels and to provide a reimbursement framework for medical care based upon diagnosis and severity of illness. The current iteration of the ICD system, ICD-10, was implemented in 2015. While many changes to the prior ICD-9 system were included in ICD-10, the newer revision failed to adequately reflect advances in the clinical classification of certain diseases such as pulmonary hypertension (PH). Recently, a proposal to modify the ICD-10 codes for PH was considered and ultimately adopted for inclusion as updates to the ICD-10 coding system. While these revisions better reflect the current clinical classification of PH, further changes should be considered in the future to improve the accuracy and ease of coding for all forms of PH. Copyright © 2018. Published by Elsevier Inc.

  7. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculations within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions for Poiseuille flow, Couette flow and flow in a driven cavity.

  8. VHBORE: A code to compute borehole fluid conductivity profiles with pressure changes in the borehole

    International Nuclear Information System (INIS)

    Hale, F.V.; Tsang, C.F.

    1994-06-01

    This report describes the code VHBORE, which can be used to model fluid electric conductivity profiles in a borehole intersecting fractured rock under conditions of changing pressure in the wellbore. Pressure changes may be due to water level variations caused by pumping or to fluid density effects as formation fluid is drawn into the borehole. Previous reports describe the method of estimating the hydrologic behavior of fractured rock using a time series of electric conductivity logs and an earlier code, BORE, which generates electric conductivity logs under constant pressure and flow rate conditions. The earlier model, BORE, assumed a constant flow rate, q_i, for each inflow into the wellbore. In the present code the user supplies the location, constant pressure h_i, transmissivity T_i, and storativity S_i for each fracture, as well as the initial water level in the well, h_w(0). In addition, the input data contain changes in the water level at later times, Δh_w(t), typically caused by turning a pump on or off. The variable density calculation also requires input of the density of each of the inflow fluids, ρ_i, and the initial uniform density of the wellbore fluid, ρ_w(0). These parameters are used to compute the flow rate for each inflow point at each time step. The numerical method of Jacob and Lohman (1952) is used to compute the flow rate into or out of the fractures based on the changes in pressure in the wellbore. A dimensionless function relates flow rate to time in response to an imposed pressure change, and the principle of superposition is used to determine the net flow rate from a time series of pressure changes. Additional reading on the relationship between drawdown and flow rate can be found in Earlougher (1977), particularly his Section 4.6, "Constant-Pressure Flow Testing".
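    The superposition step described above can be sketched directly: each step change in wellbore head contributes a flow given by a constant-pressure unit-response function, and the contributions add linearly. The response function below is a simple decaying stand-in, not the Jacob-Lohman solution; the numbers are illustrative.

    ```python
    def unit_response(t):
        """Toy dimensionless flow response to a unit head step applied at t = 0."""
        return 0.0 if t <= 0.0 else 1.0 / (1.0 + t)

    def flow_rate(t, head_steps, transmissivity):
        """Superpose the responses to a series of (time, head change) steps."""
        return transmissivity * sum(
            dh * unit_response(t - tk) for tk, dh in head_steps)

    # pump on at t = 0 (2 m drawdown), pump off at t = 5 (head restored)
    steps = [(0.0, 2.0), (5.0, -2.0)]
    q_pumping = flow_rate(4.0, steps, transmissivity=1.0e-4)
    q_recovery = flow_rate(50.0, steps, transmissivity=1.0e-4)
    assert q_recovery < q_pumping   # inflow decays after the pump stops
    ```

    Swapping `unit_response` for the tabulated Jacob-Lohman function turns this toy into the computation the report describes.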

  9. State of art in FE-based fuel performance codes

    International Nuclear Information System (INIS)

    Kim, Hyo Chan; Yang, Yong Sik; Kim, Dae Ho; Bang, Je Geon; Kim, Sun Ki; Koo, Yang Hyun

    2013-01-01

    The finite element (FE) method, a reliable and proven solution in the mechanical field, has been introduced into fuel performance codes for multidimensional analysis. The present state of the art in FE-based fuel performance simulation predominantly involves 2-D axisymmetric models and 3-D volumetric models. FRAPCON and FRAPTRAN include 1.5-D and 2-D FE models to simulate PCMI and cladding ballooning. In 2-D simulation, the FALCON code, developed by EPRI, is a 2-D (R-Z and R-θ) fully thermo-mechanically coupled steady-state and transient FE-based fuel behavior code. The French codes TOUTATIS and ALCYONE are 3-D and typically used to investigate localized behavior. Since 2008, the Idaho National Laboratory (INL) has been developing a multidimensional (2-D and 3-D) nuclear fuel performance code called BISON. In this paper, the current state of FE-based fuel performance codes and their models is presented. Based on this investigation and on a comparison of the models in these codes, the requirements and direction of development for a new FE-based fuel performance code can be discussed. A new FE-based fuel performance code should include the typical pellet and cladding models that all the codes share. In particular, specialized pellet and cladding models, such as gaseous swelling and high burnup structure (HBS) models, should be developed to improve the accuracy of the code and to cover accident conditions. To reduce computation cost, an approximated gap model and an optimized contact model should also be developed. Nuclear fuel operates in an extreme environment that induces complex multiphysics phenomena, occurring over distances ranging from inter-atomic spacing to meters, and time scales ranging from microseconds to years. This multiphysics behavior is often tightly coupled, a well-known example being the thermomechanical behavior. Adding to this complexity, important aspects of fuel behavior are inherently multiscale.

  10. New quantum codes derived from a family of antiprimitive BCH codes

    Science.gov (United States)

    Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin

    The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q^2-ary BCH codes of length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q^2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q^2-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
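    The coset analysis the abstract relies on is easy to make concrete: the q^2-ary cyclotomic coset of x modulo n is {x, x·q^2, x·q^4, ...} mod n. The sketch below takes q = 4 and m = 2, giving the stated antiprimitive length n = q^(2m) + 1 = 257.

    ```python
    def cyclotomic_coset(x, base, n):
        """The base-ary cyclotomic coset of x modulo n: {x, x*base, x*base^2, ...}."""
        coset, cur = set(), x % n
        while cur not in coset:
            coset.add(cur)
            cur = (cur * base) % n
        return sorted(coset)

    q, m = 4, 2
    n = q ** (2 * m) + 1        # antiprimitive length n = q^(2m) + 1 = 257
    base = q ** 2               # cosets are taken with respect to q^2
    assert cyclotomic_coset(1, base, n) == [1, 16, 241, 256]

    # the nonzero residues modulo n are partitioned by the distinct cosets
    seen = set()
    for x in range(1, n):
        if x not in seen:
            seen.update(cyclotomic_coset(x, base, n))
    assert len(seen) == n - 1
    ```

    Checking which cosets fall into the defining set of a BCH code, as the paper does analytically, is what decides whether the Hermitian dual-containing condition holds.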

  11. SATURN-FS 1: A computer code for thermo-mechanical fuel rod analysis

    International Nuclear Information System (INIS)

    Ritzhaupt-Kleissl, H.J.; Heck, M.

    1993-09-01

    The SATURN-FS code was written as a general revision of the SATURN-2 code. SATURN-FS is capable of performing a complete thermomechanical analysis of a fuel pin, with all thermal, mechanical and irradiation-based effects. Analysis is possible for LWR and LMFBR fuel pins. The thermal analysis consists of calculations of the temperature profile in the fuel, the gap and the cladding. Pore migration, stoichiometry change of oxide fuel, gas release and diffusion effects are taken into account. The mechanical modeling allows non-steady-state analysis of elastic and inelastic fuel pin behaviour, such as creep, strain hardening, recovery and stress relaxation. Fuel cracking and healing are taken into account, as well as contact and friction between fuel and cladding. The modeling of irradiation effects comprises swelling and fission gas production, Pu migration and irradiation-induced creep. The code structure, the models and the requirements for running the code are described in the report. Recommendations for the application are given. Program runs for verification and typical examples of application are given in the last part of this report. (orig.)

  12. Monte Carlo based radial shield design of typical PWR reactor

    Energy Technology Data Exchange (ETDEWEB)

    Gul, Anas; Khan, Rustam; Qureshi, M. Ayub; Azeem, Muhammad Waqar; Raza, S.A. [Pakistan Institute of Engineering and Applied Sciences, Islamabad (Pakistan). Dept. of Nuclear Engineering; Stummer, Thomas [Technische Univ. Wien (Austria). Atominst.

    2016-11-15

    Neutron and gamma flux and dose equivalent rate distributions are analysed in the radial shields of a typical PWR-type reactor using the Monte Carlo radiation transport computer code MCNP5. The ENDF/B-VI continuous-energy cross-section library has been employed for the criticality and shielding analysis. The computed results are in good agreement with the reference results (the maximum difference is less than 56%). This implies that MCNP5 is a good tool for accurate prediction of neutron and gamma fluxes and dose rates in the radial shield around the core of PWR-type reactors.

  13. Array coding for large data memories

    Science.gov (United States)

    Tranter, W. H.

    1982-01-01

    It is pointed out that an array code is a convenient method for storing large quantities of data. In a typical application, the array consists of N data words with M symbols in each word. The probability of undetected error is considered, taking into account the three symbol error probabilities of interest, and a formula for determining the probability of undetected error is derived. Attention is given to the possibility of reading data into the array using a digital communication system with symbol error probability p. Two different schemes are found to be of interest. The analysis of array coding shows that the probability of undetected error is very small even for relatively large arrays.
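    The quantity discussed above can be sketched for a concrete scheme the abstract does not fully specify: assume each of the N words carries a single parity symbol, so an odd number of symbol errors in a word is detected while an even, nonzero number goes undetected. With independent symbol errors of probability p, the per-word undetected probability follows from the binomial distribution. The parity assumption and all numbers are ours, for illustration only.

    ```python
    from math import comb

    def p_word_undetected(m, p):
        """P(an even, nonzero number of symbol errors among m symbols)."""
        return sum(comb(m, k) * p**k * (1 - p)**(m - k)
                   for k in range(2, m + 1, 2))

    def p_array_undetected(n_words, m, p):
        """P(at least one word of the N x M array holds an undetected error)."""
        return 1.0 - (1.0 - p_word_undetected(m, p)) ** n_words

    pu = p_array_undetected(n_words=1000, m=32, p=1e-6)
    assert 0.0 < pu < 1e-5   # very small even for a relatively large array
    ```

    The dominant term is the two-error one, roughly N·C(M,2)·p², which is why the undetected-error probability stays tiny for realistic symbol error rates.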

  14. Opponent Coding of Sound Location (Azimuth) in Planum Temporale is Robust to Sound-Level Variations.

    Science.gov (United States)

    Derey, Kiki; Valente, Giancarlo; de Gelder, Beatrice; Formisano, Elia

    2016-01-01

    Coding of sound location in auditory cortex (AC) is only partially understood. Recent electrophysiological research suggests that neurons in mammalian auditory cortex are characterized by broad spatial tuning and a preference for the contralateral hemifield, that is, a nonuniform sampling of sound azimuth. Additionally, spatial selectivity decreases with increasing sound intensity. To accommodate these findings, it has been proposed that sound location is encoded by the integrated activity of neuronal populations with opposite hemifield tuning ("opponent channel model"). In this study, we investigated the validity of such a model in human AC with functional magnetic resonance imaging (fMRI) and a phase-encoding paradigm employing binaural stimuli recorded individually for each participant. In all subjects, we observed preferential fMRI responses to contralateral azimuth positions. Additionally, in most AC locations, spatial tuning was broad and not level invariant. We derived an opponent channel model of the fMRI responses by subtracting the activity of contralaterally tuned regions in bilateral planum temporale. This resulted in accurate decoding of sound azimuth location, which was unaffected by changes in sound level. Our data thus support opponent channel coding as a neural mechanism for representing acoustic azimuth in human AC. © The Author 2015. Published by Oxford University Press.
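    A toy version of the opponent-channel readout described above: two broadly tuned populations prefer opposite hemifields, and azimuth is read out from their normalized difference, which is unchanged when overall sound level scales both responses. The sigmoid tuning curves are illustrative, not fitted fMRI responses.

    ```python
    from math import tanh

    def channel_left(azimuth, level):
        return level * (1.0 + tanh(-azimuth / 45.0))   # prefers the left hemifield

    def channel_right(azimuth, level):
        return level * (1.0 + tanh(azimuth / 45.0))    # prefers the right hemifield

    def opponent_readout(azimuth, level):
        l = channel_left(azimuth, level)
        r = channel_right(azimuth, level)
        return (r - l) / (r + l)    # level-normalized opponent signal

    quiet = opponent_readout(30.0, level=1.0)
    loud = opponent_readout(30.0, level=10.0)
    assert abs(quiet - loud) < 1e-9                           # robust to sound level
    assert opponent_readout(-30.0, 1.0) < 0 < opponent_readout(30.0, 1.0)
    ```

    Because the level factor multiplies both channels, it cancels in the difference-over-sum readout, which is the intuition behind the level invariance reported in the study.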

  15. The ELOCA fuel modelling code: past, present and future

    International Nuclear Information System (INIS)

    Williams, A.F.

    2005-01-01

    ELOCA is the Industry Standard Toolset (IST) computer code for modelling CANDU fuel under the transient coolant conditions typical of an accident scenario. Since its inception in the early 1970s, the code has undergone continual development and improvement, and it now embodies much of the knowledge and experience of fuel behaviour gained by the Canadian nuclear industry over this period. ELOCA has proven to be a valuable tool for the safety analyst, and continues to be used extensively to support the licensing cases of CANDU reactors. This paper provides a brief and much simplified view of this development history, the code's current status, and plans for future development. (author)

  16. Design specifications for ASME B and PV Code Section III nuclear class 1 piping

    International Nuclear Information System (INIS)

    Richardson, J.A.

    1978-01-01

    The ASME B and PV Code Section III regulations for nuclear piping require that a comprehensive Design Specification be developed to ensure that the design and installation of the piping meet all code requirements. The intent of this paper is to describe the code requirements, discuss the implementation of these requirements in a typical Class 1 piping design specification, and report on recent piping failures in operating light water nuclear power plants in the US. (author)

  17. SolTrace: A Ray-Tracing Code for Complex Solar Optical Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wendelin, Tim [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lewandowski, Allan [Allan Lewandowski Solar Consulting LLC, Evergreen, CO (United States)

    2013-10-01

    SolTrace is an optical simulation tool designed to model optical systems used in concentrating solar power (CSP) applications. The code was first written in early 2003, but has seen significant modifications and changes since its inception, including conversion from a Pascal-based software development platform to C++. SolTrace is unique in that it can model virtually any optical system utilizing the sun as the source. It has been made available for free and as such is in use worldwide by industry, universities, and research laboratories. The fundamental design of the code is discussed, including enhancements and improvements over the earlier version. Comparisons are made with other optical modeling tools, both non-commercial and commercial in nature. Finally, modeled results are shown for some typical CSP systems and, in one case, compared to measured optical data.

  18. Comparison of the sand liquefaction estimated based on codes and practical earthquake damage phenomena

    Science.gov (United States)

    Fang, Yi; Huang, Yahong

    2017-12-01

    Sand liquefaction assessment based on design codes is an important part of geotechnical design. The result, however, sometimes fails to conform to the practical earthquake damage. Based on the damage from the Tangshan earthquake and the engineering geological conditions, three typical sites were chosen. The sand liquefaction probability was evaluated at the three sites using the method in the Code for Seismic Design of Buildings, and the results were compared with the sand liquefaction phenomena observed in the earthquake. The results show that the difference between code-based liquefaction assessment and the practical earthquake damage is mainly attributable to two aspects. The primary reasons include the disparity between the seismic fortification intensity and the actual seismic motion, changes of the groundwater level, the thickness of the overlying non-liquefied soil layer, local site effects and personal error. Meanwhile, although the judgment methods in the codes exhibit a certain universality, they are a further source of the difference, owing to the limitations of the basic data and qualitative anomalies in the judgment formulas.

  19. Gamma streaming experiments for validation of Monte Carlo code

    International Nuclear Information System (INIS)

    Thilagam, L.; Mohapatra, D.K.; Subbaiah, K.V.; Iliyas Lone, M.; Balasubramaniyan, V.

    2012-01-01

    Inhomogeneities in shield structures lead to a considerable amount of leakage radiation (streaming), increasing the radiation levels in accessible areas. Development work on experimental as well as computational methods for quantifying this streaming radiation is still continuing. The Monte Carlo based radiation transport code MCNP is the usual tool for modeling and analyzing such problems involving complex geometries. In order to validate this computational method for streaming analysis, it is necessary to carry out experimental measurements simulating inhomogeneities such as the ducts and voids present in bulk shields for typical cases. The data thus generated are analysed by simulating the experimental set-up with the MCNP code, and optimized input parameters for the code for solving similar radiation streaming problems are formulated. Comparison of experimental data obtained from radiation streaming experiments through ducts yields a set of thumb rules and analytical fits for total radiation dose rates within and outside the duct. The present study highlights the validation of the MCNP code through gamma streaming experiments carried out with ducts of various shapes and dimensions. Overall, the present study throws light on the suitability of the MCNP code for the analysis of gamma radiation streaming problems for all duct configurations considered. In the present study, only dose rate comparisons have been made; studies on the spectral comparison of streaming radiation are in progress. It is also planned to repeat the experiments with various shield materials. Since penetrations and ducts through bulk shields are unavoidable in an operating nuclear facility, the results of these radiation streaming simulations and experiments will be very useful in shield structure optimization without compromising radiation safety.

  20. SIMPLE-2: a computer code for calculation of steady-state thermal behavior of rod bundles with flow sweeping

    International Nuclear Information System (INIS)

    Jones, O.C. Jr.; Yao, S.; Henry, R.E.

    1976-01-01

    A computer code has been developed for making single-phase thermal-hydraulic calculations in rod bundle arrays with flow sweeping due to spiral wraps as the predominant crossflow mixing effect. This code, called SIMPLE-2, makes the assumption that the axial pressure gradient is identical for each subchannel over a given axial increment, and is unique in that no empirical coefficients must be specified for its use. Results from this code have compared favorably with experimental data for both uniform and highly nonuniform power distributions. Typical calculations for various bundle sizes applicable to the LMFBR program are included.
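    The equal-pressure-gradient assumption above fixes how the total flow splits among subchannels. With a quadratic friction law dp/dz = k_i·w_i², the split has the closed form w_i ∝ 1/√k_i; the sketch below is a toy illustration of that closure (the k_i values are illustrative, not SIMPLE-2's correlations).

    ```python
    from math import sqrt

    def split_flow(total_flow, k):
        """Split total_flow so dp/dz = k_i * w_i**2 is equal in every subchannel."""
        weights = [1.0 / sqrt(ki) for ki in k]
        s = sum(weights)
        return [total_flow * wi / s for wi in weights]

    k = [2.0, 4.0, 8.0]                      # friction coefficients per subchannel
    w = split_flow(12.0, k)
    grads = [ki * wi**2 for ki, wi in zip(k, w)]
    assert abs(sum(w) - 12.0) < 1e-9         # mass is conserved
    assert max(grads) - min(grads) < 1e-9    # equal axial pressure gradient
    ```

    In the code itself the same condition is enforced increment by increment along the bundle, with the spiral-wrap sweeping redistributing flow between the subchannels.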

  1. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer, when the reactor operates in a stationary state. To demonstrate that the model is also applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)

  2. On the Delay Characteristics for Point-to-Point links using Random Linear Network Coding with On-the-fly Coding Capabilities

    DEFF Research Database (Denmark)

    Tömösközi, Máté; Fitzek, Frank; Roetter, Daniel Enrique Lucani

    2014-01-01

    Video surveillance and similar real-time applications on wireless networks require increased reliability and high performance of the underlying transmission layer. Classical solutions, such as Reed-Solomon codes, increase the reliability, but typically have the negative side-effect of additional ...
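
    The on-the-fly coding idea behind random linear network coding can be sketched as follows, assuming a toy GF(2) code with an invented generation size and payload length: the receiver attempts decoding by Gaussian elimination after every arriving packet and succeeds as soon as it holds enough innovative combinations.

```python
import random

K = 3          # generation size: number of source packets (illustrative)
L = 8          # payload length in bits (illustrative)
rng = random.Random(7)

source = [[rng.randint(0, 1) for _ in range(L)] for _ in range(K)]

def encode():
    """Emit one coded packet: a random GF(2) combination of the sources."""
    coeffs = [rng.randint(0, 1) for _ in range(K)]
    payload = [0] * L
    for c, pkt in zip(coeffs, source):
        if c:
            payload = [a ^ b for a, b in zip(payload, pkt)]
    return coeffs, payload

def decode(packets):
    """Gauss-Jordan elimination over GF(2); returns the K source payloads,
    or None while the received coefficient vectors do not yet span GF(2)^K."""
    rows = [(list(c), list(p)) for c, p in packets]
    if len(rows) < K:
        return None
    for col in range(K):
        piv = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if piv is None:
            return None            # not yet K innovative packets
        rows[col], rows[piv] = rows[piv], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           [a ^ b for a, b in zip(rows[i][1], rows[col][1])])
    return [rows[i][1] for i in range(K)]

# On-the-fly decoding: attempt recovery after every arriving packet
packets, recovered = [], None
while recovered is None:
    packets.append(encode())
    recovered = decode(packets)
```
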

  3. Entanglement-assisted quantum quasicyclic low-density parity-check codes

    Science.gov (United States)

    Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor

    2009-03-01

    We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We show that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
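
    The 4-cycles mentioned above are easy to detect directly in a parity-check matrix: a length-4 cycle in the Tanner graph exists exactly when two check rows share 1s in two or more column positions. A minimal sketch with invented matrices:

```python
def has_four_cycle(H):
    """A parity-check matrix yields girth 4 iff two rows share 1s in two or
    more columns (two variable nodes connected to the same two checks)."""
    supports = [{j for j, v in enumerate(row) if v} for row in H]
    return any(len(supports[a] & supports[b]) >= 2
               for a in range(len(H)) for b in range(a + 1, len(H)))

# Rows 0 and 1 share columns 0 and 2: a length-4 cycle in the Tanner graph
H_bad  = [[1, 0, 1, 0],
          [1, 1, 1, 0]]
# Any two rows here overlap in at most one column, so girth > 4
H_good = [[1, 1, 0, 0],
          [0, 1, 1, 0],
          [1, 0, 0, 1]]
```
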

  4. PRESTO-II: a low-level waste environmental transport and risk assessment code

    International Nuclear Information System (INIS)

    Fields, D.E.; Emerson, C.J.; Chester, R.O.; Little, C.A.; Hiromoto, G.

    1986-04-01

    PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed for the evaluation of possible health effects from shallow-land waste-disposal trenches. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport, deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses, as well as doses to the intruder and farmer, may be calculated. Cumulative health effects in terms of cancer deaths are calculated for the population over the 1000-year period using a life-table approach. Data are included for three example sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York. A code listing and example input for each of the three sites are included in the appendices to this report.

  5. PRESTO-II: a low-level waste environmental transport and risk assessment code

    Energy Technology Data Exchange (ETDEWEB)

    Fields, D.E.; Emerson, C.J.; Chester, R.O.; Little, C.A.; Hiromoto, G.

    1986-04-01

    PRESTO-II (Prediction of Radiation Effects from Shallow Trench Operations) is a computer code designed for the evaluation of possible health effects from shallow-land waste-disposal trenches. The model is intended to serve as a non-site-specific screening model for assessing radionuclide transport, ensuing exposure, and health impacts to a static local population for a 1000-year period following the end of disposal operations. Human exposure scenarios considered include normal releases (including leaching and operational spillage), human intrusion, and limited site farming or reclamation. Pathways and processes of transit from the trench to an individual or population include ground-water transport, overland flow, erosion, surface water dilution, suspension, atmospheric transport, deposition, inhalation, external exposure, and ingestion of contaminated beef, milk, crops, and water. Both population doses and individual doses, as well as doses to the intruder and farmer, may be calculated. Cumulative health effects in terms of cancer deaths are calculated for the population over the 1000-year period using a life-table approach. Data are included for three example sites: Barnwell, South Carolina; Beatty, Nevada; and West Valley, New York. A code listing and example input for each of the three sites are included in the appendices to this report.

  6. Content layer progressive coding of digital maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2000-01-01

    A new lossless context-based method is presented for content-progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the WWW. Progressive encoding is achieved by separating the image into content layers based on predefined information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bi-level coding, context collapsing methods for multi-level images, and arithmetic coding. Relative pixel patterns are used to collapse contexts, and the number of contexts is analyzed. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 60-70% on our layered test images.
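
    Template-based context coding of the kind mentioned above forms a context index from previously coded neighbouring pixels; that index then selects the probability estimate fed to the arithmetic coder. A minimal sketch with a hypothetical 5-pixel causal template (the paper's actual templates and context-collapsing rules are not reproduced here):

```python
# A hypothetical 5-pixel causal template: offsets (dx, dy) relative to the
# current pixel, covering two pixels to the left and three in the row above.
TEMPLATE = [(-1, 0), (-2, 0), (-1, -1), (0, -1), (1, -1)]

def context(img, x, y):
    """Pack the template pixels into an integer context index; pixels
    outside the image are treated as 0 (background)."""
    idx = 0
    for dx, dy in TEMPLATE:
        px, py = x + dx, y + dy
        bit = img[py][px] if 0 <= px < len(img[0]) and 0 <= py < len(img) else 0
        idx = (idx << 1) | bit
    return idx

img = [[0, 1, 1],
       [1, 0, 1]]
# Neighbours of pixel (1, 1): left=1, left-2=out, NW=0, N=1, NE=1 -> 0b10011
ctx = context(img, 1, 1)
```

In an adaptive coder, each context index would select its own symbol counts, so frequently seen neighbourhoods get sharper probability estimates and shorter arithmetic-coded output.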

  7. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    Science.gov (United States)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon (RS) code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error-probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
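
    The inner convolutional stage can be illustrated with a textbook encoder; the sketch below is the classic rate-1/2, constraint-length-3 code with generators (7, 5) in octal, not the byte-oriented unit-memory code proposed in the paper.

```python
def conv_encode(bits):
    """Rate-1/2, constraint-length-3 convolutional encoder with the classic
    generators (7, 5) in octal, i.e. taps 111 and 101 on (input, s1, s2)."""
    s1 = s2 = 0                      # shift-register contents
    out = []
    for b in bits:
        out.append(b ^ s1 ^ s2)      # generator 111
        out.append(b ^ s2)           # generator 101
        s1, s2 = b, s1               # shift the register
    return out

coded = conv_encode([1, 0, 1, 1])
```

In a concatenated system, the outer RS decoder would then correct the residual (bursty) errors left by the inner decoder.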

  8. Synthesizer for decoding a coded short wave length irradiation

    International Nuclear Information System (INIS)

    1976-01-01

    The system uses a point irradiation source, typically an X-ray emitter, which illuminates a three-dimensional object consisting of a set of parallel planes, each of which acts as a source of coded information. The secondary source images are superimposed on a common flat screen. The decoding system comprises an input light-screen detector, a picture screen amplifier, a beam deflector, an output picture screen, an optical focussing unit including three lenses, a masking unit, an output light-screen detector, and a video signal reproduction unit of cathode ray tube form, or similar, to create a three-dimensional image of the object. (G.C.)

  9. A New Prime Code for Synchronous Optical Code Division Multiple-Access Networks

    Directory of Open Access Journals (Sweden)

    Huda Saleh Abbas

    2018-01-01

    Full Text Available A new spreading code based on a prime code for synchronous optical code-division multiple-access networks that can be used in monitoring applications has been proposed. The new code is referred to as the “extended grouped new modified prime code.” This new code has the ability to support more terminal devices than other prime codes. In addition, it pads subsequences with “0s”, leading to lower power consumption. The proposed code has an improved cross-correlation, resulting in enhanced BER performance. The code construction and parameters are provided. The operating performance, using incoherent on-off keying modulation and incoherent pulse position modulation systems, has been analyzed. The performance of the code was compared with that of other prime codes. The results demonstrate an improved performance, and a BER floor of 10−9 was achieved.
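
    For orientation, the classic prime code underlying such constructions (not the paper's extended grouped variant) is easy to generate: for a prime P it gives P codewords of length P squared and weight P whose in-phase cross-correlation is exactly 1.

```python
P = 5   # a prime: P codewords, each of length P*P and weight P

def prime_codeword(i):
    """Classic prime-sequence codeword: within block j (of P blocks of
    length P), the single '1' chip sits at position (i * j) mod P."""
    word = [0] * (P * P)
    for j in range(P):
        word[j * P + (i * j) % P] = 1
    return word

codes = [prime_codeword(i) for i in range(P)]

def dot(a, b):
    """In-phase (synchronous) correlation of two binary chip sequences."""
    return sum(x & y for x, y in zip(a, b))
```

Because (i1 - i2) * j can only vanish mod P at j = 0, any two distinct codewords collide in exactly one chip, which is what keeps multiple-access interference low in the synchronous case.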

  10. ANTHEM: a two-dimensional multicomponent self-consistent hydro-electron transport code for laser-matter interaction studies

    International Nuclear Information System (INIS)

    Mason, R.J.

    1982-01-01

    The ANTHEM code for the study of CO2-laser-generated transport is outlined. ANTHEM treats the background plasma as coupled Eulerian thermal and ion fluids, and the suprathermal electrons as either a third fluid or a body of evolving collisional PIC particles. The electrons scatter off the ions; the suprathermals drag against the thermal background. Self-consistent E- and B-fields are computed by the Implicit Moment Method. The current status of the code is described. Typical output from ANTHEM is discussed with special application to Augmented-Return-Current CO2-laser-driven targets.

  11. CORA. A thermal and hydraulic transient analysis computer code for a cluster of reactor core assemblies

    International Nuclear Information System (INIS)

    Johnson, H.G.

    1982-01-01

    The Fast Flux Test Facility (FFTF) is arranged for natural circulation emergency core cooling in the event of loss of all plant electrical power. This design feature was conclusively demonstrated in a series of four natural circulation transient tests during the plant startup testing program in 1980 and 1981. Predictions of core performance during these tests were made using the Westinghouse Hanford Company CORA computer program. The predictions, which compared well with measured plant data, were used in the extrapolation process to demonstrate the validity of the FFTF plant safety models and codes. This paper provides a brief description of the CORA code and includes typical comparisons of predictions with measured plant test data.

  12. Verification of the CONPAS (CONtainment Performance Analysis System) code package

    International Nuclear Information System (INIS)

    Kim, See Darl; Ahn, Kwang Il; Song, Yong Man; Choi, Young; Park, Soo Yong; Kim, Dong Ha; Jin, Young Ho.

    1997-09-01

    CONPAS is a computer code package that integrates the numerical, graphical, and results-oriented aspects of Level 2 probabilistic safety assessment (PSA) for nuclear power plants automatically under a PC window environment. For the integrated analysis of Level 2 PSA, the code utilizes four distinct but closely related modules: (1) ET Editor, (2) Computer, (3) Text Editor, and (4) Mechanistic Code Plotter. Compared with other existing computer codes for Level 2 PSA, the CONPAS code provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, sensitivity analysis, and data interpretation; reporting aspects including tabulation and graphics; and a user-friendly interface. The computational performance of CONPAS has been verified through a Level 2 PSA for a reference plant. The results of the CONPAS code were compared with an existing Level 2 PSA code (NUCAP+), and the comparison proves that CONPAS is appropriate for Level 2 PSA. (author). 9 refs., 8 tabs., 14 figs

  13. Comparison of different LMFBR primary containment codes applied to a Benchmark problem

    International Nuclear Information System (INIS)

    Benuzzi, A.

    1986-01-01

    The Cont Benchmark calculation exercise is a project sponsored by the Containment Loading and Response Group, a subgroup of the Safety Working Group of the Fast Reactor Coordinating Committee - CEC. A full-size typical pool-type LMFBR undergoing a postulated Core Disruptive Accident (CDA) has been defined by Belgonucleaire-Brussels under a study contract financed by the CEC and has been submitted to seven containment code calculations. The results of these calculations are presented and discussed in this paper.

  14. Determinants of Occupational Gender Segregation : Work Values and Gender (A)Typical Occupational Preferences of Adolescents

    OpenAIRE

    Busch, Anne

    2011-01-01

    The study examines micro-level determinants of the occupational gender segregation, analyzing work values and their effects on gender (a)typical occupational preferences of adolescents. Human capital theory assumes that women develop higher preferences for a good work/life balance in youth, whereas men develop higher extrinsic work values. Socialization theory predicts that female adolescents form higher preferences for social work content. This gender typicality in work values is expected to...

  15. SPIDERMAN: Fast code to simulate secondary transits and phase curves

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2017-11-01

    SPIDERMAN calculates exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. The code uses a geometrical algorithm to solve exactly for the area of sections of the disc of the planet that are occulted by the star. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The code is modular and allows comparison of the effects of multiple different brightness distributions for a dataset.
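
    The core geometric step, the area of the planetary disc occulted by the star, reduces in the simplest case to the circle-circle intersection ("lens") formula; the following is a sketch of that basic geometry only, not SPIDERMAN's sectioned-disc algorithm.

```python
import math

def overlap_area(r1, r2, d):
    """Exact area of intersection of two discs of radii r1 and r2 whose
    centres are separated by distance d (circle-circle 'lens' formula)."""
    if d >= r1 + r2:
        return 0.0                          # no overlap
    if d <= abs(r1 - r2):
        r = min(r1, r2)
        return math.pi * r * r              # smaller disc fully occulted
    # Two circular-segment areas minus the doubled triangle (Heron form)
    a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
    a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri
```
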

  16. Assessing sea-level rise impact on saltwater intrusion into the root zone of a geo-typical area in coastal east-central Florida.

    Science.gov (United States)

    Xiao, Han; Wang, Dingbao; Medeiros, Stephen C; Hagen, Scott C; Hall, Carlton R

    2018-07-15

    Saltwater intrusion (SWI) into the root zone in low-lying coastal areas can affect the survival and spatial distribution of various vegetation species by altering plant communities and the wildlife habitats they support. In this study, a baseline model was developed based on FEMWATER to simulate the monthly variation of root zone salinity of a geo-typical area located at the Cape Canaveral Barrier Island Complex (CCBIC) of coastal east-central Florida (USA) in 2010. Based on this calibrated baseline model, three diagnostic FEMWATER models were developed to predict the extent of SWI into the root zone by modifying the boundary values representing the rising sea level under various sea-level rise (SLR) scenarios projected for 2080. The simulation results indicated that the extent of SWI would be insignificant if SLR is low (23.4 cm) or intermediate (59.0 cm), but significant if SLR is high (119.5 cm): under the high scenario, waves on the raised sea can reach and pass over the dune crest, so the sand dunes fail to prevent the landward migration of seawater, and infiltration/diffusion of overtopping seawater in low-lying areas greatly increases the root zone salinity level. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  18. On Coding Non-Contiguous Letter Combinations

    Directory of Open Access Journals (Sweden)

    Frédéric Dandurand

    2011-06-01

    Full Text Available Starting from the hypothesis that printed word identification initially involves the parallel mapping of visual features onto location-specific letter identities, we analyze the type of information that would be involved in optimally mapping this location-specific orthographic code onto a location-invariant lexical code. We assume that some intermediate level of coding exists between individual letters and whole words, and that this involves the representation of letter combinations. We then investigate the nature of this intermediate level of coding given the constraints of optimality. This intermediate level of coding is expected to compress data while retaining as much information as possible about word identity. Information conveyed by letters is a function of how much they constrain word identity and how visible they are. Optimization of this coding is a combination of minimizing resources (using the most compact representations) and maximizing information. We show that in a large proportion of cases, non-contiguous letter sequences contain more information than contiguous sequences, while at the same time requiring less precise coding. Moreover, we found that the best predictor of human performance in orthographic priming experiments was within-word ranking of conditional probabilities, rather than average conditional probabilities. We conclude that from an optimality perspective, readers learn to select certain contiguous and non-contiguous letter combinations as information that provides the best cue to word identity.
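
    The contiguous and non-contiguous letter combinations discussed above can be enumerated with a simple open-bigram generator; the maximum skip distance here is an illustrative parameter, not a value from the paper.

```python
def open_bigrams(word, max_skip=1):
    """Ordered letter pairs separated by at most `max_skip` intervening
    letters: skip 0 gives the contiguous bigrams, skip >= 1 adds the
    non-contiguous combinations."""
    pairs = set()
    for i in range(len(word)):
        # j ranges over positions with 0..max_skip letters between i and j
        for j in range(i + 1, min(i + 2 + max_skip, len(word))):
            pairs.add(word[i] + word[j])
    return pairs

bigrams = open_bigrams("word")
```
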

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  20. Manchester Coding Option for SpaceWire: Providing Choices for System Level Design

    Science.gov (United States)

    Rakow, Glenn; Kisin, Alex

    2014-01-01

    This paper proposes an optional coding scheme for SpaceWire in lieu of the current Data Strobe scheme, for three reasons: first, to provide a straightforward method for electrical isolation of the interface; second, to provide the ability to reduce the mass and bend radius of the SpaceWire cable; and third, to provide a means for a common physical layer over which multiple spacecraft onboard data link protocols could operate over a wide range of data rates. The intent is to accomplish these goals without significant change to existing SpaceWire design investments. The ability to optionally use Manchester coding in place of the current Data Strobe coding provides DC-balanced signal transitions, unlike the SpaceWire Data Strobe coding, and therefore the ability to isolate the electrical interface without concern. Additionally, because the Manchester code has the clock and data encoded on the same signal, the number of wires in the existing SpaceWire cable could optionally be reduced by 50%. This reduction could be an important consideration for many users of SpaceWire, as indicated by the effort already underway by the SpaceWire working group to reduce the cable mass and bend radius by elimination of shields; reducing the signal count by half would provide even greater gains. It is proposed to restrict the optional Manchester coding to a fixed data rate of 10 Megabits per second (Mbps) in order to keep the necessary changes simple and still able to run in current radiation-tolerant Field Programmable Gate Arrays (FPGAs). Even with this constraint, 10 Mbps will meet many applications where SpaceWire is used, including command and control applications and many instrument applications which have moderate data rates. For most NASA flight implementations, SpaceWire designs are in rad-tolerant FPGAs, and the desire to preserve the heritage design investment is important for cost and risk considerations.
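
    Manchester coding itself is simple to state: each bit becomes a two-chip symbol with a guaranteed mid-bit transition, which is what yields the DC balance and embedded clock mentioned above. A minimal sketch using one common transition convention (SpaceWire-specific framing is not modeled):

```python
def manchester_encode(bits):
    """One common convention: 0 -> high-low (1, 0), 1 -> low-high (0, 1).
    Every bit produces a mid-bit transition, so the line carries equal
    numbers of high and low half-bits (DC balance) plus the clock."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

def manchester_decode(halves):
    """Invert the mapping by inspecting each half-bit pair."""
    return [1 if (a, b) == (0, 1) else 0
            for a, b in zip(halves[::2], halves[1::2])]

frame = [1, 0, 1, 1, 0]
line = manchester_encode(frame)
```

Because the encoded line spends exactly half of every bit period high, a transformer- or capacitor-coupled (galvanically isolated) interface passes it without baseline wander, which is the isolation argument made above.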

  1. The Coding Process and Its Challenges

    Directory of Open Access Journals (Sweden)

    Judith A. Holton, Ph.D.

    2010-02-01

    Full Text Available Coding is the core process in classic grounded theory methodology. It is through coding that the conceptual abstraction of data and its reintegration as theory takes place. There are two types of coding in a classic grounded theory study: substantive coding, which includes both open and selective coding procedures, and theoretical coding. In substantive coding, the researcher works with the data directly, fracturing and analysing it, initially through open coding for the emergence of a core category and related concepts and then subsequently through theoretical sampling and selective coding of data to theoretically saturate the core and related concepts. Theoretical saturation is achieved through constant comparison of incidents (indicators in the data to elicit the properties and dimensions of each category (code. This constant comparing of incidents continues until the process yields the interchangeability of indicators, meaning that no new properties or dimensions are emerging from continued coding and comparison. At this point, the concepts have achieved theoretical saturation and the theorist shifts attention to exploring the emergent fit of potential theoretical codes that enable the conceptual integration of the core and related concepts to produce hypotheses that account for relationships between the concepts thereby explaining the latent pattern of social behaviour that forms the basis of the emergent theory. The coding of data in grounded theory occurs in conjunction with analysis through a process of conceptual memoing, capturing the theorist’s ideation of the emerging theory. Memoing occurs initially at the substantive coding level and proceeds to higher levels of conceptual abstraction as coding proceeds to theoretical saturation and the theorist begins to explore conceptual reintegration through theoretical coding.

  2. System Level Evaluation of Innovative Coded MIMO-OFDM Systems for Broadcasting Digital TV

    Directory of Open Access Journals (Sweden)

    Y. Nasser

    2008-01-01

    Full Text Available Single-frequency networks (SFNs) for broadcasting digital TV are a topic of theoretical and practical interest for future broadcasting systems. Although progress has been made in their characterization, there are still considerable gaps in their deployment with MIMO techniques. The contribution of this paper is multifold. First, we investigate the possibility of applying a space-time (ST) encoder between the antennas of two sites in an SFN. Then, we introduce a 3D space-time-space block code for future terrestrial digital TV in SFN architecture. The proposed 3D code is based on a double-layer structure designed for intercell and intracell space-time-coded transmissions. Finally, we propose to adapt a technique called effective exponential signal-to-noise ratio (SNR) mapping (EESM) to predict the bit error rate (BER) at the output of the channel decoder in MIMO systems. The EESM technique as well as the simulation results are used to double-check the efficiency of our 3D code. This efficiency is obtained for equal and unequal received powers, whatever the location of the receiver, by adequately combining ST codes. The 3D code is thus a very promising candidate for SFN architecture with MIMO transmission.
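
    The EESM link-quality metric has a standard closed form: the per-subcarrier SNRs are compressed through an exponential mapping into a single effective SNR, with a calibration parameter beta fitted per modulation and coding scheme. A sketch with illustrative numbers (the beta value below is invented, not taken from the paper):

```python
import math

def eesm(snrs_db, beta):
    """Effective exponential SNR mapping: collapse per-subcarrier SNRs into
    one effective SNR that gives (approximately) the same BLER on an AWGN
    channel. beta is a per-MCS calibration parameter."""
    snrs = [10 ** (s / 10) for s in snrs_db]              # dB -> linear
    eff = -beta * math.log(sum(math.exp(-s / beta) for s in snrs) / len(snrs))
    return 10 * math.log10(eff)                           # linear -> dB

flat = eesm([6.0, 6.0, 6.0], beta=2.0)    # flat channel: no compression
faded = eesm([1.0, 6.0, 11.0], beta=2.0)  # frequency-selective channel
```

Note the asymmetry: the exponential weighting makes weak subcarriers dominate, so a frequency-selective channel maps to an effective SNR below its average, matching the intuition that deep fades drive the post-decoder BER.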

  3. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    A scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth the current level. The coding scheme paves the way for true electronic mail, in which handwritten, typed, or printed messages or diagrams are sent virtually instantaneously, between buildings or between continents. The scheme, called the Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of the resulting delivered messages is improved over messages transmitted by conventional coding. The coding scheme is compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme is automatically translated to word-processor form.
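
    USEEM's noiseless coding stage is adaptive; as a simpler stand-in for illustration only (not the coder actually used in USEEM), the sketch below entropy-codes recognized text with a static Huffman code.

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Minimal static Huffman coder: returns {symbol: bitstring}. The
    integer tie-breaker keeps tuple comparisons away from the list payloads."""
    freq = Counter(text)
    if len(freq) == 1:                       # degenerate single-symbol text
        return {next(iter(freq)): "0"}
    codes = {sym: "" for sym in freq}
    trees = [(n, i, [sym]) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(trees)
    tick = len(trees)
    while len(trees) > 1:
        # Merge the two least frequent subtrees, prefixing their codes
        n1, _, s1 = heapq.heappop(trees)
        n2, _, s2 = heapq.heappop(trees)
        for sym in s1:
            codes[sym] = "0" + codes[sym]
        for sym in s2:
            codes[sym] = "1" + codes[sym]
        tick += 1
        heapq.heappush(trees, (n1 + n2, tick, s1 + s2))
    return codes

def encode(text, codes):
    return "".join(codes[c] for c in text)

def decode(bits, codes):
    """Greedy prefix matching; valid because Huffman codes are prefix-free."""
    inv = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inv:
            out.append(inv[cur])
            cur = ""
    return "".join(out)

msg = "electronic mail"
codes = huffman_code(msg)
bits = encode(msg, codes)
```

The gain over sending pixels is the point: once characters are recognized, each symbol costs a few bits rather than a bitmap.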

  4. A parallel code named NEPTUNE for 3D fully electromagnetic and pic simulations

    International Nuclear Information System (INIS)

    Dong Ye; Yang Wenyuan; Chen Jun; Zhao Qiang; Xia Fang; Ma Yan; Xiao Li; Sun Huifang; Chen Hong; Zhou Haijing; Mao Zeyao; Dong Zhiwei

    2010-01-01

    A parallel code named NEPTUNE for 3D fully electromagnetic and particle-in-cell (PIC) simulations is introduced, which runs on Linux systems with hundreds to thousands of CPUs. NEPTUNE is suitable for simulating entire 3D HPM devices; many HPM devices have been simulated and designed using it. In the NEPTUNE code, the electromagnetic fields are updated using the finite-difference time-domain (FDTD) method for solving Maxwell's equations, and the particles are moved using the Buneman-Boris advance method for solving the relativistic Newton-Lorentz equation. Electromagnetic fields and particles are coupled using the linear-weighting interpolation PIC method, and the electric field components are corrected using the Boris method of solving the Poisson equation in order to ensure charge conservation. The NEPTUNE code can construct many complicated geometric structures, such as arbitrary axially-symmetric structures, plane transforming structures, slow-wave structures, coupling holes, foils, and so on. The boundary conditions used in the NEPTUNE code are introduced in brief, including the perfectly electric conductor boundary, external wave boundary, and particle boundary. Finally, some typical HPM devices are simulated and tested using the NEPTUNE code, including MILO, RBWO, VCO, and RKA. The simulation results exhibit correct and credible physical images, and the parallel efficiencies are also given. (authors)
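
    The Buneman-Boris particle advance mentioned above is a standard leapfrog step: a half electric kick, an exactly norm-preserving magnetic rotation, and a second half kick. A generic non-relativistic textbook version follows (a sketch, not NEPTUNE's implementation):

```python
import math

def boris_push(v, E, B, q_m, dt):
    """One non-relativistic Boris step for velocity v: half electric kick,
    magnetic rotation, half electric kick. q_m is the charge-to-mass ratio."""
    def add(a, b):   return [x + y for x, y in zip(a, b)]
    def scale(a, s): return [x * s for x in a]
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]
    v_minus = add(v, scale(E, 0.5 * q_m * dt))          # first half kick
    t = scale(B, 0.5 * q_m * dt)                        # rotation vector
    s = scale(t, 2.0 / (1.0 + sum(x * x for x in t)))
    v_prime = add(v_minus, cross(v_minus, t))
    v_plus = add(v_minus, cross(v_prime, s))            # exact-norm rotation
    return add(v_plus, scale(E, 0.5 * q_m * dt))        # second half kick

# Pure gyration in a uniform B field: the speed must be conserved exactly
v0 = [1.0, 0.0, 0.0]
v1 = boris_push(v0, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0], q_m=1.0, dt=0.1)
```

The rotation's exact energy conservation (with E = 0) is the reason the Boris scheme is the default pusher in PIC codes.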

  5. Microfluidic CODES: a scalable multiplexed electronic sensor for orthogonal detection of particles in microfluidic channels.

    Science.gov (United States)

    Liu, Ruxiu; Wang, Ningquan; Kamili, Farhan; Sarioglu, A Fatih

    2016-04-21

    Numerous biophysical and biochemical assays rely on spatial manipulation of particles/cells as they are processed on lab-on-a-chip devices. Analysis of spatially distributed particles on these devices typically requires microscopy negating the cost and size advantages of microfluidic assays. In this paper, we introduce a scalable electronic sensor technology, called microfluidic CODES, that utilizes resistive pulse sensing to orthogonally detect particles in multiple microfluidic channels from a single electrical output. Combining the techniques from telecommunications and microfluidics, we route three coplanar electrodes on a glass substrate to create multiple Coulter counters producing distinct orthogonal digital codes when they detect particles. We specifically design a digital code set using the mathematical principles of Code Division Multiple Access (CDMA) telecommunication networks and can decode signals from different microfluidic channels with >90% accuracy through computation even if these signals overlap. As a proof of principle, we use this technology to detect human ovarian cancer cells in four different microfluidic channels fabricated using soft lithography. Microfluidic CODES offers a simple, all-electronic interface that is well suited to create integrated, low-cost lab-on-a-chip devices for cell- or particle-based assays in resource-limited settings.
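
    The orthogonal-decoding idea can be sketched with generic Walsh-Hadamard codes in place of the paper's Coulter-sensor code set (an illustrative substitution): overlapping channel signals on one shared output are separated by correlation alone.

```python
def walsh(n):
    """Walsh-Hadamard codes of length n (n a power of two), chips in +-1."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H]
             + [row + [-c for c in row] for row in H])
    return H

codes = walsh(4)

# Each channel modulates its amplitude onto its own orthogonal code; the
# single shared output sees only the chip-wise sum of all channels.
amplitudes = [2.0, 0.0, 1.0, 3.0]
mixed = [sum(a * code[i] for a, code in zip(amplitudes, codes))
         for i in range(4)]

# Orthogonality lets each channel be recovered by one correlation,
# even though the transmitted signals overlap completely.
recovered = [sum(m * c for m, c in zip(mixed, code)) / 4 for code in codes]
```

The microfluidic setting replaces synchronized chips with sensor waveforms and adds timing offsets, which is why the paper needs computational decoding to reach its reported accuracy; the correlation principle is the same.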

  6. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    Science.gov (United States)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  7. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a database of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously while others solve the equations separately. The coupled codes require a large computer capacity and thus as yet have limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user that is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high-quality database is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  8. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The design optimizes the tradeoff between decoding robustness and the number of available code levels in terms of printing and measurement effort, and of robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
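    The benefit of distributing code levels in CIE Lab rather than CMYK can be illustrated with a toy tone-response curve; the curve and numbers below are assumptions for illustration, not a real printer profile:

```python
# Why spacing patch-code levels in a perceptual space (here, L* lightness)
# beats spacing them in ink space: equal steps in ink fraction k bunch
# together in L*, whereas choosing k values that equalize the L* spacing
# maximizes the smallest gap the decoder must discriminate.
# The tone-response curve below is illustrative, not a real printer model.

def lightness(k):
    """Toy printer response: lightness L* as a function of ink fraction k."""
    return 95.0 - 75.0 * (k ** 0.6)

def min_gap(levels):
    """Smallest L* separation between adjacent code levels."""
    ls = sorted(lightness(k) for k in levels)
    return min(b - a for a, b in zip(ls, ls[1:]))

ink_spaced = [i / 3 for i in range(4)]           # equal steps in ink space
# Equal steps in L*: invert the response for equally spaced target L* values.
targets = [95.0 - 75.0 * i / 3 for i in range(4)]
lab_spaced = [((95.0 - L) / 75.0) ** (1 / 0.6) for L in targets]

print(min_gap(ink_spaced), min_gap(lab_spaced))  # the L*-spaced gap is larger
```

    With a real device, the profile (lookup table from CMYK to Lab) plays the role of `lightness` here, which is why existing printer profiles let the design skip most print-and-measure iterations.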

  9. Development of parallel Fokker-Planck code ALLAp

    International Nuclear Information System (INIS)

    Batishcheva, A.A.; Sigmar, D.J.; Koniges, A.E.

    1996-01-01

    We report on our ongoing development of the 3D Fokker-Planck code ALLA for a highly collisional scrape-off-layer (SOL) plasma. A SOL with strong gradients of density and temperature in the spatial dimension is modeled. Our method is based on a 3-D adaptive grid (in space, magnitude of the velocity, and cosine of the pitch angle) and a second-order conservative scheme. Note that the grid size is typically 100 x 257 x 65 nodes. It was shown in our previous work that only these capabilities make it possible to benchmark a 3D code against a spatially dependent self-similar solution of a kinetic equation with the Landau collision term. In the present work we show results of a more precise benchmarking against the exact solutions of the kinetic equation using a new parallel code ALLAp with an improved method of parallelization and a modified boundary condition at the plasma edge. We also report first results from the code parallelization using the Message Passing Interface on the massively parallel Cray T3D platform. We evaluate the ALLAp code performance versus the number of T3D processors used and compare its efficiency against a work/data-sharing parallelization scheme and a workstation version.

  10. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or at the lowest possible distortion given a specified bit rate....... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources which are sources that can...... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...

  11. Politicas de uniformes y codigos de vestuario (Uniforms and Dress-Code Policies). ERIC Digest.

    Science.gov (United States)

    Lumsden, Linda

    This digest in Spanish examines schools' dress-code policies and discusses the legal considerations and research findings about the effects of such changes. Most revisions to dress codes involve the use of uniforms, typically as a way to curb school violence and create a positive learning environment. A recent survey of secondary school principals…

  12. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  13. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    International Nuclear Information System (INIS)

    Wan, C.; Cao, L.; Wu, H.; Zu, T.; Shen, W.

    2015-01-01

    Sensitivity and uncertainty (S&U) analysis is an essential part of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analysis for the responses of neutronics calculations has been implemented in a newly developed S&U analysis code named UNICORN. The UNICORN code can account for the implicit effects of multigroup cross sections on the responses. The code has been applied to a typical pin-cell case, and its correctness is demonstrated by comparing the results with those of the TSUNAMI-1D code. (author)
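    S&U codes of this kind typically propagate cross-section covariances to a response uncertainty with the first-order "sandwich rule", (u_R/R)^2 = S C S^T, where S is the vector of relative sensitivities (dR/R)/(dσ/σ) and C the relative covariance matrix. A minimal sketch with illustrative numbers, not the UNICORN implementation:

```python
# Sandwich rule for first-order uncertainty propagation: the relative
# variance of a response R is S C S^T, with S the relative sensitivity
# vector and C the relative covariance matrix of the input parameters.
# The numbers below are illustrative, not reactor data.

def response_rel_variance(S, C):
    n = len(S)
    return sum(S[i] * C[i][j] * S[j] for i in range(n) for j in range(n))

S = [0.8, -0.3]                 # relative sensitivities to two parameters
C = [[0.0004, 0.0001],          # relative covariance matrix (symmetric)
     [0.0001, 0.0009]]

var = response_rel_variance(S, C)
print(var, var ** 0.5)          # relative variance and standard uncertainty
```

    The off-diagonal covariance term is what distinguishes a total analysis from summing independent contributions; with correlated inputs it can either inflate or, as here, partially cancel the response uncertainty.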

  14. Code development of total sensitivity and uncertainty analysis for reactor physics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Wan, C.; Cao, L.; Wu, H.; Zu, T., E-mail: chenghuiwan@stu.xjtu.edu.cn, E-mail: caolz@mail.xjtu.edu.cn, E-mail: hongchun@mail.xjtu.edu.cn, E-mail: tiejun@mail.xjtu.edu.cn [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Shen, W., E-mail: Wei.Shen@cnsc-ccsn.gc.ca [Xi' an Jiaotong Univ., School of Nuclear Science and Technology, Xi' an (China); Canadian Nuclear Safety Commission, Ottawa, ON (Canada)

    2015-07-01

    Sensitivity and uncertainty (S&U) analysis is an essential part of risk and policy analysis for reactor systems. In this study, total sensitivity and corresponding uncertainty analysis for the responses of neutronics calculations has been implemented in a newly developed S&U analysis code named UNICORN. The UNICORN code can account for the implicit effects of multigroup cross sections on the responses. The code has been applied to a typical pin-cell case, and its correctness is demonstrated by comparing the results with those of the TSUNAMI-1D code. (author)

  15. POWER LEVEL EFFECT IN A PWR ROD EJECTION ACCIDENT

    International Nuclear Information System (INIS)

    Diamond, D.J.; Bromley, B.P.; Aronson, A.L.

    2002-01-01

    The purpose of this study is to determine the effect of the initial power level during a rod ejection accident (REA) on the ejected rod worth and the resulting energy deposition in the fuel. The model used is for the hot zero power (HZP) conditions at the end of a typical fuel cycle for the Three Mile Island Unit 1 pressurized water reactor. PARCS, a transient, three-dimensional, two-group neutron nodal diffusion code coupled with its own thermal-hydraulics model, is used to perform both steady-state and transient simulations. The worth of an ejected control rod is affected by both the power level and the positions of the control banks. As the power level is increased, the worth of a single central control rod tends to drop due to thermal-hydraulic feedback and control bank removal, both of which flatten the radial neutron flux and power distributions. Although the peak fuel pellet enthalpy rise during an REA will be greater for a given ejected rod worth at elevated initial power levels, it is more likely that the HZP condition will cause a greater net energy deposition because an ejected rod will have the highest worth at HZP. Thus, the HZP condition can be considered the most conservative in a safety evaluation.

  16. Enhanced air dispersion modelling at a typical Chinese nuclear power plant site: Coupling RIMPUFF with two advanced diagnostic wind models.

    Science.gov (United States)

    Liu, Yun; Li, Hong; Sun, Sida; Fang, Sheng

    2017-09-01

    An enhanced air dispersion modelling scheme is proposed to cope with the building layout and complex terrain of a typical Chinese nuclear power plant (NPP) site. In this modelling, the California Meteorological Model (CALMET) and the Stationary Wind Fit and Turbulence (SWIFT) are coupled with the Risø Mesoscale PUFF model (RIMPUFF) for refined wind field calculation. The near-field diffusion coefficient correction scheme of the Atmospheric Relative Concentrations in the Building Wakes Computer Code (ARCON96) is adopted to characterize dispersion in building arrays. The proposed method is evaluated by a wind tunnel experiment that replicates the typical Chinese NPP site. For both wind speed/direction and air concentration, the enhanced modelling predictions agree well with the observations. The fraction of the predictions within a factor of 2 and 5 of observations exceeds 55% and 82% respectively in the building area and the complex terrain area. This demonstrates the feasibility of the new enhanced modelling for typical Chinese NPP sites. Copyright © 2017 Elsevier Ltd. All rights reserved.
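    Puff models such as RIMPUFF are built on the Gaussian puff solution of the dispersion problem. A simplified single-puff, ground-reflected form is sketched below; the operational code adds wind advection, puff splitting, near-field coefficient corrections, and terrain handling:

```python
# Gaussian puff concentration with total ground reflection (image source):
# a single puff of mass q centred at (xc, yc, zc) with dispersion sigmas
# (sx, sy, sz). Simplified relative to an operational puff model, which
# advects and splits puffs and corrects the sigmas near buildings.
import math

def puff_concentration(q, x, y, z, xc, yc, zc, sx, sy, sz):
    norm = q / ((2 * math.pi) ** 1.5 * sx * sy * sz)
    horiz = math.exp(-0.5 * (((x - xc) / sx) ** 2 + ((y - yc) / sy) ** 2))
    # Vertical term plus an image source at -zc models ground reflection.
    vert = (math.exp(-0.5 * ((z - zc) / sz) ** 2)
            + math.exp(-0.5 * ((z + zc) / sz) ** 2))
    return norm * horiz * vert

# Ground-level concentration directly below a puff released at 50 m height,
# with illustrative sigmas of 100 m (horizontal) and 20 m (vertical).
c = puff_concentration(1.0, 0, 0, 0, 0, 0, 50.0, 100.0, 100.0, 20.0)
print(f"{c:.3e}")
```

    A puff model evaluates this expression for every puff at every receptor and sums the contributions, which is why refined wind fields (CALMET, SWIFT) matter: they determine where each puff centre moves and how the sigmas grow.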

  17. Development of standards, codes of practice and guidelines at the national level

    International Nuclear Information System (INIS)

    Swindon, T.N.

    1989-01-01

    Standards, codes of practice and guidelines are defined and their different roles in radiation protection specified. The work of the major bodies that develop such documents in Australia - the National Health and Medical Research Council and the Standards Association of Australia - is discussed. The codes of practice prepared under the Environment Protection (Nuclear Codes) Act, 1978, an act of the Australian Federal Parliament, are described and the guidelines associated with them outlined. 5 refs

  18. On the Performance of the Cache Coding Protocol

    Directory of Open Access Journals (Sweden)

    Behnaz Maboudi

    2018-03-01

    Full Text Available Network coding approaches typically consider an unrestricted recoding of coded packets in the relay nodes to increase performance. However, this can expose the system to pollution attacks that cannot be detected during transmission, until the receivers attempt to recover the data. To prevent these attacks while allowing for the benefits of coding in mesh networks, the cache coding protocol was proposed. This protocol only allows recoding at the relays when the relay has received enough coded packets to decode an entire generation of packets. At that point, the relay node recodes and signs the recoded packets with its own private key, allowing the system to detect and minimize the effect of pollution attacks and making the relays accountable for changes on the data. This paper analyzes the delay performance of cache coding to understand the security-performance trade-off of this scheme. We introduce an analytical model for the case of two relays in an erasure channel relying on an absorbing Markov chain and an approximate model to estimate the performance in terms of the number of transmissions before successfully decoding at the receiver. We confirm our analysis using simulation results. We show that cache coding can overcome the security issues of unrestricted recoding with only a moderate decrease in system performance.
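    The expected number of transmissions before successful decoding in such an absorbing Markov chain follows from the standard fundamental-matrix relation (I - Q)t = 1 over the transient states, where Q is the transient-to-transient transition block. A minimal sketch with a toy single-link erasure chain, not the paper's two-relay model:

```python
# Expected steps to absorption in an absorbing Markov chain: solve
# (I - Q) t = 1, where Q holds the transition probabilities among the
# transient states. Solved here by Gaussian elimination for small systems.

def expected_steps(Q):
    n = len(Q)
    # Augmented system [(I - Q) | 1].
    A = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(n)] + [1.0]
         for i in range(n)]
    for col in range(n):                        # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    t = [0.0] * n
    for r in range(n - 1, -1, -1):              # back substitution
        t[r] = (A[r][n] - sum(A[r][c] * t[c] for c in range(r + 1, n))) / A[r][r]
    return t

e = 0.2                     # erasure probability of a toy single link
Q = [[e]]                   # one transient state: packet not yet delivered
print(expected_steps(Q)[0]) # 1 / (1 - e) = 1.25 transmissions on average
```

    The paper's two-relay analysis builds a larger Q whose states track how many coded packets each relay and the receiver hold; the same linear solve then yields the delay cost of restricting recoding to fully decoded generations.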

  19. Development of a dose assessment computer code for the NPP severe accident at intermediate level - Korean case

    International Nuclear Information System (INIS)

    Cheong, J.H.; Lee, K.J.; Cho, H.Y.; Lim, J.H.

    1993-01-01

    A real-time dose assessment computer code named RADCON (RADiological CONsequence analysis) has been developed. An approximation method describing the distribution of radionuclides in a puff was proposed and implemented in the code. This method is expected to reduce the time required to calculate the cloud shine (external dose from radioactive plumes). RADCON can simulate an NPP emergency situation by considering complex topography and continuous washout phenomena and provide a function of effective emergency planning. To verify the code results, RADCON has been compared with RASCAL, which was developed for the U.S. NRC by ORNL, for eight hypothetical accident scenarios. Sensitivity analysis was also performed for the important input parameters. (2 tabs., 3 figs.)

  20. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem, which was developed by OECD/NEA to verify the performance of coupled kinetics and system transient codes.

  1. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders—attention to detail and systemizing—may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings. PMID:27242491

  2. Systemizers are better code-breakers:Self-reported systemizing predicts code-breaking performance in expert hackers and naïve participants

    Directory of Open Access Journals (Sweden)

    India Harvey

    2016-05-01

    Full Text Available Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders - attention to detail and systemizing - may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e. crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001; Baron-Cohen et al., 2003). They were also tested with behavioural tasks involving code-breaking and a control task involving security x-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the x-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  3. Systemizers Are Better Code-Breakers: Self-Reported Systemizing Predicts Code-Breaking Performance in Expert Hackers and Naïve Participants.

    Science.gov (United States)

    Harvey, India; Bolgan, Samuela; Mosca, Daniel; McLean, Colin; Rusconi, Elena

    2016-01-01

    Studies on hacking have typically focused on motivational aspects and general personality traits of the individuals who engage in hacking; little systematic research has been conducted on predispositions that may be associated not only with the choice to pursue a hacking career but also with performance in either naïve or expert populations. Here, we test the hypotheses that two traits that are typically enhanced in autism spectrum disorders-attention to detail and systemizing-may be positively related to both the choice of pursuing a career in information security and skilled performance in a prototypical hacking task (i.e., crypto-analysis or code-breaking). A group of naïve participants and of ethical hackers completed the Autism Spectrum Quotient, including an attention to detail scale, and the Systemizing Quotient (Baron-Cohen et al., 2001, 2003). They were also tested with behavioral tasks involving code-breaking and a control task involving security X-ray image interpretation. Hackers reported significantly higher systemizing and attention to detail than non-hackers. We found a positive relation between self-reported systemizing (but not attention to detail) and code-breaking skills in both hackers and non-hackers, whereas attention to detail (but not systemizing) was related with performance in the X-ray screening task in both groups, as previously reported with naïve participants (Rusconi et al., 2015). We discuss the theoretical and translational implications of our findings.

  4. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.

  5. On the Coded Packet Relay Network in the Presence of Neighbors:Benefits of Speaking in a Crowded Room

    OpenAIRE

    Khamfroush, Hana; Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Hundebøll, Martin; Fitzek, Frank

    2014-01-01

    This paper studies the problem of optimal use of a relay for reducing the transmission time of data packets from a source to a destination using network coding. More importantly, we address an effect that is typically overlooked in previous studies: the presence of active transmitting nodes in the neighborhood of such devices, which is typical in wireless mesh networks. We show that in systems with a fair medium access control mechanism (MAC), the use of a relay in a crowded medium brings for...

  6. The influence of gender and gender typicality on autobiographical memory across event types and age groups.

    Science.gov (United States)

    Grysman, Azriel; Fivush, Robyn; Merrill, Natalie A; Graci, Matthew

    2016-08-01

    Gender differences in autobiographical memory emerge in some data collection paradigms and not others. The present study included an extensive analysis of gender differences in autobiographical narratives. Data were collected from 196 participants, evenly split by gender and by age group (emerging adults, ages 18-29, and young adults, ages 30-40). Each participant reported four narratives, including an event that had occurred in the last 2 years, a high point, a low point, and a self-defining memory. Additionally, all participants completed self-report measures of masculine and feminine gender typicality. The narratives were coded along six dimensions-namely coherence, connectedness, agency, affect, factual elaboration, and interpretive elaboration. The results indicated that females expressed more affect, connection, and factual elaboration than males across all narratives, and that feminine typicality predicted increased connectedness in narratives. Masculine typicality predicted higher agency, lower connectedness, and lower affect, but only for some narratives and not others. These findings support an approach that views autobiographical reminiscing as a feminine-typed activity and that identifies gender differences as being linked to categorical gender, but also to one's feminine gender typicality, whereas the influences of masculine gender typicality were more context-dependent. We suggest that implicit gendered socialization and more explicit gender typicality each contribute to gendered autobiographies.

  7. Fourier spectral of PalmCode as descriptor for palmprint recognition

    NARCIS (Netherlands)

    Ruan, Qiuqi; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Mu, Meiru

    Study on automatic person recognition by palmprint is currently a hot topic. In this paper, we propose a novel palmprint recognition method by transforming the typical palmprint phase code feature into its Fourier frequency domain. The resulting real-valued Fourier spectral features are further

  8. 41 CFR 102-36.35 - What is the typical process for disposing of excess personal property?

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What is the typical... agency property or by obtaining excess property from other federal agencies in lieu of new procurements... eligible non-federal activities. Title 40 of the United States Code requires that surplus personal property...

  9. Ultrasound strain imaging using Barker code

    Science.gov (United States)

    Peng, Hui; Tie, Juhong; Guo, Dequan

    2017-01-01

    Ultrasound strain imaging is showing promise as a new way of imaging soft tissue elasticity in order to help clinicians detect lesions or cancers in tissues. In this paper, Barker code is applied to strain imaging to improve its quality. Barker code, as a coded excitation signal, can be used to improve the echo signal-to-noise ratio (eSNR) in an ultrasound imaging system. For the Barker code of length 13, the sidelobe level of the matched filter output is -22 dB, which is unacceptable for ultrasound strain imaging because a high sidelobe level will cause high decorrelation noise. Instead of using the conventional matched filter, we use the Wiener filter to decode the Barker-coded echo signal to suppress the range sidelobes. We also compare the performance of the Barker code and the conventional short pulse in simulations. The simulation results demonstrate that the performance of the Wiener filter is much better than that of the matched filter, and that the Barker code achieves a higher elastographic signal-to-noise ratio (SNRe) than the short pulse in low-eSNR or great-depth conditions, due to the increased eSNR it provides.
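    The -22 dB figure quoted above is simply the peak-sidelobe level of the length-13 Barker code's aperiodic autocorrelation, i.e. the matched-filter output: every sidelobe has magnitude at most 1 against a main lobe of 13. A quick check:

```python
# Peak-sidelobe level of the length-13 Barker code under matched filtering.
# The matched-filter output of a code against itself is its autocorrelation;
# for Barker-13 the sidelobes are 0 or +/-1 while the main lobe is 13,
# giving 20*log10(1/13) ~= -22.3 dB.
import math

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorr(seq):
    """Aperiodic autocorrelation of a +/-1 sequence, for lags 0..n-1."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

acf = autocorr(barker13)
peak = acf[0]                             # main lobe: 13
sidelobe = max(abs(v) for v in acf[1:])   # largest sidelobe magnitude: 1
psl_db = 20 * math.log10(sidelobe / peak)
print(peak, sidelobe, round(psl_db, 1))   # 13 1 -22.3
```

    This is exactly why the abstract turns to a Wiener (mismatched) filter: the code itself cannot do better than -22 dB under matched filtering, so the sidelobe suppression has to come from the decoding filter instead.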

  10. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
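    A byte-level n-gram profile of the kind described can be sketched in a few lines. The simplified-profile similarity below is a plain top-L intersection chosen for illustration; the authors' exact measure may differ in detail:

```python
# Byte-level n-gram profiling for source-code authorship attribution:
# build a profile of the L most frequent byte n-grams in a code sample,
# then score candidate samples by profile overlap. Because it works on
# raw bytes, the approach is independent of the programming language.
from collections import Counter

def profile(code: bytes, n: int = 3, top: int = 500):
    """Set of the `top` most frequent byte n-grams in `code`."""
    grams = Counter(code[i:i + n] for i in range(len(code) - n + 1))
    return {g for g, _ in grams.most_common(top)}

def similarity(p1, p2):
    """Fraction of shared n-grams between two simplified profiles."""
    return len(p1 & p2) / max(1, min(len(p1), len(p2)))

# Hypothetical snippets: two in one C-like style, one in a different style.
a1 = b"for (int i = 0; i < n; ++i) { sum += a[i]; }"
a2 = b"for (int j = 0; j < m; ++j) { total += b[j]; }"
b1 = b"while x<n:\n    s=s+arr[x]\n    x=x+1\n"

p_a1, p_a2, p_b1 = profile(a1), profile(a2), profile(b1)
print(similarity(p_a1, p_a2) > similarity(p_a1, p_b1))  # same style scores higher
```

    In an actual attribution setting the profiles are built from all undisputed samples per author, and a disputed program is assigned to the author whose profile it overlaps most.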

  11. Kinetic parameters evaluation of PWRs using static cell and core calculation codes

    International Nuclear Information System (INIS)

    Jahanbin, Ali; Malmir, Hessam

    2012-01-01

    Highlights: ► In this study, we have calculated the effective delayed neutron fraction and prompt neutron lifetime in PWRs. ► New software has been developed to link the WIMS, BORGES and CITATION codes in the Visual C programming language. ► This software is used for calculation of the kinetic parameters in a typical VVER-1000 and the NOK Beznau reactor. ► The ratios (β_eff)_i/(β_eff)_core, which are important input data for reactivity accident analysis, are also calculated. - Abstract: In this paper, evaluation of the kinetic parameters (effective delayed neutron fraction and prompt neutron lifetime) in PWRs, using static cell and core calculation codes, is reported. New software has been developed to link the WIMS, BORGES and CITATION codes in the Visual C programming language. Using the WIMS cell calculation code, multigroup microscopic cross-sections and number densities of different materials can be generated in a binary file. By the use of the BORGES code, these binary-form cross-sections and number densities are converted to a format readable by the CITATION core calculation code, from which the kinetic parameters can finally be obtained. This software is used for calculation of the kinetic parameters in a typical VVER-1000 and the NOK Beznau reactor. The ratios (β_eff)_i/(β_eff)_core, which are important input data for reactivity accident analysis, are also calculated. Benchmarking of the results against the final safety analysis reports (FSAR) of the aforementioned reactors shows very good agreement with these published documents.

  12. Sequential egocentric navigation and reliance on landmarks in Williams syndrome and typical development

    Directory of Open Access Journals (Sweden)

    Hannah Broadbent

    2015-02-01

    Full Text Available Visuospatial difficulties in Williams syndrome (WS) are well documented. Recently, research has shown that spatial difficulties in WS extend to large-scale space, particularly in coding space using an allocentric frame of reference. Typically developing (TD) children and adults predominantly rely on the use of a sequential egocentric strategy to navigate a large-scale route (retracing a sequence of left-right body turns). The aim of this study was to examine whether individuals with WS are able to employ a sequential egocentric strategy to guide learning and the retracing of a route. Forty-eight TD children, aged 5, 7 and 9 years, and 18 participants with WS were examined on their ability to learn and retrace routes in two 6-turn virtual environment mazes (with and without landmarks). The ability to successfully retrace a route following the removal of landmarks (use of sequential egocentric coding) was also examined. Although in line with TD 5-year-olds when learning a route with landmarks, individuals with WS showed significantly greater detriment when these landmarks were removed, relative to all TD groups. Moreover, the WS group made significantly more errors than all TD groups when learning a route that never contained landmarks. On a perceptual view-matching task, results revealed a high level of performance across groups, indicative of an ability to use this visual information to potentially aid navigation. These findings suggest that individuals with WS rely on landmarks to a greater extent than TD children, both for learning a route and for retracing a recently learned route. TD children, but not individuals with WS, were able to fall back on the use of a sequential egocentric strategy to navigate when landmarks were not present. Only TD children therefore coded sequential route information simultaneously with landmark information. The results are discussed in relation to known atypical cortical development and perceptual-matching abilities.

  13. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  14. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability based code calibration. First basic principles of structural reliability theory are introduced and it is shown how the results of FORM based reliability analysis may be related to partial safety factors and characteristic values....... Thereafter the code calibration problem is presented in its principal decision theoretical form and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore suggested values for acceptable annual failure probabilities are given for ultimate...... and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability based code calibration of LRFD based design codes....
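The link between a FORM reliability result and partial safety factors that the paper builds on can be sketched for the simplest linear limit state g = R − S with normal variables. The statistics below are illustrative, not values from the paper.

```python
import math

# Illustrative (hypothetical) statistics for resistance R and load effect S.
mu_R, sigma_R = 350.0, 35.0
mu_S, sigma_S = 200.0, 40.0

# Reliability index for the linear limit state g = R - S.
beta = (mu_R - mu_S) / math.sqrt(sigma_R**2 + sigma_S**2)

# FORM sensitivity factors and design-point coordinates.
alpha_R = sigma_R / math.sqrt(sigma_R**2 + sigma_S**2)
alpha_S = -sigma_S / math.sqrt(sigma_R**2 + sigma_S**2)
R_star = mu_R - alpha_R * beta * sigma_R
S_star = mu_S - alpha_S * beta * sigma_S   # equals R_star (g = 0 at design point)

# Partial safety factors relative to characteristic values
# (5% fractile for R, 95% fractile for S, normal assumption).
R_k = mu_R - 1.645 * sigma_R
S_k = mu_S + 1.645 * sigma_S
gamma_R = R_k / R_star   # material factor (divides R_k)
gamma_S = S_star / S_k   # load factor (multiplies S_k)
```

Code calibration then adjusts the characteristic values and partial factors so that beta meets the chosen target reliability.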

  15. Analysis of results of AZTRAN and AZKIND codes for a BWR; Analisis de resultados de los codigos AZTRAN y AZKIND para un BWR

    Energy Technology Data Exchange (ETDEWEB)

    Bastida O, G. E.; Vallejo Q, J. A.; Galicia A, J.; Francois L, J. L. [UNAM, Facultad de Ingenieria, Departamento de Sistemas Energeticos, Paseo Cuauhnahuac 8532, 62550 Jiutepec, Morelos (Mexico); Xolocostli M, J. V.; Rodriguez H, A.; Gomez T, A. M., E-mail: gbo729@yahoo.com.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2016-09-15

    This paper presents an analysis of results obtained from simulations performed with the neutron transport code AZTRAN and the neutron diffusion kinetics code AZKIND, based on comparisons with models corresponding to a typical BWR, in order to verify the behavior and reliability of the values obtained with these codes at their current stage of development. For this, simulations of different geometries were made using validated nuclear codes, such as CASMO, MCNP5 and Serpent. The results obtained are considered adequate, since they are comparable with those obtained and reported with other codes, based mainly on the neutron multiplication factor and the power distribution. (Author)

  16. FUMACS-G, a Graphical User Interface for FUMACS Code Package

    International Nuclear Information System (INIS)

    Trontl, K.; Gergeta, K.; Smuc, T.

    2002-01-01

    The FUMACS (FUel MAnagement Code System) code package was developed at the Rudjer Boskovic Institute in 1991 with the aim of enabling in-core fuel management analysis of the NPP Krsko core for nominal conditions. Due to the modernization and uprating of the NPP Krsko core in 2000, and the inadequacy of the original 1991 FUMACS in simulating the NPP Krsko core in these uprated conditions, a new version of the FUMACS code package was developed in 2001 - FUMACS/FEEC 2001. The code package upgrading procedure consisted of two main aspects: modifications of the master files, libraries and codes necessary for proper modeling of the uprated NPP Krsko core, and development of a code package structure suitable for the Windows-32 environment. The latter included upgrading the source code from FORTRAN 77 to Fortran 90 and development of a graphical, user-friendly interface with a fully integrated electronic help system. Since the original FUMACS code package was developed as a DOS-based application, running the code package on a Windows operating system proved to be rather inefficient and lacking the advantages of a standard Windows application. Therefore, FUMACS-G has been developed as a user-friendly environment for handling of all project input and output files, as well as for easier overall project management. The design of the FUMACS-G shell is based on Microsoft application design guidelines. (author)

  17. Exploration of Rice Husk Compost as an Alternate Organic Manure to Enhance the Productivity of Blackgram in Typic Haplustalf and Typic Rhodustalf

    Directory of Open Access Journals (Sweden)

    Subramanium Thiyageshwari

    2018-02-01

    Full Text Available The present study was aimed at using the cellulolytic bacterium Enhydrobacter and the fungus Aspergillus sp. for preparing compost from rice husk (RH). Further, the prepared compost was tested for its effect on blackgram growth promotion along with different levels of the recommended dose of fertilizer (RDF) in black soil (typic Haplustalf) and red soil (typic Rhodustalf). The results revealed that inoculation with the lignocellulolytic fungus (LCF) Aspergillus sp. @ 2% was the most efficient method of composting within a short period. Characterization of composted rice husk (CRH) was carried out through scanning electron microscopy (SEM) to identify significant structural changes. At the end of composting, N, P and K content increased with decrease in CO2 evolution, C:N and C:P ratios. In comparison to inorganic fertilization, an increase in grain yield of 16% in typic Haplustalf and 17% in typic Rhodustalf soil over 100% RDF was obtained from the integrated application of CRH @ 5 t ha−1 with 50% RDF and biofertilizers. The crude protein content was maximum with the combined application of CRH, 50% RDF and biofertilizers, at 20% and 21% in typic Haplustalf and typic Rhodustalf soils, respectively. Nutrient-rich CRH has proved its efficiency on crop growth and soil fertility.

  18. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  19. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker: Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.

  20. Contamination profile on typical printed circuit board assemblies vs soldering process

    DEFF Research Database (Denmark)

    Conseil, Helene; Jellesen, Morten Stendahl; Ambat, Rajan

    2014-01-01

    Purpose – The purpose of this paper was to analyse typical printed circuit board assemblies (PCBAs) processed by reflow, wave or selective wave soldering for typical levels of process-related residues, resulting from a specific or combination of soldering processes. Typical solder flux residue...... structure was identified by Fourier transform infrared spectroscopy, while the concentration was measured using ion chromatography, and the electrical properties of the extracts were determined by measuring the leak current using a twin platinum electrode set-up. Localized extraction of residue was carried...

  1. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...... level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown....

  2. Comprehensive nuclear model calculations: theory and use of the GNASH code

    International Nuclear Information System (INIS)

    Young, P.G.; Arthur, E.D.; Chadwick, M.B.

    1998-01-01

    The theory and operation of the nuclear reaction theory computer code GNASH is described, and detailed instructions are presented for code users. The code utilizes statistical Hauser-Feshbach theory with full angular momentum conservation and includes corrections for preequilibrium effects. This version is expected to be applicable for incident particle energies between 1 keV and 150 MeV and for incident photon energies to 140 MeV. General features of the code, the nuclear models that are utilized, input parameters needed to perform calculations, and the output quantities from typical problems are described in detail. A number of new features compared to previous versions are described in this manual, including the following: (1) inclusion of multiple preequilibrium processes, which allows the model calculations to be performed above 50 MeV; (2) a capability to calculate photonuclear reactions; (3) a method for determining the spin distribution of residual nuclei following preequilibrium reactions; and (4) a description of how preequilibrium spectra calculated with the FKK theory can be utilized (the 'FKK-GNASH' approach). The computational structure of the code and the subroutines and functions that are called are summarized as well. Two detailed examples are considered: 14-MeV neutrons incident on 93Nb and 12-MeV neutrons incident on 238U. The former example illustrates a typical calculation aimed at determining neutron, proton, and alpha emission spectra from 14-MeV reactions, and the latter example demonstrates use of the fission model in GNASH. Results from a variety of other cases are illustrated. (author)
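The statistical Hauser-Feshbach picture at the heart of GNASH can be reduced to its simplest element: the compound nucleus decays into each open channel with probability proportional to that channel's transmission coefficient. A schematic sketch (hypothetical coefficients; width-fluctuation and preequilibrium corrections neglected):

```python
# Schematic Hauser-Feshbach branching: decay probability into channel c is
# T_c / sum(T). The coefficients below are invented for illustration only.
T = {"n": 0.85, "p": 0.30, "alpha": 0.05}   # transmission coefficients
total = sum(T.values())
branching = {ch: t / total for ch, t in T.items()}
# e.g. branching["n"] = 0.85 / 1.20 ~ 0.708
```

A full calculation sums such ratios over spins, parities and excitation energies of the residual nuclei, which is what GNASH automates.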

  3. Level 1 - level 2 interface

    International Nuclear Information System (INIS)

    Boneham, P.

    2003-01-01

    The Plant Damage States (PDS) are the starting point for the Level 2 analysis. A PDS is a group of core damage sequences that are expected to have similar severe accident progressions. In this paper an overview of the Level 1/Level 2 interface, example PDS parameters, example PDS definitions using codes, and an example Bridge Tree are presented. PDS frequency calculation (identification of sequences for each PDS in Level 1, splitting some core damage sequences which have different Level 2 progressions), code calculations providing support for grouping decisions and timings, as well as PDS frequencies and definitions input to Level 2, are also discussed

  4. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases

  5. Thermal-hydraulic code qualification: ATHOS2 and data from Bugey 4 and Tricastin 1. Final report

    International Nuclear Information System (INIS)

    Masiello, P.J.

    1983-02-01

    Measured data from steam generators at the Bugey 4 and Tricastin 1 nuclear power plants operated by Electricite de France (EdF) have been used in the qualification of the ATHOS2 computer code. ATHOS2 is a three-dimensional, two-phase thermal-hydraulic code for the steady-state and transient analysis of recirculating-type steam generators. Predicted data for circulation ratio and secondary fluid temperature just above the tube sheet have been compared with corresponding data measured by EdF during on-site testing of Westinghouse Model 51A (Bugey 4) and 51M (Tricastin 1) steam generators. Comparative analyses have been performed for steady-state operating conditions at five power levels for each plant installation. The transient capabilities of the ATHOS2 code were examined in the simulation of an open-grid (load reject from 100% power) test conducted at Bugey 4. Results show that predicted data for secondary fluid temperature at eight locations just above the tube sheet are typically within 1.5 °C of measured data

  6. Comparison of aerosol behavior codes with experimental results from a sodium fire in a containment

    International Nuclear Information System (INIS)

    Lhiaubet, G.; Kissane, M.P.; Seino, H.; Miyake, O.; Himeno, Y.

    1990-01-01

    The containment expert group (CONT), a subgroup of the CEC fast reactor Safety Working Group (SWG), has carried out several studies on the behavior of sodium aerosols which might form in a severe fast reactor accident during which primary sodium leaks into the secondary containment. These studies comprise an intercalibration of measurement devices used to determine the aerosol particle size spectrum, and the analysis and comparison of codes applied to the determination of aerosol behavior in a reactor containment. The paper outlines the results of measurements of typical data made for aerosols produced in a sodium fire and their comparison with results from different codes (PARDISEKO, AEROSIM, CONTAIN, AEROSOLS/B2). The sodium fire experiment took place at CEN-Cadarache (France) in a 400 m³ vessel. The fire lasted 90 minutes and the aerosol measurements were made over 10 hours at different locations inside the vessel. The results showed that the suspended mass calculated over time with the different codes was in good agreement with the experiment. However, the calculated aerosol deposition on the walls diverged between codes and was always significantly lower than the measured values

  7. PRESTO-PREP: a data preprocessor for the PRESTO-II code

    Energy Technology Data Exchange (ETDEWEB)

    Bell, M.A.; Emerson, C.J.; Fields, D.E.

    1984-07-01

    PRESTO-II is a computer code developed to evaluate possible health effects from shallow land disposal of low level radioactive wastes. PRESTO-PREP is a data preprocessor that has been developed to expedite the formation of input data sets for PRESTO-II. PRESTO-PREP utilizes a library of nuclide and risk-specific data. Given an initial waste inventory, the code creates the radionuclide portion of the associated input data set for PRESTO-II. 2 references.

  9. Notes on nuclear reactor core analysis code: CITATION

    International Nuclear Information System (INIS)

    Cepraga, D.G.

    1980-01-01

    The method which has evolved over the years for making power reactor calculations is the multigroup diffusion method. The CITATION code is designed to solve multigroup neutronics problems with application of the finite-difference diffusion theory approximation to neutron transport in up to three-dimensional geometry. The first part of this paper presents information about the mathematical equations programmed along with background material and certain displays to convey the nature of some of the formulations. The results obtained with the CITATION code regarding the neutron and burnup core analysis for a typical PWR reactor are presented in the second part of this paper. (author)
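The finite-difference diffusion eigenvalue problem that CITATION solves can be sketched in its simplest form: one energy group, one dimension, with k-effective obtained by power iteration on the fission source. The constants below are illustrative, not a CITATION input deck.

```python
import numpy as np

# One-group, 1-D slab analogue of the multigroup finite-difference solve:
# -D*phi'' + Sigma_a*phi = (1/k)*nu*Sigma_f*phi, zero-flux boundaries.
D, sig_a, nu_sig_f = 1.0, 0.07, 0.08      # cm, 1/cm, 1/cm (illustrative)
L, N = 100.0, 200                         # slab width (cm), interior nodes
h = L / (N + 1)

# Tridiagonal loss operator (diffusion + absorption).
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = 2 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < N - 1:
        A[i, i + 1] = -D / h**2

# Power iteration on the fission source for the k-effective eigenvalue.
phi, k = np.ones(N), 1.0
for _ in range(300):
    phi_new = np.linalg.solve(A, nu_sig_f * phi)
    k = phi_new.sum() / phi.sum()
    phi = phi_new / k

# Analytic check for this slab: k = nu*Sigma_f / (Sigma_a + D*(pi/L)**2)
```

CITATION generalizes exactly this structure to multiple energy groups, up to three dimensions, and depletion over burnup steps.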

  10. A proposed framework for computational fluid dynamics code calibration/validation

    International Nuclear Information System (INIS)

    Oberkampf, W.L.

    1993-01-01

    The paper reviews the terminology and methodology that have been introduced during the last several years for building confidence in the predictions from Computational Fluid Dynamics (CFD) codes. Code validation terminology developed for nuclear reactor analyses and aerospace applications is reviewed and evaluated. Currently used terminology such as "calibrated code," "validated code," and a "validation experiment" is discussed along with the shortcomings and criticisms of these terms. A new framework is proposed for building confidence in CFD code predictions that overcomes some of the difficulties of past procedures and delineates the causes of uncertainty in CFD predictions. Building on previous work, new definitions of code verification and calibration are proposed. These definitions provide more specific requirements for the knowledge level of the flow physics involved and the solution accuracy of the given partial differential equations. As part of the proposed framework, categories are also proposed for flow physics research, flow modeling research, and the application of numerical predictions. The contributions of physical experiments, analytical solutions, and other numerical solutions are discussed, showing that each should be designed to achieve a distinctively separate purpose in building confidence in the accuracy of CFD predictions. A number of examples are given for each approach to suggest methods for obtaining the highest value for CFD code quality assurance

  11. System verification and validation report for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the Verification and Validation Report for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to facilitate the interpretation of moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, can also be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code will then interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data. The primary purpose of this document is to provide the results of the system testing and the conclusions based thereon. The results of the testing process are documented in the body of the report. Appendix A gives the test plan, including test procedures, used in conducting the tests. Appendix B lists the input data required to conduct the tests, and Appendices C and D list the numerical results of the tests
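The interpolate-and-match scheme described above can be sketched with a synthetic benchmark library; the detector names, grids and response surfaces below are invented stand-ins for TMAD's real data.

```python
import numpy as np

# Hypothetical benchmark library: detector response tabulated over the two
# independent variables, anomaly size and moisture content.
sizes = np.array([0.0, 5.0, 10.0, 15.0])         # anomaly size (cm)
moistures = np.array([0.0, 10.0, 20.0, 30.0])    # moisture content (wt%)

def make_table(c0, cs, cm):
    S, M = np.meshgrid(sizes, moistures, indexing="ij")
    return c0 + cs * S + cm * M                  # synthetic response surface

library = {"det_A": make_table(100.0, 3.0, 2.5),
           "det_B": make_table(80.0, 1.0, 4.0)}

def interp(table, size, moisture):
    """Bilinear interpolation of one benchmark table."""
    i = int(np.clip(np.searchsorted(sizes, size) - 1, 0, len(sizes) - 2))
    j = int(np.clip(np.searchsorted(moistures, moisture) - 1, 0, len(moistures) - 2))
    t = (size - sizes[i]) / (sizes[i + 1] - sizes[i])
    u = (moisture - moistures[j]) / (moistures[j + 1] - moistures[j])
    return ((1 - t) * (1 - u) * table[i, j] + t * (1 - u) * table[i + 1, j]
            + (1 - t) * u * table[i, j + 1] + t * u * table[i + 1, j + 1])

def best_match(measured):
    """Search for the (size, moisture) pair whose interpolated responses
    best match the measured detector responses (least squares)."""
    grid = [(s, m) for s in np.linspace(0, 15, 61) for m in np.linspace(0, 30, 61)]
    return min(grid, key=lambda p: sum((interp(library[d], *p) - r) ** 2
                                       for d, r in measured.items()))

# Readings consistent with anomaly size 7.5 cm, moisture content 12 wt%:
measured = {"det_A": 100 + 3 * 7.5 + 2.5 * 12, "det_B": 80 + 7.5 + 4 * 12}
size_hat, moist_hat = best_match(measured)
```

Two detectors with distinct response surfaces make the inversion unique here; with proportional responses the match would be ambiguous.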

  12. Enforcing the use of API functions in Linux code

    DEFF Research Database (Denmark)

    Lawall, Julia; Muller, Gilles; Palix, Nicolas Jean-Michel

    2009-01-01

    In the Linux kernel source tree, header files typically define many small functions that have a simple behavior but are critical to ensure readability, correctness, and maintainability. We have observed, however, that some Linux code does not use these functions systematically. In this paper, we...... in the header file include/linux/usb.h....

  13. The PWR spectral code GELS. Pt. 1

    International Nuclear Information System (INIS)

    Penndorf, K.; Schult, F.; Schulz, G.

    1976-01-01

    The code produces group constant libraries for the static PWR design of whatever fuel cycle - Uranium, Thorium, or Plutonium. The whole range of temperatures is covered, and the treatment of strong lumped absorbers such as control or burnable poison pins is included. The main features are: 1) Good accuracy in spite of not fitting the material data to critical experiments; 2) speed and relatively modest computer requirements; 3) restriction to PWR's only. In case of demands for higher accuracy there is a further restriction concerning the library data of the epithermal resonance absorbers: they are strictly valid only for several special lattice geometries. Three samples are given, each representing a typical application of the code. Two of them likewise are demonstrations of recalculated experiments. (orig.) [de

  14. Safety analysis of loss of flow transients in a typical research reactor by RELAP5/MOD3.3

    International Nuclear Information System (INIS)

    Di Maro, B.; Pierro, F.; Adorni, M.; Bousbia Salah, A.; D'Auria, F.

    2003-01-01

    The main aim of the following study is to assess the RELAP5/MOD3.3 code capability in simulating transient dynamic behaviour in nuclear research reactors. For this purpose, a typical loss of flow transient in a representative MTR (Materials Testing Reactor) fuel type research reactor is considered. The transient considered here is a sudden pump trip followed by the opening of a safety valve in order to allow passive decay heat removal by natural convection. During such a transient the coolant flow, originally downward, decays and reverses, and the cooling of the core passes from forced, through mixed, and finally to natural circulation. This makes it suitable for evaluating the new features of RELAP5 in simulating such specific operating conditions. The instantaneous reactor power is derived through a point kinetics calculation; both protected and unprotected cases are considered (with and without scram). The results obtained from this analysis were also compared with previous results obtained with the older RELAP5/MOD2 code. (author)

  15. Fluid dynamics and heat transfer methods for the TRAC code

    International Nuclear Information System (INIS)

    Reed, W.H.; Kirchner, W.L.

    1977-01-01

    A computer code called TRAC is being developed for analysis of loss-of-coolant accidents and other transients in light water reactors. This code involves a detailed, multidimensional description of two-phase flow coupled implicitly through appropriate heat transfer coefficients with a simulation of the temperature field in fuel and structural material. Because TRAC utilizes about 1000 fluid mesh cells to describe an LWR system, whereas existing lumped parameter codes typically involve fewer than 100 fluid cells, new highly implicit difference techniques are developed that yield acceptable computing times on modern computers. Several test problems for which experimental data are available, including blowdown of single pipe and loop configurations with and without heated walls, have been computed with TRAC. Excellent agreement with experimental results has been obtained
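The motivation for the highly implicit techniques mentioned above can be demonstrated on the 1-D heat equation: explicit (forward-Euler) stepping is stable only for dt ≤ dx²/(2a), while implicit (backward-Euler) stepping tolerates far larger steps. This is a generic numerical illustration, not TRAC's actual scheme.

```python
import numpy as np

# 1-D heat equation u_t = a*u_xx with zero boundary values, stepped with a
# time step 50x above the explicit stability limit dt <= dx**2/(2*a).
a, L, N = 1.0, 1.0, 49
dx = L / (N + 1)
dt = 50 * dx**2 / (2 * a)

# Standard second-difference Laplacian matrix.
lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2

x = np.linspace(dx, L - dx, N)
u0 = np.minimum(x, L - x)             # triangular pulse (rich in high modes)
u_exp, u_imp = u0.copy(), u0.copy()
A_imp = np.eye(N) - dt * a * lap      # backward-Euler system matrix

for _ in range(20):
    u_exp = u_exp + dt * a * (lap @ u_exp)    # explicit step: diverges
    u_imp = np.linalg.solve(A_imp, u_imp)     # implicit step: smooth decay
```

The explicit solution blows up within a few steps at this dt, while the implicit one decays toward equilibrium; this trade of a linear solve per step for unconditional stability is what makes system codes with ~1000 fluid cells affordable.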

  16. Memory for pictures and words as a function of level of processing: Depth or dual coding?

    Science.gov (United States)

    D'Agostino, P R; O'Neill, B J; Paivio, A

    1977-03-01

    The experiment was designed to test differential predictions derived from dual-coding and depth-of-processing hypotheses. Subjects under incidental memory instructions free recalled a list of 36 test events, each presented twice. Within the list, an equal number of events were assigned to structural, phonemic, and semantic processing conditions. Separate groups of subjects were tested with a list of pictures, concrete words, or abstract words. Results indicated that retention of concrete words increased as a direct function of the processing-task variable (structural < phonemic < semantic), whereas pictures maintained a uniformly high level of memory performance. These data provided strong support for the dual-coding model.

  17. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    Science.gov (United States)

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of metamaterial has the ability to describe the material in a digital way. The spatial coding metamaterials are typically constructed by unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only are different phase responses of the unit cells considered, but different phase sensitivities are also required. Due to different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent the low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.
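The link between a spatial coding pattern and the direction of radiated energy can be sketched with a 1-D array-factor model, an idealized stand-in for the measured metasurface (element count and spacing are assumptions): a "000111…" pattern splits the single broadside beam of the uniform pattern into two symmetric beams.

```python
import numpy as np

# 1-D coding metasurface, far-field array factor (idealized model):
# digit '0' -> 0 phase, digit '1' -> pi phase, half-wavelength spacing.
N, d = 36, 0.5                                  # elements, spacing (wavelengths)
coding = np.tile([0, 0, 0, 1, 1, 1], N // 6)    # "000111..." supercell pattern
phases = np.pi * coding

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)
n = np.arange(N)
path = 2 * np.pi * d * np.outer(np.sin(theta), n)     # geometric phase delays
af_coded = np.abs(np.exp(1j * (path + phases)).sum(axis=1))
af_uniform = np.abs(np.exp(1j * path).sum(axis=1))    # all-'0' coding

# The uniform pattern radiates a single broadside beam (0 deg); the 000111
# pattern (supercell period 3 wavelengths) steers energy to the grating
# orders sin(theta) = +/- 1/3, i.e. about +/- 19.5 deg.
peak_coded = np.degrees(theta[np.argmax(af_coded)])
peak_uniform = np.degrees(theta[np.argmax(af_uniform)])
```

Changing the digit pattern (or, in the paper's frequency coding, the per-digit phase sensitivity versus frequency) redirects the scattered beams without touching the physical layout.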

  18. Effect of a typical in-season week on strength jump and sprint performances in national-level female basketball players.

    Science.gov (United States)

    Delextrat, A; Trochym, E; Calleja-González, J

    2012-04-01

    The aim of this study was to investigate the effect of a typical in-season week including four practice sessions and one competitive game on strength, jump and sprint performances in national-level female basketball players. Nine female basketball players (24.3±4.1 years old, 173.0±7.9 cm, 65.1±10.9 kg, 21.1±3.8% body fat) participated in ten testing sessions, before and immediately after practices and game (five pre- and five post-tests). Each session involved isokinetic peak torque measurements of the quadriceps and hamstrings of the dominant leg at 60°·s-1, countermovement jump (CMJ) and 20-m sprint. Fluid loss and subjective training load were measured during each practice session, while the frequencies of the main movements performed during the game were recorded. A two-way ANOVA was used to assess the effect of each practice/game and the effect of the day of the week on performances, and the relationship between performance variations and variables recorded during practices/game was analyzed by a Pearson correlation coefficient. Individual sessions induced significant decreases in lower limb strength (from 4.6 to 10.9%, P < 0.05) and in jump ability; coaches should therefore monitor the recovery of their players' strength, sprint and jump capacities following specific sessions.

  19. Computer codes for the analysis of flask impact problems

    International Nuclear Information System (INIS)

    Neilson, A.J.

    1984-09-01

    This review identifies typical features of the design of transportation flasks and considers some of the analytical tools required for the analysis of impact events. Because of the complexity of the physical problem, it is unlikely that a single code will adequately deal with all the aspects of the impact incident. Candidate codes are identified on the basis of current understanding of their strengths and limitations. It is concluded that the HONDO-II, DYNA3D and ABAQUS codes, which are already mounted on UKAEA computers, will be suitable tools for use in the analysis of experiments conducted in the proposed AEEW programme and of general flask impact problems. Initial attention should be directed at the DYNA3D and ABAQUS codes, with HONDO-II being reserved for situations where the three-dimensional elements of DYNA3D may provide uneconomic simulations in planar or axisymmetric geometries. Attention is drawn to the importance of access to suitable mesh generators to create the nodal coordinate and element topology data required by these structural analysis codes. (author)

  20. Cooperation of experts' opinion, experiment and computer code development

    International Nuclear Information System (INIS)

    Wolfert, K.; Hicken, E.

    The connection between code development, code assessment and confidence in the analysis of transients will be discussed. In this manner, the major sources of errors in the codes and errors in applications of the codes will be shown. Standard problem results emphasize that, in order to have confidence in licensing statements, the codes must be physically realistic and the code user must be qualified and experienced. We will discuss why there is disagreement between the licensing authority and the vendor concerning assessment of the fulfillment of safety goal requirements. The answer to the question lies in the different confidence levels of the assessment of transient analysis. It is expected that a decrease in the disagreement will result from an increased confidence level. Strong efforts will be made to increase this confidence level through improvements in the codes, experiments and related organizational structures. Because of the low probability of loss-of-coolant accidents in the nuclear industry, assessment must rely on analytical techniques and experimental investigations. (orig./HP) [de

  1. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  2. A Typical Synergy

    Science.gov (United States)

    van Noort, Thomas; Achten, Peter; Plasmeijer, Rinus

    We present a typical synergy between dynamic types (dynamics) and generalised algebraic datatypes (GADTs). The former provides a clean approach to integrating dynamic typing in a statically typed language. It allows values to be wrapped together with their type in a uniform package, deferring type unification until run time using a pattern match annotated with the desired type. The latter allows for the explicit specification of constructor types, as to enforce their structural validity. In contrast to ADTs, GADTs are heterogeneous structures since each constructor type is implicitly universally quantified. Unfortunately, pattern matching only enforces structural validity and does not provide instantiation information on polymorphic types. Consequently, functions that manipulate such values, such as a type-safe update function, are cumbersome due to boilerplate type representation administration. In this paper we focus on improving such functions by providing a new GADT annotation via a natural synergy with dynamics. We formally define the semantics of the annotation and touch on novel other applications of this technique such as type dispatching and enforcing type equality invariants on GADT values.

  3. Random linear network coding for streams with unequally sized packets

    DEFF Research Database (Denmark)

    Taghouti, Maroua; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2016-01-01

    State of the art Random Linear Network Coding (RLNC) schemes assume that data streams generate packets with equal sizes. This is an assumption that results in the highest efficiency gains for RLNC. A typical solution for managing unequal packet sizes is to zero-pad the smallest packets. However, ...
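The zero-padding workaround mentioned above can be sketched in a few lines. The encoder below is an illustrative simplification of RLNC, not the authors' scheme: coefficients are drawn from GF(2) (0/1 only), so a coded packet is simply the XOR of a random subset of the zero-padded source packets. The function names and packet sizes are invented for the example.

```python
import random

def zero_pad(packets):
    """Pad each packet with zero bytes to the length of the longest packet."""
    max_len = max(len(p) for p in packets)
    return [p + bytes(max_len - len(p)) for p in packets]

def rlnc_encode_gf2(packets, rng):
    """One coded packet: a random GF(2) linear combination, i.e. the XOR of
    the packets selected by random 0/1 coefficients."""
    padded = zero_pad(packets)
    coeffs = [rng.randint(0, 1) for _ in padded]
    if not any(coeffs):                     # skip the degenerate all-zero vector
        coeffs[rng.randrange(len(coeffs))] = 1
    coded = bytearray(len(padded[0]))
    for c, p in zip(coeffs, padded):
        if c:
            for i, byte in enumerate(p):
                coded[i] ^= byte
    return coeffs, bytes(coded)

coeffs, coded = rlnc_encode_gf2([b"\x01", b"\x02\x03\x04"], random.Random(1))
```

The padding overhead this abstract alludes to is visible directly: every coded packet carries the length of the largest source packet, regardless of how short the others are.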

  4. Architectural and Algorithmic Requirements for a Next-Generation System Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    V.A. Mousseau

    2010-05-01

    This document presents high-level architectural and system requirements for a next-generation system analysis code (NGSAC) to support reactor safety decision-making by plant operators and others, especially in the context of light water reactor plant life extension. The capabilities of NGSAC will be different from those of current-generation codes, not only because computers have evolved significantly in the generations since the current paradigm was first implemented, but because the decision-making processes that need the support of next-generation codes are very different from the decision-making processes that drove the licensing and design of the current fleet of commercial nuclear power reactors. The implications of these newer decision-making processes for NGSAC requirements are discussed, and resulting top-level goals for the NGSAC are formulated. From these goals, the general architectural and system requirements for the NGSAC are derived.

  5. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  6. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  7. The Light-Water-Reactor Version of the URANUS Integral fuel-rod code

    Energy Technology Data Exchange (ETDEWEB)

Lassmann, K.; Moreno, A.

    1977-07-01

The LWR version of the URANUS code, a digital computer programme for the thermal and mechanical analysis of fuel rods, is presented. Material properties are discussed and their effect on integral fuel rod behaviour elaborated via URANUS results for some carefully selected reference experiments. The numerical results do not represent post-irradiation analyses of in-pile experiments; rather, they illustrate typical and diverse URANUS capabilities. The performance test shows that URANUS is reliable and efficient; thus the code is a most valuable tool in fuel rod analysis work. K. Lassmann developed the LWR version of the URANUS code; material properties were reviewed and supplied by A. Moreno. (Author) 41 refs.

  8. The light-water-reactor version of the Uranus integral fuel-rod code

    International Nuclear Information System (INIS)

    Moreno, A.; Lassmann, K.

    1977-01-01

The LWR version of the Uranus code, a digital computer programme for the thermal and mechanical analysis of fuel rods, is presented. Material properties are discussed and their effect on integral fuel rod behaviour elaborated via Uranus results for some carefully selected reference experiments. The numerical results do not represent post-irradiation analyses of in-pile experiments; rather, they illustrate typical and diverse Uranus capabilities. The performance test shows that Uranus is reliable and efficient; thus the code is a most valuable tool in fuel rod analysis work. K. Lassmann developed the LWR version of the Uranus code; material properties were reviewed and supplied by A. Moreno. (author)

  9. Nonlinear to Linear Elastic Code Coupling in 2-D Axisymmetric Media.

    Energy Technology Data Exchange (ETDEWEB)

    Preston, Leiph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-08-01

    Explosions within the earth nonlinearly deform the local media, but at typical seismological observation distances, the seismic waves can be considered linear. Although nonlinear algorithms can simulate explosions in the very near field well, these codes are computationally expensive and inaccurate at propagating these signals to great distances. A linearized wave propagation code, coupled to a nonlinear code, provides an efficient mechanism to both accurately simulate the explosion itself and to propagate these signals to distant receivers. To this end we have coupled Sandia's nonlinear simulation algorithm CTH to a linearized elastic wave propagation code for 2-D axisymmetric media (axiElasti) by passing information from the nonlinear to the linear code via time-varying boundary conditions. In this report, we first develop the 2-D axisymmetric elastic wave equations in cylindrical coordinates. Next we show how we design the time-varying boundary conditions passing information from CTH to axiElasti, and finally we demonstrate the coupling code via a simple study of the elastic radius.
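The handoff described above — recording motion at a coupling surface and replaying it as a time-varying boundary condition in a linear solver — can be illustrated with a toy 1-D example. Everything here is invented for illustration (the pulse history, grid, and parameters); the real CTH/axiElasti coupling is 2-D axisymmetric and far more involved.

```python
import math

def propagate_linear(boundary_history, nx=50, c=1.0, dx=1.0, dt=0.5):
    """Drive a 1-D linear wave solver with a time-varying Dirichlet boundary
    condition, standing in for a history recorded at the coupling surface of
    a near-field (nonlinear) simulation."""
    u_prev = [0.0] * nx
    u = [0.0] * nx
    r2 = (c * dt / dx) ** 2            # Courant number squared (<= 1 for stability)
    for value in boundary_history:
        u_next = [0.0] * nx
        u_next[0] = value              # time-varying BC handed over from the near field
        for i in range(1, nx - 1):     # standard second-order update; far end held fixed
            u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next
    return u

# hypothetical "recorded" source-region history: one smooth pulse, then quiet
history = [math.sin(math.pi * n / 20) ** 2 for n in range(21)] + [0.0] * 40
field = propagate_linear(history)
```

Once the pulse has been injected, the linear solver carries it outward at a fraction of the cost of continuing the nonlinear calculation over the same domain, which is the point of the coupling.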

  10. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
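The final optimization step — tabulating total system cost over the two remaining design choices and selecting the minimum — might look like the sketch below. The cost model is invented purely for illustration and bears no relation to the code's actual cost factors for machined parts, materials, and components.

```python
def system_cost(rise_time_us, aspect_ratio):
    """Hypothetical cost model (M$): faster injector rise times demand more
    pulsed power; extreme core aspect ratios waste ferrite volume."""
    pulsed_power = 120.0 / rise_time_us                     # grows for fast rise times
    core_material = 10.0 * (aspect_ratio + 1.0 / aspect_ratio)  # minimal near 1
    return pulsed_power + core_material

def optimize(rise_times, aspect_ratios):
    """Tabulate cost over the two design choices and return the cheapest pair."""
    grid = {(t, a): system_cost(t, a) for t in rise_times for a in aspect_ratios}
    best = min(grid, key=grid.get)
    return best, grid[best], grid

best, cost, grid = optimize([10, 20, 40], [0.5, 1.0, 2.0])
```

Plotting `grid` as a surface over the two choices reproduces the selection step the abstract describes: an optimum read off the cost map, possibly subject to additional criteria.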

  11. A NEW HYBRID N-BODY-COAGULATION CODE FOR THE FORMATION OF GAS GIANT PLANETS

    International Nuclear Information System (INIS)

    Bromley, Benjamin C.; Kenyon, Scott J.

    2011-01-01

We describe an updated version of our hybrid N-body-coagulation code for planet formation. In addition to the features of our 2006-2008 code, our treatment now includes algorithms for the one-dimensional evolution of the viscous disk, the accretion of small particles in planetary atmospheres, gas accretion onto massive cores, and the response of N-bodies to the gravitational potential of the gaseous disk and the swarm of planetesimals. To validate the N-body portion of the algorithm, we use a battery of tests in planetary dynamics. As a first application of the complete code, we consider the evolution of Pluto-mass planetesimals in a swarm of 0.1-1 cm pebbles. In a typical evolution time of 1-3 Myr, our calculations transform 0.01-0.1 M☉ disks of gas and dust into planetary systems containing super-Earths, Saturns, and Jupiters. Low-mass planets form more often than massive planets; disks with smaller α form more massive planets than disks with larger α. For Jupiter-mass planets, masses of solid cores are 10-100 M⊕.

  12. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 2. Special test cases

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-08-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. Volume 1, titled ''Guideline Approach,'' consists of Chapters 1 through 5 and a glossary. Chapters 2 through 5 provide the more detailed discussions about the code selection approach. This volume, Volume 2, consists of four appendices reporting on the technical evaluation test cases designed to help verify the accuracy of ground-water transport codes. 20 refs

  13. Circular codes revisited: a statistical approach.

    Science.gov (United States)

    Gonzalez, D L; Giannerini, S; Rosa, R

    2011-04-21

In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and undergone rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes, with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on this code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored so as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
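The covering capability studied here — what fraction of the codons read in each frame belong to a given trinucleotide code — is easy to state as code. The sketch below uses a three-codon toy code, not the actual 20-codon Arquès-Michel code, and an artificial sequence chosen so that only frame 0 is covered.

```python
def frame_coverage(seq, code, frame):
    """Fraction of codons read in the given frame (0, 1, 2) that belong
    to the trinucleotide code."""
    codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
    if not codons:
        return 0.0
    return sum(codon in code for codon in codons) / len(codons)

# illustrative toy code (NOT the 20-codon Arquès-Michel code X0)
toy_code = {"GAA", "GAC", "GAG"}
seq = "GAAGACGAG" * 3
coverage = [frame_coverage(seq, toy_code, f) for f in range(3)]
```

A large gap between the coverage of frame 0 and the shifted frames is precisely the property that lets a circular code flag, and help restore, the correct reading frame.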

  14. Bursts generate a non-reducible spike-pattern code

    Directory of Open Access Journals (Sweden)

    Hugo G Eyherabide

    2009-05-01

    Full Text Available On the single-neuron level, precisely timed spikes can either constitute firing-rate codes or spike-pattern codes that utilize the relative timing between consecutive spikes. There has been little experimental support for the hypothesis that such temporal patterns contribute substantially to information transmission. Using grasshopper auditory receptors as a model system, we show that correlations between spikes can be used to represent behaviorally relevant stimuli. The correlations reflect the inner structure of the spike train: a succession of burst-like patterns. We demonstrate that bursts with different spike counts encode different stimulus features, such that about 20% of the transmitted information corresponds to discriminating between different features, and the remaining 80% is used to allocate these features in time. In this spike-pattern code, the "what" and the "when" of the stimuli are encoded in the duration of each burst and the time of burst onset, respectively. Given the ubiquity of burst firing, we expect similar findings also for other neural systems.

  15. Arbitrariness is not enough: towards a functional approach to the genetic code.

    Science.gov (United States)

    Lacková, Ľudmila; Matlach, Vladimír; Faltýnek, Dan

    2017-12-01

    Arbitrariness in the genetic code is one of the main reasons for a linguistic approach to molecular biology: the genetic code is usually understood as an arbitrary relation between amino acids and nucleobases. However, from a semiotic point of view, arbitrariness should not be the only condition for definition of a code, consequently it is not completely correct to talk about "code" in this case. Yet we suppose that there exist a code in the process of protein synthesis, but on a higher level than the nucleic bases chains. Semiotically, a code should be always associated with a function and we propose to define the genetic code not only relationally (in basis of relation between nucleobases and amino acids) but also in terms of function (function of a protein as meaning of the code). Even if the functional definition of meaning in the genetic code has been discussed in the field of biosemiotics, its further implications have not been considered. In fact, if the function of a protein represents the meaning of the genetic code (the sign's object), then it is crucial to reconsider the notion of its expression (the sign) as well. In our contribution, we will show that the actual model of the genetic code is not the only possible and we will propose a more appropriate model from a semiotic point of view.

  16. On the Performance of a Multi-Edge Type LDPC Code for Coded Modulation

    NARCIS (Netherlands)

    Cronie, H.S.

    2005-01-01

    We present a method to combine error-correction coding and spectral-efficient modulation for transmission over the Additive White Gaussian Noise (AWGN) channel. The code employs signal shaping which can provide a so-called shaping gain. The code belongs to the family of sparse graph codes for which

  17. Alternate symbol inversion for improved symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Simon, M. K.; Smith, J. G.

    1980-01-01

    Inverting alternate symbols of the encoder output of a convolutionally coded system provides sufficient density of symbol transitions to guarantee adequate symbol synchronizer performance, a guarantee otherwise lacking. Although alternate symbol inversion may increase or decrease the average transition density, depending on the data source model, it produces a maximum number of contiguous symbols without transition for a particular class of convolutional codes, independent of the data source model. Further, this maximum is sufficiently small to guarantee acceptable symbol synchronizer performance for typical applications. Subsequent inversion of alternate detected symbols permits proper decoding.
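The mechanism is simple enough to sketch on hard binary symbols. The helper names below are invented; the point is that a transition-free encoder output becomes a maximally alternating channel sequence, and the receiver undoes the inversion after detection.

```python
def invert_alternate(symbols):
    """Invert every second channel symbol (indices 1, 3, 5, ...)."""
    return [s ^ 1 if i % 2 else s for i, s in enumerate(symbols)]

def longest_transition_free_run(symbols):
    """Length of the longest run of contiguous identical symbols,
    i.e. the longest stretch offering no transition to the synchronizer."""
    best = run = 1
    for a, b in zip(symbols, symbols[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

encoder_out = [0] * 12                    # worst case: no transitions at all
channel = invert_alternate(encoder_out)   # alternates 0,1,0,1,... on the channel
decoded_in = invert_alternate(channel)    # receiver re-inverts before decoding
```

For real convolutional codes the guarantee in the abstract is subtler (the bound on contiguous transition-free symbols depends on the code), but the worst case above shows why the density of transitions can only be helped, never destroyed, for constant data.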

  18. Hominoid-specific de novo protein-coding genes originating from long non-coding RNAs.

    Directory of Open Access Journals (Sweden)

    Chen Xie

    2012-09-01

Full Text Available Tinkering with pre-existing genes has long been known as a major way to create new genes. Recently, however, motherless protein-coding genes have been found to have emerged de novo from ancestral non-coding DNAs. How these genes originated is not well addressed to date. Here we identified 24 hominoid-specific de novo protein-coding genes with precise origination timing in vertebrate phylogeny. Strand-specific RNA-Seq analyses were performed in five rhesus macaque tissues (liver, prefrontal cortex, skeletal muscle, adipose, and testis), which were then integrated with public transcriptome data from human, chimpanzee, and rhesus macaque. On the basis of comparing the RNA expression profiles in the three species, we found that most of the hominoid-specific de novo protein-coding genes encoded polyadenylated non-coding RNAs in rhesus macaque or chimpanzee with a similar transcript structure and correlated tissue expression profile. According to the rule of parsimony, the majority of these hominoid-specific de novo protein-coding genes appear to have acquired a regulated transcript structure and expression profile before acquiring coding potential. Interestingly, although the expression profile was largely correlated, the coding genes in human often showed higher transcriptional abundance than their non-coding counterparts in rhesus macaque. The major findings we report in this manuscript are robust and insensitive to the parameters used in the identification and analysis of de novo genes. Our results suggest that at least a portion of long non-coding RNAs, especially those with active and regulated transcription, may serve as a birth pool for protein-coding genes, which are then further optimized at the transcriptional level.

  19. A Source-level Energy Optimization Framework for Mobile Applications

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

… process. The source code is the interface between the developer and hardware resources. In this paper, we propose an energy optimization framework guided by a source code energy model that allows developers to be aware of energy usage induced by the code and to apply very targeted source-level refactoring strategies. The framework also lays a foundation for the code optimization by automatic tools. To the best of our knowledge, our work is the first that achieves this for a high-level language such as Java. In a case study, the experimental evaluation shows that our approach is able to save from 6.4% to 50…

  20. UNIPIC code for simulations of high power microwave devices

    International Nuclear Information System (INIS)

    Wang Jianguo; Zhang Dianhui; Wang Yue; Qiao Hailiang; Li Xiaoze; Liu Chunliang; Li Yongdong; Wang Hongguang

    2009-01-01

In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, a CP weakly conditional-stable FDTD (CP WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or to load structures created previously. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, which agree well with each other.

  1. UNIPIC code for simulations of high power microwave devices

    Science.gov (United States)

    Wang, Jianguo; Zhang, Dianhui; Liu, Chunliang; Li, Yongdong; Wang, Yue; Wang, Hongguang; Qiao, Hailiang; Li, Xiaoze

    2009-03-01

In this paper, UNIPIC code, a new member in the family of fully electromagnetic particle-in-cell (PIC) codes for simulations of high power microwave (HPM) generation, is introduced. In the UNIPIC code, the electromagnetic fields are updated using the second-order, finite-difference time-domain (FDTD) method, and the particles are moved using the relativistic Newton-Lorentz force equation. The convolutional perfectly matched layer method is used to truncate the open boundaries of HPM devices. To model curved surfaces and avoid the time step reduction in the conformal-path FDTD method, a CP weakly conditional-stable FDTD (CP WCS FDTD) method, which combines the WCS FDTD and CP-FDTD methods, is implemented. UNIPIC is two-and-a-half dimensional, is written in the object-oriented C++ language, and can be run on a variety of platforms including WINDOWS, LINUX, and UNIX. Users can use the graphical user interface to create the geometric structures of the simulated HPM devices, or to load structures created previously. Numerical experiments on some typical HPM devices using the UNIPIC code are given. The results are compared to those obtained from some well-known PIC codes, which agree well with each other.

  2. Generation of a typical meteorological year for Hong Kong

    International Nuclear Information System (INIS)

    Chan, Apple L.S.; Chow, T.T.; Fong, Square K.F.; Lin, John Z.

    2006-01-01

    Weather data can vary significantly from year to year. There is a need to derive typical meteorological year (TMY) data to represent the long-term typical weather condition over a year, which is one of the crucial factors for successful building energy simulation. In this paper, various types of typical weather data sets including the TMY, TMY2, WYEC, WYEC2, WYEC2W, WYEC2T and IWEC were reviewed. The Finkelstein-Schafer statistical method was applied to analyze the hourly measured weather data of a 25-year period (1979-2003) in Hong Kong and select representative typical meteorological months (TMMs). The cumulative distribution function (CDF) for each year was compared with the CDF for the long-term composite of all the years in the period for four major weather indices including dry bulb temperature, dew point temperature, wind speed and solar radiation. Typical months for each of the 12 calendar months from the period of years were selected by choosing the one with the smallest deviation from the long-term CDF. The 12 TMMs selected from the different years were used for formulation of a TMY for Hong Kong
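The Finkelstein-Schafer selection step described above can be sketched directly: compute each candidate month's empirical CDF, measure its mean absolute deviation from the long-term CDF, and keep the month with the smallest deviation. The daily dry-bulb values below are invented toy data, and a real TMY selection would weight several indices, not one.

```python
def empirical_cdf(sample, x):
    """Fraction of the sample less than or equal to x."""
    return sum(v <= x for v in sample) / len(sample)

def fs_statistic(month_data, long_term_data):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's CDF and the long-term CDF, evaluated at each reading."""
    return sum(abs(empirical_cdf(month_data, x) - empirical_cdf(long_term_data, x))
               for x in long_term_data) / len(long_term_data)

def pick_typical_month(candidates, long_term_data):
    """Choose the year whose instance of this calendar month deviates least."""
    return min(candidates,
               key=lambda year: fs_statistic(candidates[year], long_term_data))

# hypothetical daily dry-bulb means for one calendar month across three years
candidates = {1999: [24, 25, 26, 27], 2000: [20, 21, 33, 34], 2001: [25, 26, 27, 28]}
long_term = [v for data in candidates.values() for v in data]
typical = pick_typical_month(candidates, long_term)
```

Repeating this for each of the 12 calendar months and concatenating the winners yields the TMY, as in the procedure the abstract outlines.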

  3. What is typical is good: The influence of face typicality on perceived trustworthiness

    NARCIS (Netherlands)

    Sofer, C.; Dotsch, R.; Wigboldus, D.H.J.; Todorov, A.T.

    2015-01-01

    The role of face typicality in face recognition is well established, but it is unclear whether face typicality is important for face evaluation. Prior studies have focused mainly on typicality's influence on attractiveness, although recent studies have cast doubt on its importance for attractiveness

  4. Identifying typical patterns of vulnerability: A 5-step approach based on cluster analysis

    Science.gov (United States)

Sietz, Diana; Lüdeke, Matthias; Kok, Marcel; Lucas, Paul; Walther, Carsten; Janssen, Peter

    2013-04-01

Specific processes that shape the vulnerability of socio-ecological systems to climate, market and other stresses derive from diverse background conditions. Within the multitude of vulnerability-creating mechanisms, distinct processes recur in various regions inspiring research on typical patterns of vulnerability. The vulnerability patterns display typical combinations of the natural and socio-economic properties that shape a system's vulnerability to particular stresses. Based on the identification of a limited number of vulnerability patterns, pattern analysis provides an efficient approach to improving our understanding of vulnerability and decision-making for vulnerability reduction. However, current pattern analyses often miss explicit descriptions of their methods and pay insufficient attention to the validity of their groupings. Therefore, the question arises: how do we identify typical vulnerability patterns in order to enhance our understanding of a system's vulnerability to stresses? A cluster-based pattern recognition applied at global and local levels is scrutinised with a focus on an applicable methodology and practicable insights. Taking the example of drylands, this presentation demonstrates the conditions necessary to identify typical vulnerability patterns. They are summarised in five methodological steps comprising the elicitation of relevant cause-effect hypotheses and the quantitative indication of mechanisms as well as an evaluation of robustness, a validation and a ranking of the identified patterns. Reflecting scale-dependent opportunities, a global study is able to support decision-making with insights into the up-scaling of interventions when available funds are limited. In contrast, local investigations encourage an outcome-based validation. This constitutes a crucial step in establishing the credibility of the patterns and hence their suitability for informing extension services and individual decisions. In this respect, working at
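The cluster-based pattern recognition at the heart of the five steps can be illustrated with a minimal k-means grouping of regional indicator vectors. Both the indicators (aridity, poverty) and their values are invented; real pattern studies use many indicators plus the robustness, validation and ranking steps the abstract lists.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group regions (indicator vectors) into k candidate
    vulnerability patterns, represented by the cluster centroids."""
    centroids = [list(p) for p in points[:k]]        # naive initialisation
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                             # assign to nearest centroid
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [                                # recompute centroids
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# hypothetical (aridity, poverty) indicators for six dryland regions
points = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.2),
          (0.8, 0.9), (0.85, 0.8), (0.9, 0.85)]
centroids, clusters = kmeans(points, k=2)
```

Each resulting centroid is a candidate "typical pattern"; the evaluation, validation and ranking steps then decide whether the grouping is robust enough to inform decisions.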

  5. Fluid dynamics and heat transfer methods for the TRAC code

    International Nuclear Information System (INIS)

    Reed, W.H.; Kirchner, W.L.

    1977-01-01

    A computer code called TRAC is being developed for analysis of loss-of-coolant accidents and other transients in light water reactors. This code involves a detailed, multidimensional description of two-phase flow coupled implicitly through appropriate heat transfer coefficients with a simulation of the temperature field in fuel and structural material. Because TRAC utilizes about 1000 fluid mesh cells to describe an LWR system, whereas existing lumped parameter codes typically involve fewer than 100 fluid cells, we have developed new highly implicit difference techniques that yield acceptable computing times on modern computers. Several test problems for which experimental data are available, including blowdown of single pipe and loop configurations with and without heated walls, have been computed with TRAC. Excellent agreement with experimental results has been obtained. (author)
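Why implicit differencing buys acceptable computing times can be shown on the simplest possible model problem. The sketch below is a generic backward-Euler step for 1-D diffusion with fixed walls, solved with the O(n) Thomas algorithm; it is an illustration of the idea, not TRAC's actual two-phase-flow scheme.

```python
def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal linear system in O(n) (Thomas algorithm).
    sub[0] and sup[-1] are ignored."""
    n = len(diag)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):                    # forward elimination
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_step(T, r):
    """One backward-Euler step of dT/dt = d2T/dx2 with T = 0 held at both
    walls; r = dt/dx**2 may far exceed the explicit stability limit of 0.5."""
    n = len(T)
    return thomas_solve([-r] * n, [1 + 2 * r] * n, [-r] * n, list(T))

T_new = implicit_step([0.0, 0.0, 1.0, 0.0, 0.0], r=10.0)  # a huge time step, yet stable
```

An explicit scheme at r = 10 would blow up immediately; the implicit step stays stable, which is what makes ~1000-cell systems tractable where explicit time steps would be prohibitively small.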

  6. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed
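The multiple-container capability described above can be illustrated with a deliberately simplified source-term model: each container contributes nothing before its failure time, then leaches its remaining decay-corrected inventory at a first-order fractional rate. This is an invented toy, not DUST's actual formulation, which also models water flow and transport.

```python
import math

def release_rate(t, containers):
    """Total release rate at time t, summed over containers with distinct
    failure times, leach rates, and a common decay correction."""
    total = 0.0
    for inv0, t_fail, leach, decay in containers:
        if t >= t_fail:
            # inventory decays from t = 0; leaching depletes it after failure
            remaining = inv0 * math.exp(-decay * t) * math.exp(-leach * (t - t_fail))
            total += leach * remaining
    return total

# hypothetical inventory: (initial activity Ci, failure yr, leach 1/yr, decay 1/yr)
containers = [(100.0, 10.0, 0.05, 0.01), (50.0, 30.0, 0.20, 0.01)]
rate_early = release_rate(5.0, containers)   # before any container fails
rate_late = release_rate(35.0, containers)   # both containers leaking
```

Summing piecewise contributions like this is what lets a unified source-term code represent mixed waste streams with different container lifetimes and release properties.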

  7. Evaluating geographic imputation approaches for zip code level data: an application to a study of pediatric diabetes

    Directory of Open Access Journals (Sweden)

    Puett Robin C

    2009-10-01

Full Text Available Abstract Background There is increasing interest in the study of place effects on health, facilitated in part by geographic information systems. Incomplete or missing address information reduces geocoding success. Several geographic imputation methods have been suggested to overcome this limitation. Accuracy evaluation of these methods can be focused at the level of individuals and at higher group-levels (e.g., spatial distribution). Methods We evaluated the accuracy of eight geo-imputation methods for address allocation from ZIP codes to census tracts at the individual and group level. The spatial apportioning approaches underlying the imputation methods included four fixed (deterministic) and four random (stochastic) allocation methods using land area, total population, population under age 20, and race/ethnicity as weighting factors. Data included more than 2,000 geocoded cases of diabetes mellitus among youth aged 0-19 in four U.S. regions. The imputed distribution of cases across tracts was compared to the true distribution using a chi-squared statistic. Results At the individual level, population-weighted (total or under age 20) fixed allocation showed the greatest level of accuracy, with correct census tract assignments averaging 30.01% across all regions, followed by the race/ethnicity-weighted random method (23.83%). The true distribution of cases across census tracts was that 58.2% of tracts exhibited no cases, 26.2% had one case, 9.5% had two cases, and less than 3% had three or more. This distribution was best captured by random allocation methods, with no significant differences (p-value > 0.90). However, significant differences in distributions based on fixed allocation methods were found (p-value Conclusion Fixed imputation methods seemed to yield greatest accuracy at the individual level, suggesting use for studies on area-level environmental exposures. Fixed methods result in artificial clusters in single census tracts. For studies
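The fixed/random distinction evaluated in this study reduces to two allocation rules, sketched below. The tract names and weights are invented; the real methods weight by land area, population, population under age 20, or race/ethnicity.

```python
import random

def fixed_allocation(tracts, weights):
    """Deterministic rule: every case in the ZIP goes to its highest-weight tract."""
    return max(tracts, key=lambda t: weights[t])

def random_allocation(tracts, weights, rng):
    """Stochastic rule: assign each case to a tract with probability
    proportional to its weight."""
    return rng.choices(tracts, weights=[weights[t] for t in tracts])[0]

# hypothetical ZIP code spanning three census tracts, weighted by population < 20
weights = {"tract_a": 6000, "tract_b": 3000, "tract_c": 1000}
tracts = sorted(weights)
rng = random.Random(42)
fixed = [fixed_allocation(tracts, weights) for _ in range(100)]
stochastic = [random_allocation(tracts, weights, rng) for _ in range(100)]
```

The study's finding falls out of the rules themselves: the fixed rule piles every case into one tract (an artificial cluster, but often the individually most likely assignment), while the random rule spreads cases and so better reproduces the group-level distribution.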

  8. Theory of Mind, Socio-Emotional Problem-Solving, Socio-Emotional Regulation in Children with Intellectual Disability and in Typically Developing Children

    Science.gov (United States)

    Baurain, Celine; Nader-Grosbois, Nathalie

    2013-01-01

    This study has examined the link between social information processing (SIP) and socio-emotional regulation (SER) in 45 children with intellectual disability (ID) and 45 typically developing (TD) children, matched on their developmental age. A Coding Grid of SER, focusing on Emotional Expression, Social Behaviour and Behaviours towards Social…

  9. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  10. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y.V.; Zaitsev, S.I.; Tarankov, G.A. [OKB Gidropress (Russian Federation)

    1995-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  11. Verification of Dinamika-5 code on experimental data of water level behaviour in PGV-440 under dynamic conditions

    Energy Technology Data Exchange (ETDEWEB)

    Beljaev, Y V; Zaitsev, S I; Tarankov, G A [OKB Gidropress (Russian Federation)

    1996-12-31

    Comparison of the results of calculational analysis with experimental data on water level behaviour in a horizontal steam generator (PGV-440) under conditions with cessation of feedwater supply is presented in the report. The calculational analysis is performed using the DINAMIKA-5 code; the experimental data were obtained at Kola NPP-4. (orig.). 2 refs.

  12. Origin of an alternative genetic code in the extremely small and GC-rich genome of a bacterial symbiont.

    Directory of Open Access Journals (Sweden)

    John P McCutcheon

    2009-07-01

    Full Text Available The genetic code relates nucleotide sequence to amino acid sequence and is shared across all organisms, with the rare exceptions of lineages in which one or a few codons have acquired novel assignments. Recoding of UGA from stop to tryptophan has evolved independently in certain reduced bacterial genomes, including those of the mycoplasmas and some mitochondria. Small genomes typically exhibit low guanine plus cytosine (GC) content, and this bias in base composition has been proposed to drive UGA Stop to Tryptophan (Stop-->Trp) recoding. Using a combination of genome sequencing and high-throughput proteomics, we show that an alpha-Proteobacterial symbiont of cicadas has the unprecedented combination of an extremely small genome (144 kb), a GC-biased base composition (58.4%), and a coding reassignment of UGA Stop-->Trp. Although it is not clear why this tiny genome lacks the low GC content typical of other small bacterial genomes, these observations support a role of genome reduction rather than base composition as a driver of codon reassignment.
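The Stop-->Trp reassignment described above amounts to changing one entry in the codon table. A minimal sketch, with the table abbreviated to the codons used in a toy sequence:

```python
# Abbreviated standard codon table (M = Met, F = Phe, W = Trp, * = stop)
standard = {"AUG": "M", "UUU": "F", "UGG": "W", "UGA": "*"}
recoded = dict(standard, UGA="W")    # symbiont: UGA now codes tryptophan

def translate(mrna, table):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table[mrna[i:i + 3]]
        if aa == "*":                # stop codon terminates translation
            break
        protein.append(aa)
    return "".join(protein)

seq = "AUGUUUUGAUGG"
print(translate(seq, standard))  # 'MF'   (UGA read as stop)
print(translate(seq, recoded))   # 'MFWW' (UGA read as tryptophan)
```

Under the standard table translation halts at UGA; under the recoded table the same transcript yields a longer protein, which is what the proteomics evidence detects.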

  13. Communicating pictures a course in image and video coding

    CERN Document Server

    Bull, David R

    2014-01-01

    Communicating Pictures starts with a unique historical perspective of the role of images in communications and then builds on this to explain the applications and requirements of a modern video coding system. It draws on the author's extensive academic and professional experience of signal processing and video coding to deliver a text that is algorithmically rigorous, yet accessible, relevant to modern standards, and practical. It offers a thorough grounding in visual perception, and demonstrates how modern image and video compression methods can be designed in order to meet the rate-quality performance levels demanded by today's applications, networks and users. With this book you will learn: Practical issues when implementing a codec, such as picture boundary extension and complexity reduction, with particular emphasis on efficient algorithms for transforms, motion estimators and error resilience Conflicts between conventional video compression, based on variable length coding and spatiotemporal prediction,...

  14. A class of Sudan-decodable codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2000-01-01

    In this article, Sudan's algorithm is modified into an efficient method to list-decode a class of codes which can be seen as a generalization of Reed-Solomon codes. The algorithm is specialized into a very efficient method for unique decoding. The code construction can be generalized based on algebraic-geometry codes, and the decoding algorithms are generalized accordingly. Comparisons with Reed-Solomon and Hermitian codes are made.
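As background for the record above: a Reed-Solomon codeword is the evaluation of a message polynomial at distinct points, and in the error-free case unique decoding reduces to interpolation. A toy sketch over the rationals (real RS codes work over finite fields, and Sudan's algorithm handles errors via list decoding):

```python
from fractions import Fraction

# Message polynomial 3 + 5x (k = 2 symbols), evaluated at n = 4 points
msg = [Fraction(3), Fraction(5)]
points = [Fraction(i) for i in range(4)]
codeword = [msg[0] + msg[1] * x for x in points]

def interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial of (xs, ys) at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Error-free "decoding": the interpolant reproduces the message polynomial
print(interpolate(points, codeword, Fraction(0)))  # 3 (the constant term)
```

Because n > k, the redundant evaluations are what a list decoder exploits when some codeword symbols are corrupted.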

  15. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  16. A distributed code for colour in natural scenes derived from centre-surround filtered cone signals

    Directory of Open Access Journals (Sweden)

    Christian Johannes Kellner

    2013-09-01

    Full Text Available In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for colour. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in colour space. Here we investigated how the nonlinear spatiochromatic filtering in the retina influences the encoding of colour signals. Cone signals were derived from hyperspectral images of natural scenes and pre-processed by centre-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in colour space, similar to the population code of colour in the early visual cortex. Our results indicate that spatiochromatic processing in the retina leads to a more distributed and more efficient code for natural scenes.
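The pre-processing pipeline described above (centre-surround filtering followed by rectification into ON and OFF channels) can be sketched in one dimension. The "cone signal" and the kernel widths below are illustrative only, not the hyperspectral data of the study:

```python
import numpy as np

# Toy 1-D cone signal: a bright patch on a dark background
signal = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0])

# Centre-surround filter: narrow centre minus broader surround average
centre = np.array([1.0])
surround = np.ones(3) / 3.0
cs = np.convolve(signal, centre, "same") - np.convolve(signal, surround, "same")

# Rectification splits the signed response into parallel channels
on_channel = np.maximum(cs, 0.0)    # responds where centre exceeds surround
off_channel = np.maximum(-cs, 0.0)  # responds where surround exceeds centre
```

Independent Component Analysis would then be run on many such ON/OFF channel patches; the nonlinearity of the rectification is what lets the learned chromatic selectivities spread beyond two opponent axes.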

  17. Analysis of the OECD main steam line break benchmark using ANC-K/MIDAC code

    International Nuclear Information System (INIS)

    Aoki, Shigeaki; Tahara, Yoshihisa; Suemura, Takayuki; Ogawa, Junto

    2004-01-01

    A three-dimensional (3D) neutronics and thermal-hydraulics (T/H) coupling code, ANC-K/MIDAC, has been developed. It is the combination of the 3D nodal kinetics code ANC-K and the 3D drift-flux thermal-hydraulics code MIDAC. In order to verify the adequacy of this code, we have performed several international benchmark problems. In this paper, we show the calculation results for the ''OECD Main Steam Line Break Benchmark (MSLB benchmark)'', which poses a typical local power peaking problem, and we calculated the return-to-power scenario of the Phase II problem. The comparison of the results shows very good agreement of important core parameters between ANC-K/MIDAC and other participant codes. (author)

  18. PTL: A Propositional Typicality Logic

    CSIR Research Space (South Africa)

    Booth, R

    2012-09-01

    Full Text Available Consequence relations first studied by Lehmann and colleagues in the 90s play a central role in nonmonotonic reasoning [13, 14]. This has been the case due to at least three main reasons. Firstly, they are based on semantic constructions that are elegant... The semantics of (propositional) rational consequence is in terms of ranked models. These are partially ordered structures in which the ordering is modular. Definition 1. Given a set S...

  19. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally
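The predictor-corrector depletion algorithm named in capability (a) above can be illustrated with a single nuclide. The composition-dependent rate function below is a toy stand-in for the full MCNP5 transport solve, and all numbers are invented:

```python
import math

def deplete(n, rate, dt):
    """Deplete one nuclide over a step: dN/dt = -rate * N (analytic solution)."""
    return n * math.exp(-rate * dt)

def flux_rate(n):
    """Toy stand-in for a transport solve: the rate depends on composition."""
    return 0.1 / (1.0 + 0.5 * n)

def predictor_corrector_step(n0, dt):
    rate0 = flux_rate(n0)             # "transport solve" at step start
    n_pred = deplete(n0, rate0, dt)   # predictor: deplete with start-of-step rate
    rate1 = flux_rate(n_pred)         # "transport solve" at predicted end of step
    n_corr = deplete(n0, rate1, dt)   # corrector: re-deplete with end-of-step rate
    return 0.5 * (n_pred + n_corr)    # average the two estimates
```

Averaging the predictor and corrector estimates captures the change of the flux spectrum over the burnup step, which a single beginning-of-step solve would miss; this is why the scheme allows larger burnup steps.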

  20. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

    Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations

  1. Accuracy assessment of a new Monte Carlo based burnup computer code

    International Nuclear Information System (INIS)

    El Bakkari, B.; ElBardouni, T.; Nacir, B.; ElYounoussi, C.; Boulaich, Y.; Meroun, O.; Zoubair, M.; Chakir, E.

    2012-01-01

    Highlights: ► A new burnup code called BUCAL1 was developed. ► BUCAL1 uses the MCNP tallies directly in the calculation of the isotopic inventories. ► Validation of BUCAL1 was done by code-to-code comparison using the VVER-1000 LEU benchmark assembly. ► Differences from the benchmark values were found to be ±600 pcm for k∞ and ±6% for the isotopic compositions. ► The effect on reactivity due to the burnup of Gd isotopes is well reproduced by BUCAL1. - Abstract: This study aims to test the suitability and accuracy of a new home-made Monte Carlo burnup code, called BUCAL1, by investigating and predicting the neutronic behavior of a “VVER-1000 LEU Assembly Computational Benchmark” at lattice level. BUCAL1 uses MCNP tally information directly in the computation; this approach allows performing straightforward and accurate calculations without having to use the calculated group fluxes to perform transmutation analysis in a separate code. The ENDF/B-VII evaluated nuclear data library was used in these calculations. Processing of the data library is performed using recent updates of the NJOY99 system. Code-to-code comparisons with the reported OECD/NEA results are presented and analyzed.

  2. BOA, Beam Optics Analyzer A Particle-In-Cell Code

    International Nuclear Information System (INIS)

    Bui, Thuc

    2007-01-01

    The program was tasked with implementing time-dependent analysis of charged particles in an existing finite element code with adaptive meshing, called Beam Optics Analyzer (BOA). BOA was initially funded by a DOE Phase II program to use the finite element method with adaptive meshing to track particles in unstructured meshes. It uses modern programming techniques and state-of-the-art data structures, so that new methods, features and capabilities are easily added and maintained. This Phase II program was funded to implement plasma simulations in BOA and extend its capabilities to model thermal electrons, secondary emissions, and self magnetic fields, and to implement more comprehensive post-processing and a feature-rich GUI. The program was successful in implementing thermal electrons, secondary emissions, and self magnetic field calculations. The BOA GUI was also upgraded significantly, and CCR is receiving interest from the microwave tube and semiconductor equipment industry in the code. Implementation of PIC analysis was partially successful. Computational resource requirements for modeling more than 2000 particles begin to exceed the capability of most readily available computers, while modern plasma analysis typically requires modeling of approximately 2 million particles or more. The problem is that tracking many particles in an unstructured mesh that is adapting becomes inefficient; in particular, memory requirements become excessive. This probably makes particle tracking in unstructured meshes currently infeasible with commonly available computer resources. Consequently, Calabazas Creek Research, Inc. is exploring hybrid codes where the electromagnetic fields are solved on the unstructured, adaptive mesh while particles are tracked on a fixed mesh. Efficient interpolation routines should be able to transfer information between nodes of the two meshes. If successfully developed, this could provide high accuracy and reasonable computational efficiency.
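The field-to-particle transfer in the hybrid approach described above is, at its core, an interpolation ("gather") step. A minimal 1-D sketch on a fixed uniform mesh, with toy mesh, field, and particle positions:

```python
import numpy as np

# Fixed, uniform 1-D mesh with the field known at the nodes
mesh = np.linspace(0.0, 1.0, 11)      # node coordinates
field = mesh ** 2                     # toy field values at the nodes

# Particle positions lying between mesh nodes
particles = np.array([0.05, 0.42, 0.91])

# Gather step: linearly interpolate the nodal field to each particle
forces = np.interp(particles, mesh, field)
```

In a full hybrid code the field would first be transferred from the unstructured adaptive mesh onto this fixed mesh, and a matching "scatter" step would deposit particle charge and current back onto mesh nodes.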

  3. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  4. EPR-technical codes - a common basis for the EPR

    International Nuclear Information System (INIS)

    Zaiss, W.; Appell, B.

    1997-01-01

    The design and construction of nuclear power plants implies a full set of codes and standards to define the construction rules of components and equipment. Such rules already exist and are currently implemented in France and Germany (mainly the RCCs and the KTA safety standards, respectively). In the frame of the EPR project, the common objective requires an essential industrial work programme between engineers from both countries to elaborate a common set of codes and regulations. These new industrial rules are called the ETCs (EPR Technical Codes). In the regulatory hierarchy, the ETCs stand, in France, on the same level as the basic safety rules (RFS) and the design and construction rules (RCC) and, in Germany, alongside the RSK guidelines and KTA safety standards. A set of six ETCs will be elaborated to cover: safety and process, mechanical components, electrical equipment, instrumentation and control, civil works, and fire protection. (orig.)

  5. Evaluation of Computational Fluids Dynamics (CFD) code Open FOAM in the study of the pressurized thermal stress of PWR reactors. Comparison with the commercial code Ansys-CFX

    International Nuclear Information System (INIS)

    Martinez, M.; Barrachina, T.; Miro, R.; Verdu Martin, G.; Chiva, S.

    2012-01-01

    This work proposes to evaluate the potential of the OpenFOAM code for the simulation of typical fluid flows in PWR reactors, in particular for the study of pressurized thermal stress. Test T1-1 of the OECD ROSA project has been simulated, with the objective of evaluating the performance of the OpenFOAM code and the turbulence models it implements to capture the effect of the thrust forces in the case study.

  6. SKYSHIN: A computer code for calculating radiation dose over a barrier

    International Nuclear Information System (INIS)

    Atwood, C.L.; Boland, J.R.; Dickman, P.T.

    1986-11-01

    SKYSHIN is a computer code for calculating the radiation dose (mrem) when there is a barrier between the point source and the receptor. The two geometrical configurations considered are: the source and receptor separated by a rectangular wall, and the source at the bottom of a cylindrical hole in the ground. Each gamma ray traveling over the barrier is assumed to be scattered at a single point. The dose to a receptor from such paths is numerically integrated for the total dose, with symmetry used to reduce the triple integral to a double integral. The buildup factor used along a straight line through air is based on published data, and extrapolated in a stable way to low energy levels. This buildup factor was validated by comparing calculated and experimental line-of-sight doses. The entire code shows good agreement with limited field data. The code runs on a CDC or a VAX computer, and could be modified easily for others
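The single-scatter geometry described above can be caricatured with a crude numerical sum: every path from source to receptor bends once at a point above the wall, and each path contributes inverse-square attenuation on both legs. This is only a geometric sketch; the constants, the sampling of scatter points, and the omitted buildup and scatter physics are all invented stand-ins for SKYSHIN's actual treatment:

```python
import math

def dose_over_wall(src_dist, rec_dist, wall_height, n=50):
    """Toy single-scatter sum: src_dist and rec_dist are horizontal
    distances (m) from source and receptor to the wall; scatter points
    are sampled on a vertical line above the wall."""
    total = 0.0
    for i in range(n):
        z = wall_height + 0.5 + i * 0.2        # scatter point height (m)
        leg1 = math.hypot(src_dist, z)          # source -> scatter point
        leg2 = math.hypot(rec_dist, z)          # scatter point -> receptor
        total += 1.0 / (leg1 ** 2 * leg2 ** 2)  # inverse-square on each leg
    return total
```

Even this caricature reproduces the qualitative behaviour one expects: raising the wall lengthens every two-leg path and lowers the summed dose.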

  7. Code Cactus; Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The code computes the flowrate distribution among parallel channels, coupled or not by conduction across the plates, for imposed pressure-drop or flowrate conditions that may or may not vary with time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  8. Code-Switching: L1-Coded Mediation in a Kindergarten Foreign Language Classroom

    Science.gov (United States)

    Lin, Zheng

    2012-01-01

    This paper is based on a qualitative inquiry that investigated the role of teachers' mediation in three different modes of coding in a kindergarten foreign language classroom in China (i.e. L2-coded intralinguistic mediation, L1-coded cross-lingual mediation, and L2-and-L1-mixed mediation). Through an exploratory examination of the varying effects…

  9. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    Science.gov (United States)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over almost a 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32, in the 1980s, and 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OSX processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in source code and associated comments. A major concern while improving and maintaining a routinely used legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion.
For the past thirty-plus years

  10. Recommendations for codes and standards to be used for design and fabrication of high level waste canister

    International Nuclear Information System (INIS)

    Bermingham, A.J.; Booker, R.J.; Booth, H.R.; Ruehle, W.G.; Shevekov, S.; Silvester, A.G.; Tagart, S.W.; Thomas, J.A.; West, R.G.

    1978-01-01

    This study identifies codes, standards, and regulatory requirements for developing design criteria for high-level waste (HLW) canisters for commercial operation. It has been determined that the canister should be designed as a pressure vessel without provision for any overpressure-protection devices. It is recommended that the HLW canister be designed and fabricated to the requirements of the ASME Section III Code, Division 1 rules, for Code Class 3 components. Other applicable industry and regulatory guides and standards are identified in this report. Requirements for the Design Specification are found in the ASME Section III Code. It is recommended that design verification be conducted principally with prototype testing that encompasses normal and accident service conditions during all phases of the canister life. The adequacy of existing quality assurance and licensing standards for the canister was investigated. One of the recommendations derived from this study is a requirement that the canister be N-stamped. In addition, acceptance standards for the HLW should be established and the waste qualified to those standards before the canister is sealed. A preliminary investigation of the use of an overpack for the canister has been made, and it is concluded that the use of an overpack, as an integral part of overall canister design, is undesirable, both from a design and an economics standpoint. However, use of shipping cask liners and overpack-type containers at the Federal repository may make the canister and HLW management safer and more cost effective. There are several possible concepts for canister closure design. These concepts can be adapted to the canister with or without an overpack. A remote seal-weld closure is considered to be one of the most suitable closure methods; however, mechanical seals should also be investigated

  11. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which comprises mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in steady state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred at a nuclear power plant with a BWR is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  12. Development of a general coupling interface for the fuel performance code TRANSURANUS – Tested with the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Holt, L.; Rohde, U.; Seidl, M.; Schubert, A.; Van Uffelen, P.; Macián-Juan, R.

    2015-01-01

    Highlights: • A general coupling interface was developed for couplings of the TRANSURANUS code. • With this new tool, simplified fuel behavior models in other codes can be replaced. • Applicable e.g. for several reactor types and from normal operation up to DBA. • The general coupling interface was applied to the reactor dynamics code DYN3D. • The new coupled code system DYN3D–TRANSURANUS was successfully tested for RIA. - Abstract: A general interface is presented for coupling the TRANSURANUS fuel performance code with thermal hydraulics system, sub-channel thermal hydraulics, computational fluid dynamics (CFD) or reactor dynamics codes. As a first application, the reactor dynamics code DYN3D was coupled at assembly level in order to describe the fuel behavior in more detail. In the coupling, DYN3D provides process time, time-dependent rod power and thermal hydraulics conditions to TRANSURANUS, which in the two-way coupling approach transfers parameters like fuel temperature and cladding temperature back to DYN3D. Results of the coupled code system are presented for a reactivity transient scenario initiated by control rod ejection. More precisely, the two-way coupling approach systematically calculates higher maximum values for the node fuel enthalpy. These differences can be explained by the greater detail in fuel behavior modeling. The numerical performance of DYN3D–TRANSURANUS proved to be fast and stable. The coupled code system can therefore improve the assessment of safety criteria, at a reasonable computational cost
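The two-way data exchange described above can be sketched as an explicit coupling loop. Both "codes" below are trivial stand-ins with invented feedback coefficients; only the direction of the data flow (power and coolant conditions one way, fuel temperature back) mirrors the text:

```python
def dyn3d_step(t, fuel_temp):
    """Toy neutronics: power falls as fuel heats up (Doppler-like feedback)."""
    return {"time": t,
            "power": 100.0 - 0.01 * (fuel_temp - 900.0),
            "coolant_temp": 560.0}

def transuranus_step(conditions):
    """Toy fuel performance model: fuel temperature follows rod power."""
    return 900.0 + 4.0 * (conditions["power"] - 100.0)

fuel_temp = 800.0                              # deliberately off-equilibrium
for step in range(5):
    conditions = dyn3d_step(step, fuel_temp)   # DYN3D -> TRANSURANUS
    fuel_temp = transuranus_step(conditions)   # TRANSURANUS -> DYN3D
```

With the negative feedback the exchange settles toward a consistent operating point; a one-way coupling would omit the second transfer and miss this feedback entirely, which is why the two-way variant predicts different fuel enthalpies.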

  13. Some aspects of grading Java code submissions in MOOCs

    Directory of Open Access Journals (Sweden)

    Sándor Király

    2017-07-01

    Full Text Available Recently, massive open online courses (MOOCs) have been offering a new online approach in the field of distance learning and online education. A typical MOOC course consists of video lectures, reading material and easily accessible tests for students. For a computer programming course, it is important to provide interactive, dynamic, online coding exercises and more complex programming assignments for learners, and it is expedient for students to receive prompt feedback on their coding submissions. Although learning management systems typically include an automated programme evaluation subsystem capable of assessing submitted source files, many MOOC platforms lack such a grader for students' assignments, with the result that course staff would be required to assess thousands of programmes submitted by the participants of the course without the benefit of an automatic grader. This paper presents a new concept for grading students' programming submissions, together with improved techniques based on the Java unit-testing framework that enable automatic grading of code chunks. Examples are also given, such as the creation of unique exercises by dynamically generating the parameters of the assignment in a MOOC programming course, combined with a form of coding-style recognition to teach coding standards.
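
    The grading idea above (per-student exercise parameters generated dynamically, then checked by unit tests) can be illustrated in miniature. The paper's grader builds on the Java unit-testing framework; this Python/unittest analogue, with invented exercise and scoring logic, only sketches the concept.

    ```python
    # Illustrative sketch of unit-test-based automatic grading with
    # dynamically generated, per-student assignment parameters.
    import random
    import unittest

    def make_assignment(seed):
        """Generate unique exercise parameters from a per-student seed."""
        rng = random.Random(seed)
        return [rng.randint(1, 100) for _ in range(10)]

    # A (correct) student submission: return the sum of the even numbers.
    def student_solution(numbers):
        return sum(n for n in numbers if n % 2 == 0)

    def grade(submission, seed):
        """Run generated test cases against the submission; return a score."""
        numbers = make_assignment(seed)
        expected = sum(n for n in numbers if n % 2 == 0)

        class Grader(unittest.TestCase):
            def test_result(self):
                self.assertEqual(submission(numbers), expected)

        suite = unittest.defaultTestLoader.loadTestsFromTestCase(Grader)
        result = unittest.TestResult()
        suite.run(result)
        return 1.0 if result.wasSuccessful() else 0.0

    score = grade(student_solution, seed=42)
    print("score:", score)
    ```

    A production grader would of course sandbox the submitted code and report per-test feedback rather than a single score.
    
    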

  14. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by the multiple-coding hypothesis. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  15. Typical load shapes for six categories of Swedish commercial buildings

    Energy Technology Data Exchange (ETDEWEB)

    Noren, C.

    1997-01-01

    In co-operation with several Swedish electricity suppliers, typical load shapes have been developed for six categories of commercial buildings located in the south of Sweden. The categories included in the study are: hotels, warehouses/grocery stores, schools with no kitchen, schools with kitchen, office buildings and health buildings. Load shapes are developed for different mean daily outdoor temperatures and for different day types, normally standard weekdays and standard weekends. The load shapes are presented as non-dimensional normalized 1-hour loads: all measured loads for an object are divided by the object's mean load during the measuring period, and typical load shapes are then developed for each category of buildings. This keeps errors lower than when loads are expressed in W/m² terms. Typical daytime (9 a.m. - 5 p.m.) standard deviations are 7-10% of the mean values for standard weekdays, but during very cold or warm weather conditions single objects can deviate from the typical load shape. On weekends, errors are higher and, owing to very different activity levels in the buildings, it is difficult to develop weekend load shapes with good accuracy. The method presented is easy to apply in similar studies and no building simulation programs are needed. If more load data are available, a good way to lower the errors is to ensure that every category contains only objects with the same activity level, both on weekdays and weekends. To make the load shapes easier to use, Excel load shape workbooks have been developed, in which it is even possible to compare typical load shapes with measured data. 23 refs, 53 figs, 20 tabs
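
    The normalization step described above is simple to sketch: each hourly load is divided by the building's mean load over the measuring period, giving a dimensionless shape with mean 1.0. The hourly values below are invented for illustration, not measured Swedish data.

    ```python
    # Sketch of non-dimensional load-shape normalization: hourly loads are
    # divided by the object's mean load over the measuring period.
    # The 24 hourly values (kW) below are invented placeholders.

    hourly_load_kw = [40, 38, 37, 36, 36, 40, 55, 70, 85, 90, 92, 93,
                      91, 90, 89, 88, 85, 75, 65, 55, 50, 46, 43, 41]

    mean_load = sum(hourly_load_kw) / len(hourly_load_kw)
    load_shape = [load / mean_load for load in hourly_load_kw]

    # The normalized shape averages 1.0 by construction; category-level
    # "typical" shapes would then be averages of many such profiles.
    print(f"mean of shape: {sum(load_shape) / len(load_shape):.3f}")
    print(f"peak (non-dimensional): {max(load_shape):.2f}")
    ```
    
    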

  16. Reduced-Rank Chip-Level MMSE Equalization for the 3G CDMA Forward Link with Code-Multiplexed Pilot

    Directory of Open Access Journals (Sweden)

    Goldstein J Scott

    2002-01-01

    Full Text Available This paper deals with synchronous direct-sequence code-division multiple access (CDMA) transmission using orthogonal channel codes in frequency selective multipath, motivated by the forward link in 3G CDMA systems. The chip-level minimum mean square error (MMSE) estimate of the (multiuser) synchronous sum signal transmitted by the base, followed by a correlate and sum, has been shown to perform very well in saturated systems compared to a Rake receiver. In this paper, we present reduced-rank, chip-level MMSE estimation based on the multistage nested Wiener filter (MSNWF). We show that, for the case of a known channel, only a small number of stages of the MSNWF is needed to achieve near full-rank MSE performance over a practical signal-to-noise ratio (SNR) range. This holds true even for an edge-of-cell scenario, where two base stations are contributing near-equal-power signals, as well as for the single base station case. We then utilize the code-multiplexed pilot channel to train the MSNWF coefficients and show that an adaptive MSNWF operating in a very low rank subspace performs slightly better than full-rank recursive least squares (RLS) and significantly better than least mean squares (LMS). An important advantage of the MSNWF is that it can be implemented in a lattice structure, which involves significantly less computation than RLS. We also present structured MMSE equalizers that exploit the estimate of the multipath arrival times and the underlying channel structure to project the data vector onto a much lower dimensional subspace. Specifically, due to the sparseness of high-speed CDMA multipath channels, the channel vector lies in the subspace spanned by a small number of columns of the pulse shaping filter convolution matrix. We demonstrate that the performance of these structured low-rank equalizers is much superior to unstructured equalizers in terms of convergence speed and error rates.
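
    The reduced-rank behavior described above can be illustrated numerically: the rank-D MSNWF is known to be equivalent to the MMSE filter restricted to the D-dimensional Krylov subspace spanned by {p, Rp, ..., R^(D-1)p}, where R is the observation covariance and p the cross-correlation vector. The toy example below uses a synthetic random R and p (not a CDMA simulation) to show how the restricted-subspace MSE approaches the full-rank Wiener MSE as the rank grows.

    ```python
    # Numerical sketch of reduced-rank MMSE filtering in a Krylov subspace,
    # the subspace the MSNWF operates in. Data are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 32                                   # filter length
    H = rng.standard_normal((N, N)) / np.sqrt(N)
    R = H @ H.T + 0.01 * np.eye(N)           # SPD observation covariance
    p = (R @ rng.standard_normal(N)) * 0.1   # cross-correlation vector

    w_full = np.linalg.solve(R, p)           # full-rank Wiener solution

    def reduced_rank_mmse(R, p, D):
        """MMSE weights restricted to the rank-D Krylov subspace."""
        basis, v = [], p.copy()
        for _ in range(D):                   # Gram-Schmidt on {p, Rp, ...}
            for b in basis:
                v = v - (b @ v) * b
            basis.append(v / np.linalg.norm(v))
            v = R @ basis[-1]
        T = np.column_stack(basis)           # N x D subspace basis
        # solve the D-dimensional Wiener problem, lift back to N dimensions
        return T @ np.linalg.solve(T.T @ R @ T, T.T @ p)

    for D in (2, 4, 8):
        w = reduced_rank_mmse(R, p, D)
        rel_err = np.linalg.norm(w - w_full) / np.linalg.norm(w_full)
        print(f"rank {D}: relative distance to full-rank solution {rel_err:.3e}")
    ```

    Because the Krylov subspaces are nested, the MSE cost can only decrease as stages are added, which mirrors the paper's observation that a few stages suffice.
    
    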

  17. Spectral amplitude coding OCDMA using AND subtraction technique.

    Science.gov (United States)

    Hasoon, Feras N; Aljunid, S A; Samad, M D A; Abdullah, Mohamad Khazani; Shaari, Sahbudin

    2008-03-20

    An optical decoding technique is proposed for spectral-amplitude-coding optical code division multiple access, namely, the AND subtraction technique. The theory is elaborated, and experiments were performed comparing a double-weight code against an existing code, Hadamard. We show that the AND subtraction technique gives better bit-error-rate performance than the conventional complementary subtraction technique with respect to received power level.

  18. Structural reliability codes for probabilistic design

    DEFF Research Database (Denmark)

    Ditlevsen, Ove Dalager

    1997-01-01

    probabilistic code format has not only strong influence on the formal reliability measure, but also on the formal cost of failure to be associated if a design made to the target reliability level is considered to be optimal. In fact, the formal cost of failure can differ by several orders of magnitude for two...... different, but by and large equally justifiable probabilistic code formats. Thus, the consequence is that a code format based on decision theoretical concepts and formulated as an extension of a probabilistic code format must specify formal values to be used as costs of failure. A principle of prudence...... is suggested for guiding the choice of the reference probabilistic code format for constant reliability. In the author's opinion there is an urgent need for establishing a standard probabilistic reliability code. This paper presents some considerations that may be debatable, but nevertheless point...

  19. Independent assessment of the TRAC-BD1/MOD1 computer code at the Idaho National Engineering Laboratory

    International Nuclear Information System (INIS)

    Wilson, G.E.; Charboneau, B.L.; Dallman, R.J.; Kullberg, C.M.; Wagner, K.C.; Wheatley, P.D.

    1984-01-01

    Under the auspices of the United States Nuclear Regulatory Commission, the Commission's primary boiling water reactor safety analysis code (TRAC-BWR) is being assessed against simulations of a wide range of experimental data. The FY-1984 assessment activities were associated with the latest version (TRAC-BD1/MOD1) of this code. Typical results of the assessment studies are given, together with the conclusions formulated from them. These relate to the overall applicability of the current code to safety analysis and to future work that would further enhance the code's quality and ease of use

  20. POPFOOD - a computer code for calculating ingestion collective doses from continuous atmospheric releases

    International Nuclear Information System (INIS)

    Hotson, J.; Stacey, A.; Nair, S.

    1980-07-01

    The basic methodology incorporated in the POPFOOD computer code is described. The code may be used to calculate equilibrium collective dose rates associated with continuous atmospheric releases and arising from consumption of a broad range of food products. The standard data libraries associated with the code are also described. These include a data library, based on the 1972 agricultural census, describing the spatial distribution of production in England, Wales and Scotland of the following food products: milk; beef and veal; pork, bacon and ham; poultry meat; eggs; mutton and lamb; root vegetables; green vegetables; fruit; cereals. Illustrative collective dose calculations were made for the case of 1 Ci per year emissions of ¹³¹I, tritium and ¹⁴C from a typical rural UK site. The calculations indicate that the ingestion pathway results in a greater collective dose than the inhalation pathway, with the contributions from consumption of root vegetables, green vegetables and cereals being of comparable significance to that from liquid milk consumption in all three cases. (author)
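
    The structure of such a collective-dose calculation can be sketched schematically: a long-term air dilution factor at each production location is combined with food production, an air-to-food transfer factor, and a dose-per-unit-intake coefficient, then summed over locations. Every number and model below is an invented placeholder, not POPFOOD data or methodology.

    ```python
    # Schematic POPFOOD-style collective-dose estimate for one food product
    # (milk) from a continuous release. All factors are illustrative only.

    # (distance km, milk production in 1e6 L/yr) for rings around the site
    production_rings = [(5, 2.0), (15, 8.0), (40, 25.0), (100, 60.0)]

    release_bq_per_s = 1.0e9        # continuous release rate
    transfer = 1.0e-9               # Bq/L in milk per (Bq s/m^3) in air
    dose_coeff_sv_per_bq = 2.2e-8   # ingestion dose coefficient (placeholder)

    def air_dilution(distance_km):
        """Toy long-term dilution factor (s/m^3), falling off with distance."""
        return 1.0e-6 / distance_km**1.5

    collective_dose_sv = 0.0
    for distance_km, milk_million_l_per_yr in production_rings:
        conc_milk = release_bq_per_s * air_dilution(distance_km) * transfer
        intake_bq = conc_milk * milk_million_l_per_yr * 1.0e6  # Bq/yr ingested
        collective_dose_sv += intake_bq * dose_coeff_sv_per_bq

    print(f"collective dose: {collective_dose_sv:.3e} man-Sv per year")
    ```

    The real code repeats this sum over many food products and grid squares, using census-based production maps rather than rings.
    
    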

  1. An Adaptation of the HELIOS/MASTER Code System to the Analysis of VHTR Cores

    International Nuclear Information System (INIS)

    Noh, Jae Man; Lee, Hyun Chul; Kim, Kang Seog; Kim, Yong Hee

    2006-01-01

    KAERI is developing a new computer code system for the analysis of VHTR cores, based on the existing HELIOS/MASTER code system that was originally developed for LWR core analysis. In VHTR reactor physics there are several unique neutronic characteristics that cannot be handled easily by the conventional code systems applied to LWR core analysis. Typical examples are the double-heterogeneity problem due to the particulate fuels, the effects of a spectrum shift and thermal up-scattering due to the graphite moderator, and a strong fuel/reflector interaction. In order to facilitate the treatment of such characteristics, we developed several methodologies for the HELIOS/MASTER code system and tested their applicability to VHTR core analysis

  2. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost-effectively reducing energy expenditures. Using a Cox proportional hazards model, I test whether relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence that high government capacity states are 60 percent more likely than low-capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low-cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Model the adoption of commercial building energy codes from 1977–2006 in the US. ► Little political influence over historical building energy code adoption. ► High capacity states are over 60 percent more likely than low capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
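
    The Cox proportional-hazards idea used above can be illustrated on a toy survival dataset: each "state" has a time until code adoption (possibly censored) and a binary high-capacity covariate, and the coefficient is chosen to maximize the partial likelihood. The data and the crude grid-search fit below are invented stand-ins, not the paper's model or data.

    ```python
    # Toy Cox proportional-hazards fit via the log partial likelihood.
    # Observations: (time to adoption, adopted? 1/0, high capacity? 1/0).
    import math

    data = [(3, 1, 1), (5, 1, 1), (6, 1, 1), (8, 1, 0),
            (10, 1, 0), (12, 1, 0), (15, 0, 1), (20, 0, 0)]

    def log_partial_likelihood(beta):
        ll = 0.0
        for t_i, event, x_i in data:
            if not event:          # censored observations enter risk sets only
                continue
            risk_set = [x for t, _, x in data if t >= t_i]
            ll += beta * x_i - math.log(sum(math.exp(beta * x)
                                            for x in risk_set))
        return ll

    # crude one-dimensional maximization by grid search over beta
    beta_hat = max((b / 100 for b in range(-300, 301)),
                   key=log_partial_likelihood)
    print(f"beta = {beta_hat:.2f}, hazard ratio = {math.exp(beta_hat):.2f}")
    ```

    With high-capacity states adopting earlier in this toy data, the fitted hazard ratio exceeds 1, the same direction as the paper's finding.
    
    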

  3. Computer codes for the calculation of vibrations in machines and structures

    International Nuclear Information System (INIS)

    1989-01-01

    After an introductory paper on the typical requirements to be met by vibration calculations, the first two sections of the conference papers present universal finite-element codes as well as specific codes tailored to individual problems. For the calculation of dynamic processes, the method of multi-body systems, which accounts for rigid bodies or substructures together with their linking and joining elements, is now increasingly applied in addition to finite elements; this method, too, is explained with reference to universal computer codes and to special versions. In mechanical engineering, rotary vibrations are a major problem, and under this topic the conference papers deal exclusively with codes that also take into account special effects such as electromechanical coupling, non-linearities in clutches, etc. (orig./HP) [de

  4. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    Science.gov (United States)

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  5. RIA Fuel Codes Benchmark - Volume 1

    International Nuclear Information System (INIS)

    Marchand, Olivier; Georgenthum, Vincent; Petit, Marc; Udagawa, Yutaka; Nagase, Fumihisa; Sugiyama, Tomoyuki; Arffman, Asko; Cherubini, Marco; Dostal, Martin; Klouzal, Jan; Geelhood, Kenneth; Gorzel, Andreas; Holt, Lars; Jernkvist, Lars Olof; Khvostov, Grigori; Maertens, Dietmar; Spykman, Gerold; Nakajima, Tetsuo; Nechaeva, Olga; Panka, Istvan; Rey Gayo, Jose M.; Sagrado Garcia, Inmaculada C.; Shin, An-Dong; Sonnenburg, Heinz Guenther; Umidova, Zeynab; Zhang, Jinzhao; Voglewede, John

    2013-01-01

    Reactivity-initiated accident (RIA) fuel rod codes have been developed for a significant period of time and they all have shown their ability to reproduce some experimental results with a certain degree of adequacy. However, they sometimes rely on different specific modelling assumptions the influence of which on the final results of the calculations is difficult to evaluate. The NEA Working Group on Fuel Safety (WGFS) is tasked with advancing the understanding of fuel safety issues by assessing the technical basis for current safety criteria and their applicability to high burnup and to new fuel designs and materials. The group aims at facilitating international convergence in this area, including the review of experimental approaches as well as the interpretation and use of experimental data relevant for safety. As a contribution to this task, WGFS conducted a RIA code benchmark based on RIA tests performed in the Nuclear Safety Research Reactor in Tokai, Japan and tests performed or planned in CABRI reactor in Cadarache, France. Emphasis was on assessment of different modelling options for RIA fuel rod codes in terms of reproducing experimental results as well as extrapolating to typical reactor conditions. This report provides a summary of the results of this task. (authors)

  6. Evaluating the benefits of commercial building energy codes and improving federal incentives for code adoption.

    Science.gov (United States)

    Gilbraith, Nathaniel; Azevedo, Inês L; Jaramillo, Paulina

    2014-12-16

    The federal government has the goal of decreasing commercial building energy consumption and pollutant emissions by incentivizing the adoption of commercial building energy codes. Quantitative estimates of code benefits at the state level that can inform the size and allocation of these incentives are not available. We estimate the state-level climate, environmental, and health benefits (i.e., social benefits) and reductions in energy bills (private benefits) of a more stringent code (ASHRAE 90.1-2010) relative to a baseline code (ASHRAE 90.1-2007). We find that reductions in site energy use intensity range from 93 MJ/m² of new construction per year (California) to 270 MJ/m² of new construction per year (North Dakota). Total annual benefits from more stringent codes total $506 million for all states, where $372 million are from reductions in energy bills, and $134 million are from social benefits. These total benefits range from $0.6 million in Wyoming to $49 million in Texas. Private benefits range from $0.38 per square meter in Washington State to $1.06 per square meter in New Hampshire. Social benefits range from $0.2 per square meter annually in California to $2.5 per square meter in Ohio. Reductions in human/environmental damages and future climate damages account for nearly equal shares of social benefits.
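
    The aggregation behind such state totals is simple arithmetic: per-square-meter private and social benefits are summed and multiplied by annual new-construction floor area. The sketch below uses placeholder per-m² figures and entirely invented construction areas, so its totals are illustrative only, not the paper's estimates.

    ```python
    # Back-of-the-envelope aggregation of per-m^2 code benefits to state
    # totals. All numbers are invented placeholders for illustration.

    states = {
        # state: (private $/m^2, social $/m^2, new construction in 1e6 m^2/yr)
        "Ohio":          (0.80, 2.50, 6.0),
        "California":    (0.60, 0.20, 12.0),
        "New Hampshire": (1.06, 0.70, 0.8),
    }

    totals = {}
    for state, (private, social, area_million_m2) in states.items():
        total_millions = (private + social) * area_million_m2  # $M per year
        share_social = social / (private + social)
        totals[state] = total_millions
        print(f"{state}: ${total_millions:.1f}M/yr, {share_social:.0%} social")
    ```
    
    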

  7. Improvements to the COBRA-TF (EPRI) computer code for steam generator analysis. Final report

    International Nuclear Information System (INIS)

    Stewart, C.W.; Barnhart, J.S.; Koontz, A.S.

    1980-09-01

    The COBRA-TF (EPRI) code has been improved and extended for pressurized water reactor steam generator analysis. New features and models have been added in the areas of subcooled boiling and heat transfer, turbulence, numerics, and global steam generator modeling. The code's new capabilities are qualified against selected experimental data and demonstrated for typical global and microscale steam generator analysis

  8. Beacon: A three-dimensional structural analysis code for bowing history of fast breeder reactor cores

    International Nuclear Information System (INIS)

    Miki, K.

    1979-01-01

    The core elements of an LMFBR are bowed due to radial gradients of both temperature and neutron flux in the core. Since all hexagonal elements are multiply supported by adjacent elements or by the restraint system, restraint forces and bending stresses are induced. In turn, these forces and stresses are relaxed by irradiation-enhanced creep of the material. The analysis of core bowing behavior requires a three-dimensional treatment of the mechanical interactions among the core elements, because the core consists of different kinds of elements and of fuel assemblies with various burnup histories. A new computational code, BEACON, has been developed for analyzing the bowing behavior of an LMFBR core in three dimensions. To evaluate mechanical interactions among core elements, the code uses the analytical method of the earlier SHADOW code, analyzing the interactions in three directions which form angles of 60° with one another. BEACON is applied to a 60° sector of a typical LMFBR core to analyze the bowing history during one equilibrium cycle; 120 core elements are treated, assuming a boundary condition of rotational symmetry. The application confirms that the code can be an effective tool for parametric studies as well as for detailed structural analysis of an LMFBR core. (orig.)

  9. OPAL reactor calculations using the Monte Carlo code serpent

    Energy Technology Data Exchange (ETDEWEB)

    Ferraro, Diego; Villarino, Eduardo [Nuclear Engineering Dept., INVAP S.E., Rio Negro (Argentina)

    2012-03-15

    In the present work the Monte Carlo cell code Serpent v1.1.14, developed by VTT, is used to model the MTR fuel assemblies (FA) and control rods (CR) of the OPAL (Open Pool Australian Light-water) reactor in order to obtain few-group constants with burnup dependence for use in the already developed reactor core models. These core calculations are performed with the CITVAP 3-D diffusion code, a well-known reactor code based on CITATION. The results are compared with those obtained by the deterministic calculation line used by INVAP, which uses the collision-probability cell code Condor to obtain few-group constants, and finally with experimental data from reactor records for several operating cycles. Several evaluations are performed, including code-to-code comparisons at the cell and core levels and calculation-experiment comparisons at the core level, in order to assess the actual capabilities of the Serpent code. (author)

  10. Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.

    Science.gov (United States)

    Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic

    2017-03-01

    Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes, with average AUC scores of 0.79 and 0.70, respectively. For fine-grained coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that performs better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
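
    The AUC metric used for the comparison above has a simple interpretation: it is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. The scores below are invented, standing in for L-LDA and lasso predictions of a session-level code.

    ```python
    # Sketch of an ROC-AUC comparison between two classifiers, computed
    # directly from the rank interpretation of AUC. Data are invented.

    def auc(labels, scores):
        """AUC = P(score of a positive > score of a negative), ties count 1/2."""
        pos = [s for y, s in zip(labels, scores) if y == 1]
        neg = [s for y, s in zip(labels, scores) if y == 0]
        wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
        return wins / (len(pos) * len(neg))

    labels       = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
    scores_llda  = [0.9, 0.8, 0.7, 0.6, 0.3, 0.85, 0.2, 0.4, 0.5, 0.1]
    scores_lasso = [0.7, 0.4, 0.8, 0.6, 0.5, 0.55, 0.3, 0.45, 0.35, 0.2]

    print(f"L-LDA-like AUC: {auc(labels, scores_llda):.2f}")
    print(f"lasso-like AUC: {auc(labels, scores_lasso):.2f}")
    ```
    
    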

  11. A good performance watermarking LDPC code used in high-speed optical fiber communication system

    Science.gov (United States)

    Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue

    2015-07-01

    A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, a more accurate estimate of the noise level in the fiber channel can be obtained. This estimate is then used to modify the probability distribution function (PDF) used in the initialization of the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Moreover, at a cost of about 2.4% additional redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
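
    The core trick (known inserted bits let the receiver estimate the channel noise level, which then sets the channel values that initialize BP decoding) can be sketched for a BPSK-over-AWGN channel. The modulation, watermark pattern and all parameters below are illustrative assumptions, not the paper's 128 Gb/s PDM-DQPSK setup.

    ```python
    # Sketch of watermark-based noise estimation: residuals on known bits
    # give a noise-variance estimate, which scales the channel LLRs that
    # would seed belief-propagation decoding. Parameters are illustrative.
    import math
    import random

    random.seed(1)
    sigma_true = 0.8
    watermark = [1, 0, 1, 1, 0, 0, 1, 0] * 16     # 128 known bits

    # BPSK map 0 -> +1, 1 -> -1, plus additive Gaussian noise
    rx = [(1 - 2 * b) + random.gauss(0.0, sigma_true) for b in watermark]

    # Noise variance estimated from residuals on the known watermark bits
    residuals = [r - (1 - 2 * b) for b, r in zip(watermark, rx)]
    sigma_est = math.sqrt(sum(e * e for e in residuals) / len(residuals))

    # Channel LLRs for an AWGN/BPSK channel: LLR = 2 r / sigma^2
    llr = [2 * r / sigma_est**2 for r in rx]

    print(f"estimated sigma: {sigma_est:.3f} (true {sigma_true})")
    ```

    A mismatched sigma distorts every LLR by the same factor, which is exactly the degradation the watermark estimate is meant to avoid.
    
    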

  12. Object-Oriented Parallel Particle-in-Cell Code for Beam Dynamics Simulation in Linear Accelerators

    International Nuclear Information System (INIS)

    Qiang, J.; Ryne, R.D.; Habib, S.; Decky, V.

    1999-01-01

    In this paper, we present an object-oriented three-dimensional parallel particle-in-cell code for beam dynamics simulation in linear accelerators. A two-dimensional parallel domain decomposition approach is employed within a message-passing programming paradigm, along with dynamic load balancing. The object-oriented software design gives the code better maintainability, reusability, and extensibility compared with conventional structure-based code, and also helps to encapsulate the details of the communication syntax. Performance tests on SGI/Cray T3E-900 and SGI Origin 2000 machines show good scalability of the object-oriented code. Other important features of this code include symplectic integration with linear maps of external focusing elements and the use of z as the independent variable, as is typical in accelerators. A successful application was made to simulate beam transport through three superconducting sections in the APT linac design

  13. Multimedia signal coding and transmission

    CERN Document Server

    Ohm, Jens-Rainer

    2015-01-01

    This textbook covers the theoretical background of one- and multidimensional signal processing, statistical analysis and modelling, coding and information theory with regard to the principles and design of image, video and audio compression systems. The theoretical concepts are augmented by practical examples of algorithms for multimedia signal coding technology, and related transmission aspects. On this basis, principles behind multimedia coding standards, including most recent developments like High Efficiency Video Coding, can be well understood. Furthermore, potential advances in future development are pointed out. Numerous figures and examples help to illustrate the concepts covered. The book was developed on the basis of a graduate-level university course, and most chapters are supplemented by exercises. The book is also a self-contained introduction both for researchers and developers of multimedia compression systems in industry.

  14. Description of the COMRADEX code

    International Nuclear Information System (INIS)

    Spangler, G.W.; Boling, M.; Rhoades, W.A.; Willis, C.A.

    1967-01-01

    The COMRADEX Code is discussed briefly and instructions are provided for the use of the code. The subject code was developed for calculating doses from hypothetical power reactor accidents. It permits the user to analyze four successive levels of containment with time-varying leak rates. Filtration, cleanup, fallout and plateout in each containment shell can also be analyzed. The doses calculated include the direct gamma dose from the containment building, the internal doses to as many as 14 organs including the thyroid, bone, lung, etc. from inhaling the contaminated air, and the external gamma doses from the cloud. While further improvements are needed, such as a provision for calculating doses from fallout, rainout and washout, the present code capabilities have a wide range of applicability for reactor accident analysis
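
    The four-nested-containment idea described above can be sketched as a chain of compartments: activity leaks from shell to shell at (possibly time-varying) rates, with first-order cleanup (filtration, plateout) removing activity in each shell, until a residual fraction escapes to the environment. The forward-Euler model and every rate below are invented placeholders; no dose conversion is attempted.

    ```python
    # Toy four-shell containment leakage model with time-varying leak rates
    # and per-shell cleanup, in the spirit of the COMRADEX containment chain.
    # All rates are illustrative placeholders.

    n_shells = 4
    activity = [1.0, 0.0, 0.0, 0.0]    # fraction of source term in each shell
    cleanup = [0.01, 0.005, 0.0, 0.0]  # 1/h, filtration/plateout per shell

    def leak_rate(shell, t):
        """Fraction per hour leaking out of `shell`; varies with time."""
        base = [0.05, 0.02, 0.01, 0.001]
        return base[shell] * (2.0 if t < 1.0 else 1.0)  # early over-pressure

    dt, released = 0.01, 0.0
    for step in range(int(24 / dt)):                    # integrate 24 hours
        t = step * dt
        flows = [leak_rate(i, t) * activity[i] * dt for i in range(n_shells)]
        for i in range(n_shells):
            activity[i] -= flows[i] + cleanup[i] * activity[i] * dt
            if i + 1 < n_shells:
                activity[i + 1] += flows[i]             # leak into next shell
        released += flows[-1]                           # escape to environment

    print(f"fraction released after 24 h: {released:.2e}")
    ```

    The escaped fraction would then drive the cloud and inhalation dose calculations the abstract lists.
    
    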

  15. Cross-sectional association between ZIP code-level gentrification and homelessness among a large community-based sample of people who inject drugs in 19 US cities.

    Science.gov (United States)

    Linton, Sabriya L; Cooper, Hannah Lf; Kelley, Mary E; Karnes, Conny C; Ross, Zev; Wolfe, Mary E; Friedman, Samuel R; Jarlais, Don Des; Semaan, Salaam; Tempalski, Barbara; Sionean, Catlainn; DiNenno, Elizabeth; Wejnert, Cyprian; Paz-Bailey, Gabriela

    2017-06-20

    Housing instability has been associated with poor health outcomes among people who inject drugs (PWID). This study investigates the associations of local-level housing and economic conditions with homelessness among a large sample of PWID, which is an underexplored topic to date. PWID in this cross-sectional study were recruited from 19 large cities in the USA as part of National HIV Behavioral Surveillance. PWID provided self-reported information on demographics, behaviours and life events. Homelessness was defined as residing on the street, in a shelter, in a single room occupancy hotel, or in a car or temporarily residing with friends or relatives any time in the past year. Data on county-level rental housing unaffordability and demand for assisted housing units, and ZIP code-level gentrification (eg, index of percent increases in non-Hispanic white residents, household income, gross rent from 1990 to 2009) and economic deprivation were collected from the US Census Bureau and Department of Housing and Urban Development. Multilevel models evaluated the associations of local economic and housing characteristics with homelessness. Sixty percent (5394/8992) of the participants reported homelessness in the past year. The multivariable model demonstrated that PWID living in ZIP codes with higher levels of gentrification had higher odds of homelessness in the past year (gentrification: adjusted OR=1.11, 95% CI=1.04 to 1.17). Additional research is needed to determine the mechanisms through which gentrification increases homelessness among PWID to develop appropriate community-level interventions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Portable LQCD Monte Carlo code using OpenACC

    Science.gov (United States)

    Bonati, Claudio; Calore, Enrico; Coscetti, Simone; D'Elia, Massimo; Mesiti, Michele; Negro, Francesco; Fabio Schifano, Sebastiano; Silvi, Giorgio; Tripiccione, Raffaele

    2018-03-01

    Varying from multi-core CPU processors to many-core GPUs, the present scenario of HPC architectures is extremely heterogeneous. In this context, code portability is increasingly important for easy maintainability of applications; this is relevant in scientific computing where code changes are numerous and frequent. In this talk we present the design and optimization of a state-of-the-art production level LQCD Monte Carlo application, using the OpenACC directives model. OpenACC aims to abstract parallel programming to a descriptive level, where programmers do not need to specify the mapping of the code on the target machine. We describe the OpenACC implementation and show that the same code is able to target different architectures, including state-of-the-art CPUs and GPUs.

  17. Nuclear data for fusion: Validation of typical pre-processing methods for radiation transport calculations

    International Nuclear Information System (INIS)

    Hutton, T.; Sublet, J.C.; Morgan, L.; Leadbeater, T.W.

    2015-01-01

    Highlights: • We quantify the effect of processing nuclear data from ENDF to ACE format. • We consider the differences between fission and fusion angular distributions. • C-nat(n,el) at 2.0 MeV has a 0.6% deviation between original and processed data. • Fe-56(n,el) at 14.1 MeV has an 11.0% deviation between original and processed data. • Processed data do not accurately depict ENDF distributions for fusion energies. - Abstract: Nuclear data form the basis of the radiation transport codes used to design and simulate the behaviour of nuclear facilities, such as the ITER and DEMO fusion reactors. Typically these data and codes are biased towards fission and high-energy physics applications yet are still applied to fusion problems. With increasing interest in fusion applications, the lack of fusion-specific codes and relevant data libraries is becoming increasingly apparent. Industry-standard radiation transport codes require pre-processing of the evaluated data libraries prior to use in simulation. Historically these methods focus on speed of simulation at the cost of accurate data representation. For legacy applications this has not been a major concern, but current fusion needs differ significantly. Pre-processing reconstructs the differential and double-differential interaction cross sections with a coarse binned structure, or more recently as a tabulated cumulative distribution function. This work looks at the validity of applying these processing methods to data used in fusion-specific calculations in comparison to fission. The relative effects of applying this pre-processing mechanism to both fission- and fusion-relevant reaction channels are demonstrated, highlighting the poor representation of these distributions in the fusion energy regime. For the C-nat(n,el) reaction at 2.0 MeV, the binned differential cross section deviates from the original data by 0.6% on average. For the Fe-56(n,el) reaction at 14.1 MeV, the deviation increases to 11.0%. We
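The deviations quoted above can be reproduced in spirit with a toy experiment: reconstruct a smooth angular distribution with a coarse piecewise-constant binning and measure the average relative error. The distributions below are synthetic stand-ins, not evaluated ENDF data; the sketch only illustrates why a strongly anisotropic (fusion-energy) distribution suffers more from binning than a nearly isotropic one.

```python
import numpy as np

def binned_deviation(pdf, n_bins=32, n_samples=2000):
    """Average relative deviation between a smooth angular distribution
    and its coarse piecewise-constant (binned) reconstruction."""
    mu = np.linspace(-1.0, 1.0, n_samples)          # cosine of scattering angle
    f = pdf(mu)
    edges = np.linspace(-1.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(mu, edges) - 1, 0, n_bins - 1)
    # piecewise-constant reconstruction: bin-averaged density
    binned = np.array([f[idx == b].mean() for b in range(n_bins)])[idx]
    return np.mean(np.abs(binned - f) / f)

# Synthetic distributions (assumption: illustrative shapes only).
peaked = lambda mu: np.exp(2.0 * mu)     # forward-peaked, fusion-like
smooth = lambda mu: 1.0 + 0.1 * mu       # nearly isotropic, fission-like

# The more anisotropic distribution incurs the larger binning error.
assert binned_deviation(peaked) > binned_deviation(smooth)
```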

  18. QUIC: a chemical kinetics code for use with the chemical equilibrium code QUIL

    International Nuclear Information System (INIS)

    Lunsford, J.L.

    1977-10-01

    A chemical rate kinetics code QUIC is described, along with a support code RATE. QUIC is designed to allow chemical kinetics calculations on a wide variety of chemical environments while operating in the overlay environment of the chemical equilibrium code QUIL. QUIC depends upon a rate-data library called LIBR. This library is maintained by RATE. RATE enters into the library all reactions in a standardized format. The code QUIC, operating in conjunction with QUIL, is interactive and written to be used from a remote terminal, with paging control provided. Plotted output is also available

  19. Radioactive releases of nuclear power plants: the code ASTEC

    International Nuclear Information System (INIS)

    Sdouz, G.; Pachole, M.

    1999-11-01

    In order to adopt potential countermeasures to protect the population during the course of an accident in a nuclear power plant a fast prediction of the radiation exposure is necessary. The basic input value for such a dispersion calculation is the source term, which is the description of the physical and chemical behavior of the released radioactive nuclides. Based on a source term data base a pilot system has been developed to determine a relevant source term and to generate the input file for the dispersion code TAMOS of the Zentralanstalt fuer Meteorologie und Geodynamik (ZAMG). This file can be sent directly as an attachment of e-mail to the TAMOS user for further processing. The source terms for 56 European nuclear power plant units are included in the pilot version of the code ASTEC (Austrian Source Term Estimation Code). The use of the system is demonstrated in an example based on an accident in the unit TEMELIN-1. In order to calculate typical core inventories for the data bank the international computer code OBIGEN 2.1 was installed and applied. The report has been completed with a discussion on the optimal data transfer. (author)

  20. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its level of optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
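The error-load measure behind both the statistical and the engineering approach can be sketched on a deliberately tiny toy. Everything below is hypothetical: a 16-codon code over two-base codons with an arbitrary property scale, not the canonical translation table. It only illustrates why a "block" code, in which similar codons encode similar amino-acid properties, scores better than most random assignments.

```python
import itertools, random

BASES = "UCAG"
CODONS = ["".join(p) for p in itertools.product(BASES, repeat=2)]  # toy 16-codon code

def mutation_cost(code):
    """Mean squared change of an amino-acid property over all
    single-point mutations (the error-load measure of the statistical approach)."""
    total = n = 0
    for c in CODONS:
        for pos in (0, 1):
            for b in BASES:
                if b == c[pos]:
                    continue
                mutant = c[:pos] + b + c[pos + 1:]
                total += (code[c] - code[mutant]) ** 2
                n += 1
    return total / n

# Hypothetical 'block' code: the property depends only on the second base,
# mimicking how the canonical code groups similar amino acids.
PROP = [0.0, 1.0, 5.0, 6.0]
block_code = {c: PROP[BASES.index(c[1])] for c in CODONS}

random.seed(1)
vals = list(block_code.values())
rand_costs = [
    mutation_cost(dict(zip(CODONS, random.sample(vals, len(vals)))))
    for _ in range(300)
]
print(f"block code: {mutation_cost(block_code):.2f}, "
      f"mean random code: {sum(rand_costs) / len(rand_costs):.2f}")
```

A genetic algorithm as used in the paper would then search this same fitness landscape for codes with still lower cost.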

  1. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Background: As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results: Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions: Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its level of optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  2. A two-locus global DNA barcode for land plants: the coding rbcL gene complements the non-coding trnH-psbA spacer region.

    Science.gov (United States)

    Kress, W John; Erickson, David L

    2007-06-06

    A useful DNA barcode requires sufficient sequence variation to distinguish between species and ease of application across a broad range of taxa. Discovery of a DNA barcode for land plants has been limited by intrinsically lower rates of sequence evolution in plant genomes than that observed in animals. This low rate has complicated the trade-off in finding a locus that is universal and readily sequenced and has sufficiently high sequence divergence at the species-level. Here, a global plant DNA barcode system is evaluated by comparing universal application and degree of sequence divergence for nine putative barcode loci, including coding and non-coding regions, singly and in pairs across a phylogenetically diverse set of 48 genera (two species per genus). No single locus could discriminate among species in a pair in more than 79% of genera, whereas discrimination increased to nearly 88% when the non-coding trnH-psbA spacer was paired with one of three coding loci, including rbcL. In silico trials were conducted in which DNA sequences from GenBank were used to further evaluate the discriminatory power of a subset of these loci. These trials supported the earlier observation that trnH-psbA coupled with rbcL can correctly identify and discriminate among related species. A combination of the non-coding trnH-psbA spacer region and a portion of the coding rbcL gene is recommended as a two-locus global land plant barcode that provides the necessary universality and species discrimination.
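The two-locus idea above, pairing a slowly evolving coding region with a more variable non-coding spacer, can be illustrated with uncorrected p-distances on toy aligned fragments. The sequences below are invented for illustration, not real rbcL or trnH-psbA data.

```python
def p_distance(a, b):
    """Uncorrected pairwise distance: fraction of differing aligned sites."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

# Toy aligned fragments for two species (hypothetical sequences):
# a conserved coding locus and a faster-evolving non-coding spacer.
coding_sp1, coding_sp2 = "ATGGCTTACGATCCGA", "ATGGCTTACGACCCGA"
spacer_sp1, spacer_sp2 = "TTATCGAAGGCTAGCT", "TTTTCGAACGCAAGCT"

d_coding = p_distance(coding_sp1, coding_sp2)
d_spacer = p_distance(spacer_sp1, spacer_sp2)
# Two-locus distance: pool the sites from both loci, as when combining
# rbcL with the trnH-psbA spacer.
d_pair = p_distance(coding_sp1 + spacer_sp1, coding_sp2 + spacer_sp2)
print(d_coding, d_spacer, d_pair)
```

The spacer contributes most of the species-level divergence, while the coding locus keeps the combined marker alignable and universal, which is the trade-off the paper exploits.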

  3. FALCON Code Simulation for Verification of Fuel Preconditioning Guideline

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee-Hun; Kwon, Oh-Hyun; Kim, Hong-Jin; Kim, Yong-Hwan [KEPCO Nuclear Fuel Co. Ltd., Daejeon (Korea, Republic of)

    2015-10-15

    The magnitude and rate of power increases are key factors in the PCI failure process. KEPCO NF (KNF) provides operational restrictions called the fuel preconditioning guideline (FPG) to mitigate PCI failures. The FPG contains recommended power maneuvering restrictions that should be followed when the KNF-supplied fuel is being operated in-reactor. This guideline typically includes controlled power ramp rates, threshold power levels to initiate controlled ramp rates, and restrictions on the operating conditions that impact the potential for PCI failure. The purpose of the FPG is to allow time for stress relaxation to reduce cladding stress buildup during power maneuvers. Two general approaches have been adopted in the development of FPG to mitigate PCI failure in operating commercial reactors. The first approach relies primarily on past operational experience and power ramp tests. The second one uses an analytical methodology where a figure-of-merit representative of PCI vulnerability, generally cladding hoop stress, is calculated using a fuel performance code. FALCON simulation enables the identification of a PCI limit parameter, typically cladding hoop stress, which can be used to evaluate the power maneuvering restrictions in the FPG. The PCI analysis assesses the cladding hoop stress under various power ramp conditions. The startup ramp rate does not affect PCI failure below 50% of rated thermal power.

  4. Developing a computational tool for predicting physical parameters of a typical VVER-1000 core based on artificial neural network

    International Nuclear Information System (INIS)

    Mirvakili, S.M.; Faghihi, F.; Khalafi, H.

    2012-01-01

    Highlights: ► Thermal–hydraulic parameters of a VVER-1000 core are predicted based on an artificial neural network (ANN). ► Required data for ANN training are generated with a modified COBRA-EN code, and the codes are linked using MATLAB software. ► Based on the ANN method, the average and maximum temperatures of fuel and clad as well as the MDNBR of each FA are predicted. -- Abstract: The main goal of the present article is to design a computational tool to predict physical parameters of the VVER-1000 nuclear reactor core based on artificial neural network (ANN), taking into account a detailed physical model of the fuel rods and coolant channels in a fuel assembly. Predictions of thermal characteristics of fuel, clad and coolant are performed using a cascade feed-forward ANN based on the linear fission power distribution and power peaking factors of FAs and hot-channel factors (which are found based on our previous neutronic calculations). A software package has been developed to prepare the required data for ANN training; it applies a modified COBRA-EN code for sub-channel analysis and links the codes using the MATLAB software. Based on the current estimation system, five main core TH parameters are predicted: the average and maximum temperatures of fuel and clad as well as the minimum departure from nucleate boiling ratio (MDNBR) for each FA. To obtain the best conditions for training the considered ANNs, a comprehensive sensitivity study has been performed to examine the effects of variation of hidden neurons, hidden layers, transfer functions, and the learning algorithms on the training and simulation results. Performance evaluation results show that the developed ANN can be trained to estimate the core TH parameters of a typical VVER-1000 reactor quickly without loss of accuracy.
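A minimal version of such an estimator can be sketched as a one-hidden-layer feed-forward network trained by gradient descent on synthetic data. The mapping below is an invented linear surrogate standing in for COBRA-EN sub-channel results, and the network is far simpler than the cascade architecture described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set (assumption: a stand-in for COBRA-EN output):
# inputs are a FA power peaking factor and inlet temperature; the target
# is a smooth surrogate for maximum clad temperature.
X = rng.uniform([1.0, 560.0], [1.6, 580.0], size=(200, 2))
y = 600.0 + 80.0 * (X[:, 0] - 1.0) + 0.9 * (X[:, 1] - 560.0)

# Normalize, then train a single-hidden-layer tanh network on MSE.
Xn = (X - X.mean(0)) / X.std(0)
yn = (y - y.mean()) / y.std()
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0

losses = []
for _ in range(500):
    h = np.tanh(Xn @ W1 + b1)
    err = (h @ W2 + b2) - yn
    losses.append(np.mean(err ** 2))
    # Backpropagation (gradient scaled by 1/N; factor 2 folded into lr).
    gW2 = h.T @ err / len(err); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = Xn.T @ gh / len(err); gb1 = gh.mean(0)
    lr = 0.1
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.5f}")
```

Once trained on code-generated data, such a surrogate answers queries orders of magnitude faster than rerunning the sub-channel analysis, which is the point of the paper's approach.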

  5. Aquelarre. A computer code for fast neutron cross sections from the statistical model

    International Nuclear Information System (INIS)

    Guasp, J.

    1974-01-01

    A Fortran V computer code for the Univac 1108/6, using the partial statistical (or compound nucleus) model, is described. The code calculates fast neutron cross sections for the (n, n'), (n, p), (n, d) and (n, α) reactions, and the angular distributions and Legendre moments for the (n, n) and (n, n') processes, in heavy and intermediate spherical nuclei. A local optical model with spin-orbit interaction for each level is employed, allowing for the width fluctuation and Moldauer corrections, as well as the inclusion of discrete and continuous levels. (Author) 67 refs

  6. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    International Nuclear Information System (INIS)

    Baratta, A.J.

    1997-01-01

    To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.

  7. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but the containment response must also be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing are determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
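The per-time-step coupling described above can be sketched with two toy lumped models that exchange boundary conditions once per step: the "system" model hands the containment the break flow, and the containment hands back its pressure as the system's boundary condition. The physics below is invented for illustration, not RELAP/CONTAIN behaviour.

```python
# Toy two-code coupling loop (hypothetical lumped models, MPa-like units).

def system_step(p_primary, p_containment, dt, k=0.8):
    """Break flow driven by the pressure difference; primary depressurizes."""
    flow = max(0.0, k * (p_primary - p_containment))
    return p_primary - flow * dt, flow

def containment_step(p_containment, flow, dt, c=0.3):
    """Containment pressurizes as mass/energy is discharged into it."""
    return p_containment + c * flow * dt

p_sys, p_cont, dt = 15.0, 0.1, 0.05
for _ in range(400):
    # Data exchanged at every time step, as in the linkage codes:
    p_sys, flow = system_step(p_sys, p_cont, dt)   # containment pressure in
    p_cont = containment_step(p_cont, flow, dt)    # break flow out

print(f"final: primary {p_sys:.3f}, containment {p_cont:.3f}")
```

Because the containment pressure feeds back into the break flow at every step, the two models equilibrate, which is exactly the behaviour the one-way fixed-boundary-condition approach cannot capture.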

  8. Environmental remediation of high-level nuclear waste in geological repository. Modified computer code creates ultimate benchmark in natural systems

    International Nuclear Information System (INIS)

    Peter, Geoffrey J.

    2011-01-01

    Isolation of high-level nuclear waste in permanent geological repositories has been a major concern for over 30 years, due to the migration of dissolved radionuclides reaching the water table (10,000-year compliance period) as water moves through the repository and the surrounding area. Repository assessments are based on mathematical models that must allow for long-term geological phenomena and involve many approximations; however, experimental verification of long-term processes is impossible. Countries must determine if geological disposal is adequate for permanent storage. Many countries have extensively studied different aspects of safely confining the highly radioactive waste in an underground repository based on the unique geological composition at their selected repository location. This paper discusses two computer codes developed by various countries to study the coupled thermal, mechanical, and chemical processes in these environments, and the migration of radionuclides. Further, this paper presents the results of a case study of the Magma-hydrothermal (MH) computer code, modified by the author, applied to nuclear waste repository analysis. The MH code was verified by simulating natural systems, thus creating the ultimate benchmark. This approach is based on processes similar to those expected near waste repositories that currently occur in natural systems. (author)

  9. Shared temporoparietal dysfunction in dyslexia and typical readers with discrepantly high IQ.

    Science.gov (United States)

    Hancock, Roeland; Gabrieli, John D E; Hoeft, Fumiko

    2016-12-01

    It is currently believed that reading disability (RD) should be defined by reading level without regard to broader aptitude (IQ). There is debate, however, about how to classify individuals who read in the typical range but less well than would be expected by their higher IQ. We used functional magnetic resonance imaging (fMRI) in 49 children to examine whether those with typical, but discrepantly low reading ability relative to IQ, show dyslexia-like activation patterns during reading. Children who were typical readers with high-IQ discrepancy showed reduced activation in left temporoparietal neocortex relative to two control groups of typical readers without IQ discrepancy. This pattern was consistent and spatially overlapping with results in children with RD compared to typically reading children. The results suggest a shared neurological atypicality in regions associated with phonological processing between children with dyslexia and children with typical reading ability that is substantially below their IQ.

  10. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  11. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.
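The evaluation setting used in these records, BER versus Eb/N0 over a flat Rayleigh channel, can be reproduced by Monte Carlo for an uncoded BPSK baseline. This sketch does not implement the adaptive wavelet coding itself; it only shows the channel model and metric against which such a scheme would be compared.

```python
import numpy as np

rng = np.random.default_rng(42)

def ber_bpsk(ebn0_db, n=200_000, fading=True):
    """Monte Carlo bit error rate of coherently detected BPSK, optionally
    over flat Rayleigh fading (uncoded baseline; wavelet coding not modelled)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    bits = rng.integers(0, 2, n)
    symbols = 2.0 * bits - 1.0
    # Rayleigh channel gain: magnitude of a unit-power complex Gaussian.
    gain = (np.hypot(rng.normal(0, np.sqrt(0.5), n),
                     rng.normal(0, np.sqrt(0.5), n)) if fading else 1.0)
    received = gain * symbols + rng.normal(0, np.sqrt(0.5 / ebn0), n)
    return np.mean((received > 0).astype(int) != bits)

print(f"BER at 10 dB — Rayleigh: {ber_bpsk(10):.4f}, AWGN: {ber_bpsk(10, fading=False):.2e}")
```

The gap between the Rayleigh and AWGN curves at the same Eb/N0 is the fading penalty that robust coding schemes, such as the wavelet coding above, aim to close.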

  12. NESTLE: A nodal kinetics code

    International Nuclear Information System (INIS)

    Al-Chalabi, R.M.; Turinsky, P.J.; Faure, F.-X.; Sarsour, H.N.; Engrand, P.R.

    1993-01-01

    The NESTLE nodal kinetics code has been developed for utilization as a stand-alone code for steady-state and transient reactor neutronic analysis and for incorporation into system transient codes, such as TRAC and RELAP. The latter is desirable to increase the simulation fidelity over that obtained from currently employed zero- and one-dimensional neutronic models and now feasible due to advances in computer performance and efficiency of nodal methods. As a stand-alone code, requirements are that it operate on a range of computing platforms from memory-limited personal computers (PCs) to supercomputers with vector processors. This paper summarizes the features of NESTLE that reflect the utilization and requirements just noted

  13. Fractal Image Coding Based on a Fitting Surface

    Directory of Open Access Journals (Sweden)

    Sheng Bi

    2014-01-01

    A no-search fractal image coding method based on a fitting surface is proposed. In our research, an improved gray-level transform with a fitting surface is introduced. One advantage of this method is that the fitting surface is used for both the range and domain blocks, so one set of parameters can be saved. Another advantage is that the fitting surface can approximate the range and domain blocks better than the previous fitting planes; this results in smaller block matching errors and better decoded image quality. Since the no-search and quadtree techniques are adopted, smaller matching errors also imply fewer block matchings, which results in a faster encoding process. Moreover, by combining all the fitting surfaces, a fitting surface image (FSI) is also proposed to speed up the fractal decoding. Experiments show that our proposed method can yield superior performance over the other three methods. Relative to the range-averaged image, the FSI provides a faster fractal decoding process. Finally, by combining the proposed fractal coding method with JPEG, a hybrid coding method is designed which can provide higher PSNR than JPEG while maintaining the same Bpp.
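The fitting-surface component can be sketched as a per-block least-squares fit of a low-order surface to the gray levels, with the residual being what the fractal matching then has to capture. The quadratic basis below is an assumption for illustration; the paper's exact surface model is not specified in this record.

```python
import numpy as np

def fit_surface(block):
    """Least-squares surface z = a + bx + cy + dxy + ex^2 + fy^2 for one
    image block (a sketch of the gray-level transform component)."""
    n = block.shape[0]
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    A = np.stack([np.ones_like(x), x, y, x * y, x * x, y * y], -1)
    A = A.reshape(-1, 6).astype(float)
    coef, *_ = np.linalg.lstsq(A, block.reshape(-1).astype(float), rcond=None)
    return (A @ coef).reshape(n, n)

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 100 + 3 * x + 2 * y + 0.5 * x * y + rng.normal(0, 1.0, (8, 8))

surface = fit_surface(block)
residual = block - surface      # what remains for block matching to encode
print(f"mean |residual|: {np.abs(residual).mean():.3f}")
```

Because the smooth trend of each block is absorbed by the surface, the residuals to be matched are smaller, which is the mechanism behind the smaller matching errors reported above.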

  14. SHEAT: a computer code for probabilistic seismic hazard analysis, user's manual

    International Nuclear Information System (INIS)

    Ebisawa, Katsumi; Kondo, Masaaki; Abe, Kiyoharu; Tanaka, Toshiaki; Takani, Michio.

    1994-08-01

    The SHEAT code developed at Japan Atomic Energy Research Institute is for probabilistic seismic hazard analysis, which is one of the tasks needed for seismic Probabilistic Safety Assessment (PSA) of a nuclear power plant. Seismic hazard is defined as an annual exceedance frequency of occurrence of earthquake ground motions at various levels of intensity at a given site. With the SHEAT code, seismic hazard is calculated by the following two steps: (1) Modeling of earthquake generation around a site. Future earthquake generation (locations, magnitudes and frequencies of postulated earthquakes) is modelled based on the historical earthquake records, active fault data and expert judgement. (2) Calculation of probabilistic seismic hazard at the site. An earthquake ground motion is calculated for each postulated earthquake using an attenuation model taking into account its standard deviation. Then the seismic hazard at the site is calculated by summing the frequencies of ground motions by all the earthquakes. This document is the user's manual of the SHEAT code. It includes: (1) Outlines of the code, which include overall concept, logical process, code structure, data file used and special characteristics of the code, (2) Functions of subprograms and analytical models in them, (3) Guidance of input and output data, and (4) Sample run results. The code has been widely used at JAERI to analyze seismic hazard at various nuclear power plant sites in Japan. (author)
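Step (2) above, summing over all postulated earthquakes the probability that each one's ground motion exceeds a given level, can be sketched with a lognormal attenuation-scatter model. The three-source inventory below is hypothetical (invented rates, median accelerations and scatter), not a SHEAT input.

```python
import math

def exceedance_frequency(a_level, sources):
    """Annual frequency that peak ground acceleration exceeds a_level,
    summed over modelled sources, with lognormal attenuation scatter."""
    total = 0.0
    for rate, median_a, sigma_ln in sources:
        z = (math.log(a_level) - math.log(median_a)) / sigma_ln
        p_exceed = 0.5 * math.erfc(z / math.sqrt(2.0))  # P[ln A > ln a_level]
        total += rate * p_exceed
    return total

# Hypothetical source model: (annual rate, median PGA at the site in g, sigma).
sources = [(0.1, 0.05, 0.6), (0.01, 0.20, 0.6), (0.001, 0.50, 0.6)]

for a in (0.05, 0.1, 0.2, 0.4):
    print(f"annual P[A > {a:.2f} g] = {exceedance_frequency(a, sources):.2e}")
```

Evaluating this sum over a range of acceleration levels traces out the hazard curve that the code reports for a site.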

  15. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  16. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 1. Guideline approach

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. This volume includes specific recommendations for decision-making managers and site operators on how to use these guidelines. The more detailed discussions about the code selection approach are provided. 242 refs., 6 figs.

  17. Guidelines for selecting codes for ground-water transport modeling of low-level waste burial sites. Volume 1. Guideline approach

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1985-05-01

    This document was written for the National Low-Level Waste Management Program to provide guidance for managers and site operators who need to select ground-water transport codes for assessing shallow-land burial site performance. The guidance given in this report also serves the needs of applications-oriented users who work under the direction of a manager or site operator. The guidelines are published in two volumes designed to support the needs of users having different technical backgrounds. An executive summary, published separately, gives managers and site operators an overview of the main guideline report. This volume includes specific recommendations for decision-making managers and site operators on how to use these guidelines. The more detailed discussions about the code selection approach are provided. 242 refs., 6 figs.

  18. Content Layer progressive Coding of Digital Maps

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Ole Riis

    2002-01-01

    A new lossless context-based method is presented for content-progressive coding of limited bits/pixel images, such as maps, company logos, etc., common on the World Wide Web. Progressive encoding is achieved by encoding the image in content layers based on color level or other predefined...... information. Information from already coded layers is used when coding subsequent layers. This approach is combined with efficient template-based context bilevel coding, context collapsing methods for multilevel images and arithmetic coding. Relative pixel patterns are used to collapse contexts. Expressions...... for calculating the resulting number of contexts are given. The new methods outperform existing schemes for coding digital maps and in addition provide progressive coding. Compared to the state-of-the-art PWC coder, the compressed size is reduced to 50-70% on our layered map test images.

  19. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation degrades by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB (PSNR) below that of the phase coded aperture reconstructions.
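The quality metric used in the comparison is standard and easy to reproduce. A minimal sketch of the PSNR computation, with synthetic stand-ins for the two reconstruction types (the noise levels are made up for illustration, not taken from the paper):

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec_phase = ref + 0.010 * rng.standard_normal(ref.shape)  # stand-in: phase coded aperture result
rec_block = ref + 0.013 * rng.standard_normal(ref.shape)  # stand-in: block-unblock approximation
print(psnr(ref, rec_phase) - psnr(ref, rec_block))        # positive gap, a few dB
```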

  20. Why comply with a code of ethics?

    Science.gov (United States)

    Spielthenner, Georg

    2015-05-01

    A growing number of professional associations and occupational groups are creating codes of ethics with the goal of guiding their members, protecting service users, and safeguarding the reputation of the profession. There is a great deal of literature dealing with the question of the extent to which ethical codes can achieve their desired objectives. The present paper does not contribute to this debate. Its aim is rather to investigate how rational it is to comply with codes of conduct. It is natural and virtually inevitable for a reflective person to ask why one should pay any attention to ethical codes, in particular if following a code is not in one's own interest. In order to achieve the aim of this paper, I shall (in the "Quasi-reasons for complying with an ethical code" section) discuss reasons that only appear to be reasons for complying with a code. In the "Code-independent reasons" section, I shall present genuine practical reasons that, however, turn out to be reasons of the wrong kind. The "Code-dependent reasons" section finally presents the most important reasons for complying with ethical codes. The paper argues that while ethical codes do not necessarily yield reasons for action, professionals can have genuine reasons for complying with a code, which may, however, be rather weak and easily overridden by reasons for deviating from the code.

  1. Is our Universe typical?

    International Nuclear Information System (INIS)

    Gurzadyan, V.G.

    1988-01-01

    The problem of the typicalness of the Universe - as a dynamical system possessing both regular and chaotic regions of positive measure in phase space - is raised and discussed. Two dynamical systems are considered: 1) the observed Universe as a hierarchy of systems of N gravitating bodies; 2) a (3+1)-manifold with matter evolving according to the Wheeler-DeWitt equation in superspace with the Hawking boundary condition of compact metrics. It is shown that the observed Universe is typical. There is no unambiguous answer for the second system yet. If it is typical too, then the same present state of the Universe could have originated from an infinite number of different initial conditions, the restoration of which is practically impossible at present. 35 refs.

  2. Analysis of the sodium concrete interactions with the NABE code

    International Nuclear Information System (INIS)

    Soule, N.

    1989-01-01

    Experimental studies have been performed in France to investigate sodium-concrete interactions: thermal decomposition of concrete, specific chemical reactions, experimentation in the liquid and vapour phases, and sodium-concrete interaction without liner protection. In parallel, computer codes have been developed to study the response of the containment building of a liquid metal fast breeder reactor to a sodium pool fire worsened by a sodium-concrete interaction: the NABE code. This code takes into account: a) sodium combustion; b) thermal decomposition of concrete with the associated chemical reactions (liquid sodium-water vapour, liquid sodium-carbon dioxide, liquid sodium-solid compounds of concrete, hydrogen combustion); c) chemical reactions in the vapour phase; d) decay heat; e) gas and aerosol inlets/outlets; f) aerosol behaviour (sedimentation, diffusion, leakage); g) thermal exchanges. An example of a situation typical of the assessment of beyond-design-basis situations in an LMFBR is given. (author)

  3. KEWPIE2: A cascade code for the study of dynamical decay of excited nuclei

    Science.gov (United States)

    Lü, Hongliang; Marchix, Anthony; Abe, Yasuhisa; Boilley, David

    2016-03-01

    KEWPIE - a cascade code devoted to investigating the dynamical decay of excited nuclei, specially designed for treating the very low probability events related to the synthesis of super-heavy nuclei formed in fusion-evaporation reactions - has been improved and rewritten in the C++ programming language to become KEWPIE2. The current version of the code comprises various nuclear models concerning light-particle emission, the fission process and the statistical properties of excited nuclei. General features of the code, such as the numerical scheme and the main physical ingredients, are described in detail. Typical calculations performed in the present paper show that theoretical predictions are generally in accordance with experimental data. Furthermore, since the values of some input parameters can be determined neither theoretically nor experimentally, a sensitivity analysis is presented. To this end, we systematically investigate the effects of using different parameter values and reaction models on the final results. As expected, in the case of heavy nuclei, the fission process plays the most crucial role in theoretical predictions. This work should be valuable for the numerical modeling of fusion-evaporation reactions.
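The statistical backbone of such a cascade calculation can be sketched in a few lines: at each de-excitation step the nucleus either evaporates a neutron (width Γn) or fissions (width Γf), and the evaporation-residue probability is the product of the neutron branching ratios. The widths below are illustrative placeholders, not KEWPIE2 model output.

```python
# Minimal sketch of the survival-probability bookkeeping in a cascade code.
def residue_probability(widths):
    """widths: list of (gamma_n, gamma_f) pairs, one per evaporation step."""
    p = 1.0
    for gn, gf in widths:
        p *= gn / (gn + gf)  # branching ratio for surviving fission at this step
    return p

steps = [(1.0, 99.0), (1.0, 49.0), (1.0, 19.0)]  # fission-dominated: rare events
p = residue_probability(steps)
print(p)  # -> 1e-05
```

The tiny product illustrates why super-heavy synthesis calculations must handle very low probability events accurately.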

  4. S values at voxels level for 188Re and 90Y calculated with the MCNP-4C code

    International Nuclear Information System (INIS)

    Coca, M.A.; Torres, L.A.; Cornejo, N.; Martin, G.

    2008-01-01

    Full text: The MIRD formalism at the voxel level has been suggested as an optional methodology for internal radiation dosimetry calculations during internal radiation therapy in Nuclear Medicine. Voxel S values for 90Y, 131I, 32P, 99mTc and 89Sr have been published for different voxel sizes. Recently, 188Re has been proposed as a promising radionuclide for therapy due to its physical features and availability from generators. The main objective of this work was to estimate the voxel S values for 188Re in cubical geometry using the MCNP-4C code for the simulations of radiation transport and energy deposition. The mean absorbed dose to target voxels per radioactive decay in a source voxel was estimated and reported for 188Re and 90Y. A comparison of voxel S values computed with the MCNP code against the data reported in MIRD Pamphlet 17 for 90Y was performed in order to evaluate our results. (author)
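Once a table of voxel S values is available, the dose calculation itself reduces to a discrete convolution of the cumulated activity map with the S-value kernel, D(t) = Σs A(s)·S(t−s). The 1D kernel below is a made-up placeholder, not a published 188Re table; it only illustrates the mechanics.

```python
import numpy as np

# Hedged sketch of the voxel-level MIRD scheme in 1D (real tables are 3D).
s_kernel = np.array([0.02, 0.1, 1.0, 0.1, 0.02])            # S(target - source), arbitrary units
activity = np.array([0.0, 0.0, 5.0, 2.0, 0.0, 0.0, 0.0])    # cumulated activity per voxel

# D(t) = sum_s A(s) * S(t - s): a discrete convolution over source voxels
dose = np.convolve(activity, s_kernel, mode="same")
print(dose)  # peaks at the voxel holding the most activity
```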

  5. Evaluation of the plastic characteristics of piping products in relation to ASME code criteria

    International Nuclear Information System (INIS)

    Rodabaugh, E.C.; Moore, S.E.

    1978-07-01

    Theories and test data relevant to the plastic characteristics of piping products are presented and compared with the Code equations in NB-3652 for Class 1 piping and in NC/ND-3652.2 for Class 2 and Class 3 piping. Comparisons are made for (a) straight pipe, (b) elbows, (c) branch connections, and (d) tees. The status of data (or lack of data) for other piping components is discussed. Comparisons are made between available data and the Code equations for two typical piping materials, SA106 Grade B and SA312 TP304, for Code Design Limits and Service Limits A, B, C, and D. Conditions under which the Code Limits cannot be shown to be conservative from available data are pointed out. Based on the results of the study, recommendations for Code revisions are presented, along with recommendations for additional work.
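To make the kind of check concrete, here is a schematic primary-stress evaluation for straight pipe in the spirit of the NB-3652 design equations. The stress indices B1, B2, the load values, and the allowable Sm below are placeholders for illustration only; the actual indices, limits, and load combinations must be taken from the Code.

```python
import math

# Schematic Code-style primary stress check for straight pipe (illustrative
# only; B1, B2 and Sm are NOT Code values for SA106 B or SA312 TP304).
def primary_stress(P, Do, tn, Mi, B1=0.5, B2=1.0):
    """P pressure (MPa), Do outer diameter (mm), tn wall (mm), Mi moment (N*mm)."""
    I = math.pi * (Do**4 - (Do - 2 * tn) ** 4) / 64.0  # section moment of inertia
    return B1 * P * Do / (2 * tn) + B2 * Do * Mi / (2 * I)

sigma = primary_stress(P=10.0, Do=168.3, tn=7.1, Mi=5.0e6)
print(sigma < 1.5 * 138.0)  # compare against 1.5*Sm with an assumed Sm = 138 MPa
```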

  6. Depth-based Multi-View 3D Video Coding

    DEFF Research Database (Denmark)

    Zamarin, Marco

    techniques are used to extract dense motion information and generate improved candidate side information. Multiple candidates are merged employing multi-hypothesis strategies. Promising rate-distortion performance improvements compared with state-of-the-art Wyner-Ziv decoders are reported, both when texture......-view video. Depth maps are typically used to synthesize the desired output views, and the performance of view synthesis algorithms strongly depends on the accuracy of depth information. In this thesis, novel algorithms for efficient depth map compression in MVD scenarios are proposed, with particular focus...... on edge-preserving solutions. In a proposed scheme, texture-depth correlation is exploited to predict surface shapes in the depth signal. In this way depth coding performance can be improved in terms of both compression gain and edge-preservation. Another solution proposes a new intra coding mode targeted...

  7. Typicals/Típicos

    Directory of Open Access Journals (Sweden)

    Silvia Vélez

    2004-01-01

    Full Text Available Typicals is a series of 12 colour photographs digitally created from photojournalistic images from Colombia combined with "typical" craft textiles and text from guest writers. Typicals was first exhibited as photographs 50cm x 75cm in size, each with their own magnifying glass, at the Contemporary Art Space at Gorman House in Canberra, Australia, in 2000. It was then exhibited in "Feedback: Art Social Consciousness and Resistance" at Monash University Museum of Art in Melbourne, Australia, from March to May 2003. From May to June 2003 it was exhibited at the Museo de Arte de la Universidad Nacional de Colombia Santa Fé Bogotá, Colombia. In its current manifestation the artwork has been adapted from the catalogue of the museum exhibitions. It is broken up into eight pieces corresponding to the contributions of the writers. The introduction by Sylvia Vélez is the PDF file accessible via a link below this abstract. The other seven PDF files are accessible via the 'Supplementary Files' section to the left of your screen. Please note that these files are around 4 megabytes each, so it may be difficult to access them from a dial-up connection.

  8. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study...... includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes. Contents: 4.1 Introduction; 4.9 Codes from order domains; 4.10 One-point geometric Goppa codes; 4.11 Bibliographical Notes; References.

  9. Scaling gysela code beyond 32K-cores on bluegene/Q***

    Directory of Open Access Journals (Sweden)

    Bigot J.

    2013-12-01

    Full Text Available Gyrokinetic simulations lead to huge computational needs. Up to now, the semi-Lagrangian code Gysela performed large simulations using a few thousand cores (8k cores typically). Simulations with finer resolutions and with kinetic electrons are expected to increase those needs by a huge factor, providing a good example of an application requiring Exascale machines. This paper presents our work to improve Gysela in order to target an architecture that presents one possible way towards Exascale: the Blue Gene/Q. After analyzing the limitations of the code on this architecture, we have implemented three kinds of improvement: computational performance improvements, memory consumption improvements and disk I/O improvements. As a result, we show that the code now scales beyond 32k cores with much improved performance. This will make it possible to target the most powerful machines available and thus handle much larger physical cases.

  10. Structuring and coding in health care records: a qualitative analysis using diabetes as a case study.

    Science.gov (United States)

    Robertson, Ann R R; Fernando, Bernard; Morrison, Zoe; Kalra, Dipak; Sheikh, Aziz

    2015-03-27

    Globally, diabetes mellitus presents a substantial and increasing burden to individuals, health care systems and society. Structuring and coding of information in the electronic health record underpin attempts to improve sharing and searching for information. Digital records for those with long-term conditions are expected to bring benefits for direct and secondary uses, and potentially to support patient self-management. We sought to investigate if, how and why records for adults with diabetes were structured and coded, and to explore a range of UK stakeholders' perceptions of current practice in the National Health Service. We carried out a qualitative, theoretically informed case study of documenting health care information for diabetes in family practice and hospital settings in England, using semi-structured interviews, observations, systems demonstrations and documentary data. We conducted 22 interviews and four on-site observations. With respect to secondary uses - research, audit, public health and service planning - interviewees clearly articulated the benefits of highly structured and coded diabetes data, and it was believed that these benefits would expand through linkage to other datasets. Direct, more marginal, clinical benefits in terms of managing and monitoring diabetes and perhaps encouraging patient self-management were also reported. We observed marked differences in levels of record structuring and/or coding between family practices, where it was high, and the hospital. We found little evidence that structured and coded data were being exploited to improve information sharing between care settings. Using high levels of data structuring and coding in records for diabetes patients has the potential to be exploited more fully, and lessons might be learned from successful developments elsewhere in the UK. A first step would be for hospitals to attain levels of health information technology infrastructure and systems use commensurate with family practices.

  11. Restructuring of burnup sensitivity analysis code system by using an object-oriented design approach

    International Nuclear Information System (INIS)

    Kenji, Yokoyama; Makoto, Ishikawa; Masahiro, Tatsumi; Hideaki, Hyoudou

    2005-01-01

    A new burnup sensitivity analysis code system was developed with the help of object-oriented techniques and written in the Python language. It was confirmed that these techniques are powerful in supporting complex numerical calculation procedures such as reactor burnup sensitivity analysis. The new burnup sensitivity analysis code system PSAGEP was restructured from a complicated old code system and reborn as a user-friendly code system which can calculate the sensitivity coefficients of the nuclear characteristics considering multicycle burnup effects based on the generalized perturbation theory (GPT). A new encapsulation framework for conventional codes written in Fortran was developed. This framework supported restructuring of the software architecture of the old code system by hiding implementation details, and allowed users of the new code system to easily calculate the burnup sensitivity coefficients. The framework can be applied to other development projects since it is carefully designed to be independent of PSAGEP. Numerical results for the burnup sensitivity coefficients of a typical fast breeder reactor are given, with components based on GPT, and the multicycle burnup effects on the sensitivity coefficients are discussed. (authors)
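The encapsulation idea described in the record - a Python layer hiding a legacy Fortran code's input-deck/run/parse cycle behind an object-oriented interface - can be sketched as a facade. Everything below is hypothetical: the stub stands in for a Fortran executable, and the class and method names are made up for illustration.

```python
# Hypothetical sketch of an encapsulation framework for a legacy code.
class LegacySolverStub:
    """Stand-in for a conventional Fortran code (e.g. a flux solver)."""
    def run(self, deck: str) -> str:
        nu = float(deck.split("=")[1])          # parse a one-line "input deck"
        return f"keff = {0.9 + 0.05 * nu:.4f}"  # fake output listing

class SensitivityFacade:
    """Hides deck writing and output parsing; users see only numbers."""
    def __init__(self, solver):
        self._solver = solver
    def keff(self, enrichment: float) -> float:
        listing = self._solver.run(f"enrichment={enrichment}")
        return float(listing.split("=")[1])

facade = SensitivityFacade(LegacySolverStub())
print(facade.keff(2.0))  # -> 1.0
```

Because the facade depends only on the `run(deck) -> listing` contract, the same wrapper pattern can be reused across projects, which mirrors the independence claim made for the framework.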

  12. QUIL: a chemical equilibrium code

    International Nuclear Information System (INIS)

    Lunsford, J.L.

    1977-02-01

    A chemical equilibrium code QUIL is described, along with two support codes FENG and SURF. QUIL is designed to allow calculations on a wide range of chemical environments, which may include surface phases. QUIL was written specifically to calculate distributions associated with complex equilibria involving fission products in the primary coolant loop of the high-temperature gas-cooled reactor. QUIL depends upon an energy-data library called ELIB. This library is maintained by FENG and SURF. FENG enters into the library all reactions having standard free energies of reaction that are independent of concentration. SURF enters all surface reactions into ELIB. All three codes are interactive codes written to be used from a remote terminal, with paging control provided. Plotted output is also available

  13. FLICA-4 (version 1). A computer code for three dimensional thermal analysis of nuclear reactor cores

    International Nuclear Information System (INIS)

    Raymond, P.; Allaire, G.; Boudsocq, G.; Caruge, D.; Gramont, T. de; Toumi, I.

    1995-01-01

    FLICA-4 is a thermal-hydraulic computer code, developed at the French Atomic Energy Commission (CEA) for three-dimensional steady-state or transient two-phase flow, and aimed at design and safety thermal analysis of nuclear reactor cores. It is available for various UNIX workstations and CRAY computers under UNICOS. It is based on four balance equations: three balance equations for the mixture and a mass balance equation for the less concentrated phase, which allows for the calculation of non-equilibrium flows such as sub-cooled boiling and superheated steam. A drift velocity model takes into account the velocity imbalance between phases. The equations are solved using a finite volume numerical scheme. Typical running times, specific features (coupling with other codes) and auxiliary programs are presented. 1 tab., 9 refs.
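The drift velocity closure mentioned above is typically of the Zuber-Findlay drift-flux form, which relates void fraction to the phase superficial velocities. The sketch below uses textbook-style placeholder values for the distribution parameter C0 and the drift velocity; it illustrates the relation, not FLICA-4's actual correlation.

```python
# Illustrative drift-flux (Zuber-Findlay form) void-fraction relation.
def void_fraction(j_g, j_l, c0=1.13, v_gj=0.24):
    """Area-averaged void fraction from superficial velocities (m/s).
    c0: distribution parameter, v_gj: drift velocity (placeholder values)."""
    return j_g / (c0 * (j_g + j_l) + v_gj)

alpha = void_fraction(j_g=0.5, j_l=1.0)
print(round(alpha, 3))  # -> 0.258
```

The drift term v_gj and C0 > 1 are what make alpha smaller than the homogeneous-flow value j_g/(j_g + j_l), capturing the velocity imbalance between phases.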

  14. Input data required for specific performance assessment codes

    International Nuclear Information System (INIS)

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on input data requirements for computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users some guidelines on what input data are required to satisfy 22 common performance assessment codes. Each of the codes is summarized, and a matrix table is provided to allow comparison of the various inputs required by the codes. This report does not determine or recommend which codes are preferable.

  15. When sparse coding meets ranking: a joint framework for learning sparse codes and ranking scores

    KAUST Repository

    Wang, Jim Jing-Yan

    2017-06-28

    Sparse coding, which represents a data point as a sparse reconstruction code with regard to a dictionary, has been a popular data representation method. Meanwhile, in database retrieval problems, learning the ranking scores from data points plays an important role. Up to now, these two problems have always been considered separately, assuming that data coding and ranking are two independent and irrelevant problems. However, is there any internal relationship between sparse coding and ranking score learning? If yes, how to explore and make use of this internal relationship? In this paper, we try to answer these questions by developing the first joint sparse coding and ranking score learning algorithm. To explore the local distribution in the sparse code space, and also to bridge coding and ranking problems, we assume that in the neighborhood of each data point, the ranking scores can be approximated from the corresponding sparse codes by a local linear function. By considering the local approximation error of ranking scores, the reconstruction error and sparsity of sparse coding, and the query information provided by the user, we construct a unified objective function for learning of sparse codes, the dictionary and ranking scores. We further develop an iterative algorithm to solve this optimization problem.
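The two coupled ingredients of the approach - sparse-coding data against a dictionary and learning ranking scores from the codes - can be sketched in miniature. This is a toy decomposition under stated assumptions, not the paper's unified joint objective: sparse codes are computed with plain ISTA, and a linear score map is then fit by ridge regression instead of being learned jointly.

```python
import numpy as np

# Toy sketch: (1) sparse-code data with ISTA, (2) fit scores from codes.
rng = np.random.default_rng(1)
D = rng.standard_normal((10, 20))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
X = rng.standard_normal((10, 30))       # 30 data points
y = rng.random(30)                      # ranking scores (e.g. relevance)

def ista(D, x, lam=0.1, iters=200):
    """Iterative shrinkage-thresholding for min ||x - D s||^2 + lam ||s||_1."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    s = np.zeros(D.shape[1])
    for _ in range(iters):
        g = s + D.T @ (x - D @ s) / L   # gradient step on the quadratic term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return s

S = np.column_stack([ista(D, X[:, i]) for i in range(X.shape[1])])
w = np.linalg.solve(S @ S.T + 1e-3 * np.eye(20), S @ y)  # ridge: scores from codes
print(np.mean(S != 0))                  # fraction of nonzero code entries
```

In the paper the two stages interact through a local linear approximation of scores in each neighborhood; the toy above only shows the data representations involved.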

  16. The sign rule and beyond: boundary effects, flexibility, and noise correlations in neural population codes.

    Directory of Open Access Journals (Sweden)

    Yu Hu

    2014-02-01

    Full Text Available Over repeat presentations of the same stimulus, sensory neurons show variable responses. This "noise" is typically correlated between pairs of cells, and a question with rich history in neuroscience is how these noise correlations impact the population's ability to encode the stimulus. Here, we consider a very general setting for population coding, investigating how information varies as a function of noise correlations, with all other aspects of the problem - neural tuning curves, etc. - held fixed. This work yields unifying insights into the role of noise correlations. These are summarized in the form of theorems, and illustrated with numerical examples involving neurons with diverse tuning curves. Our main contributions are as follows. (1) We generalize previous results to prove a sign rule (SR): if noise correlations between pairs of neurons have opposite signs vs. their signal correlations, then coding performance will improve compared to the independent case. This holds for three different metrics of coding performance, and for arbitrary tuning curves and levels of heterogeneity. This generality is true for our other results as well. (2) As also pointed out in the literature, the SR does not provide a necessary condition for good coding. We show that a diverse set of correlation structures can improve coding. Many of these violate the SR, as do experimentally observed correlations. There is structure to this diversity: we prove that the optimal correlation structures must lie on boundaries of the possible set of noise correlations. (3) We provide a novel set of necessary and sufficient conditions, under which the coding performance (in the presence of noise) will be as good as it would be if there were no noise present at all.
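The sign rule is easy to verify numerically in the simplest case. In the sketch below (an illustration, not the paper's general proof), two neurons have tuning-curve slopes of opposite sign (negative signal correlation); adding a positive noise correlation - the opposite sign - increases the linear Fisher information J = f'ᵀ C⁻¹ f' relative to independent noise.

```python
import numpy as np

# Two-neuron sign-rule demo with linear Fisher information.
f_prime = np.array([1.0, -1.0])  # tuning-curve derivatives at the stimulus

def fisher_info(rho):
    """J = f'^T C^{-1} f' for unit-variance noise with correlation rho."""
    C = np.array([[1.0, rho], [rho, 1.0]])
    return f_prime @ np.linalg.solve(C, f_prime)

print(fisher_info(0.0), fisher_info(0.5))  # -> 2.0 4.0 (correlation helps)
```

Intuitively, positive noise correlation here pushes the noise into the direction the population cannot use, leaving the informative difference signal cleaner.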

  17. Governance codes: facts or fictions? A study of governance codes in Colombia

    Directory of Open Access Journals (Sweden)

    Julián Benavides Franco

    2010-10-01

    Full Text Available This article studies the effects on accounting performance and financing decisions of Colombian firms after issuing a corporate governance code. We assemble a database of Colombian issuers and test the hypotheses of improved performance and higher leverage after issuing a code. The results show that the firms' return on assets improves by more than 1% after the introduction of a code; the effect is amplified by the code's quality. Additionally, the firms' leverage increased by more than 5% when the code quality was factored into the analysis. These results suggest that the controlling parties' commitment to self-restraint - reducing their private benefits and/or the expropriation of non-controlling parties through the introduction of a code - is indeed an effective measure, and that the financial markets agree, increasing the supply of funds to the firms.

  18. Design validation of the ITER EC upper launcher according to codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Spaeh, Peter, E-mail: peter.spaeh@kit.edu [Karlsruhe Institute of Technology, Institute for Applied Materials, Association KIT-EURATOM, P.O. Box 3640, D-76021 Karlsruhe (Germany); Aiello, Gaetano [Karlsruhe Institute of Technology, Institute for Applied Materials, Association KIT-EURATOM, P.O. Box 3640, D-76021 Karlsruhe (Germany); Gagliardi, Mario [Karlsruhe Institute of Technology, Association KIT-EURATOM, P.O. Box 3640, D-76021 Karlsruhe (Germany); F4E, Fusion for Energy, Joint Undertaking, Barcelona (Spain); Grossetti, Giovanni; Meier, Andreas; Scherer, Theo; Schreck, Sabine; Strauss, Dirk; Vaccaro, Alessandro [Karlsruhe Institute of Technology, Institute for Applied Materials, Association KIT-EURATOM, P.O. Box 3640, D-76021 Karlsruhe (Germany); Weinhorst, Bastian [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Association KIT-EURATOM, P.O. Box 3640, D-76021 Karlsruhe (Germany)

    2015-10-15

    Highlights: • A set of applicable codes and standards has been chosen for the ITER EC upper launcher. • For a particular component load combinations, failure modes and stress categorizations have been determined. • The design validation was performed in accordance with the “design by analysis”-approach of the ASME boiler and pressure vessel code section III. - Abstract: The ITER electron cyclotron (EC) upper launcher has passed the CDR (conceptual design review) in 2005 and the PDR (preliminary design review) in 2009 and is in its final design phase now. The final design will be elaborated by the European consortium ECHUL-CA with contributions from several research institutes in Germany, Italy, the Netherlands and Switzerland. Within this consortium KIT is responsible for the design of the structural components (the upper port plug, UPP) and also the design integration of the launcher. As the selection of applicable codes and standards was under discussion for the past decade, the conceptual and the preliminary design of the launcher structure were not elaborated in straight accordance with a particular code but with a variety of well-acknowledged engineering practices. For the final design it is compulsory to validate the design with respect to a typical engineering code in order to be compliant with the ITER quality and nuclear requirements and to get acceptance from the French regulator. This paper presents typical design validation of the closure plate, which is the vacuum and Tritium barrier and thus a safety relevant component of the upper port plug (UPP), performed with the ASME boiler and pressure vessel code. Rationales for choosing this code are given as well as a comparison between different design methods, like the “design by rule” and the “design by analysis” approach. Also the selections of proper load specifications and the identification of potential failure modes are covered. In addition to that stress categorizations, analyses

  19. Design validation of the ITER EC upper launcher according to codes and standards

    International Nuclear Information System (INIS)

    Spaeh, Peter; Aiello, Gaetano; Gagliardi, Mario; Grossetti, Giovanni; Meier, Andreas; Scherer, Theo; Schreck, Sabine; Strauss, Dirk; Vaccaro, Alessandro; Weinhorst, Bastian

    2015-01-01

    Highlights: • A set of applicable codes and standards has been chosen for the ITER EC upper launcher. • For a particular component load combinations, failure modes and stress categorizations have been determined. • The design validation was performed in accordance with the “design by analysis”-approach of the ASME boiler and pressure vessel code section III. - Abstract: The ITER electron cyclotron (EC) upper launcher has passed the CDR (conceptual design review) in 2005 and the PDR (preliminary design review) in 2009 and is in its final design phase now. The final design will be elaborated by the European consortium ECHUL-CA with contributions from several research institutes in Germany, Italy, the Netherlands and Switzerland. Within this consortium KIT is responsible for the design of the structural components (the upper port plug, UPP) and also the design integration of the launcher. As the selection of applicable codes and standards was under discussion for the past decade, the conceptual and the preliminary design of the launcher structure were not elaborated in straight accordance with a particular code but with a variety of well-acknowledged engineering practices. For the final design it is compulsory to validate the design with respect to a typical engineering code in order to be compliant with the ITER quality and nuclear requirements and to get acceptance from the French regulator. This paper presents typical design validation of the closure plate, which is the vacuum and Tritium barrier and thus a safety relevant component of the upper port plug (UPP), performed with the ASME boiler and pressure vessel code. Rationales for choosing this code are given as well as a comparison between different design methods, like the “design by rule” and the “design by analysis” approach. Also the selections of proper load specifications and the identification of potential failure modes are covered. In addition to that stress categorizations, analyses

  20. The modeling of core melting and in-vessel corium relocation in the APRIL code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S.W.; Podowski, M.Z.; Lahey, R.T. [Rensselaer Polytechnic Institute, Troy, NY (United States)] [and others]

    1995-09-01

    This paper is concerned with the modeling of severe accident phenomena in boiling water reactors (BWR). New models of core melting and in-vessel corium debris relocation, developed for implementation in the APRIL computer code, are presented. The results of model testing and validation are given, including comparisons against available experimental data and parametric/sensitivity studies. The application of these models, as part of the APRIL code, to the simulation of accident progression in a typical BWR is also presented.

  1. Ethical codes. Fig leaf argument, ballast or cultural element for radiation protection?; Ethik-Codes. Feigenblatt, Ballast oder Kulturelement fuer den Strahlenschutz?

    Energy Technology Data Exchange (ETDEWEB)

    Gellermann, Rainer [Nuclear Control and Consulting GmbH, Braunschweig (Germany)

    2014-07-01

    The International Radiation Protection Association (IRPA) adopted a Code of Ethics in May 2004 to enable its members to maintain an adequate professional standard of ethical conduct. Based on this code of ethics, the professional body for radiation protection (Fachverband fuer Strahlenschutz) developed its own ethical code and adopted it in 2005.

  2. An Agent-Based Model for Zip-Code Level Diffusion of Electric Vehicles and Electricity Consumption in New York City

    Directory of Open Access Journals (Sweden)

    Azadeh Ahkamiraad

    2018-03-01

    Full Text Available Current power grids in many countries are not fully prepared for high electric vehicle (EV) penetration, and there is evidence that the construction of additional grid capacity is consistently outpaced by EV diffusion. If this situation continues, it will compromise grid reliability and cause problems such as system overload, voltage and frequency fluctuations, and power losses. This is especially true for densely populated areas where grid capacity is already strained by aging infrastructure. The objective of this research is to identify the zip-code level electricity consumption associated with large-scale EV adoption in New York City, one of the most densely populated areas in the United States. We fuse the Fisher-Pry diffusion model and the Rogers model within an agent-based simulation to forecast zip-code level EV diffusion and the energy capacity required to satisfy the charging demand. The research outcomes will assist policy makers and grid operators in making better planning decisions on the locations and timing of investments during the transition to smarter grids and greener transportation.
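    The Fisher-Pry model mentioned above describes technology substitution as a logistic S-curve. As a minimal sketch of how such a forecast is parameterized (the parameter values here are illustrative, not taken from the study):

    ```python
    import math

    def fisher_pry_fraction(t: float, t0: float, b: float) -> float:
        """Fisher-Pry substitution model: fraction of the market captured
        by the new technology at time t.

        t0 : takeover time (fraction = 0.5)
        b  : growth-rate parameter controlling the steepness of the S-curve
        """
        return 1.0 / (1.0 + math.exp(-b * (t - t0)))

    # Hypothetical scenario: EVs reach 50% share in 2035, growth rate 0.3/year.
    for year in (2025, 2035, 2045):
        print(year, round(fisher_pry_fraction(year, t0=2035, b=0.3), 3))
    ```

    In the agent-based setting of the paper, such a curve would drive per-zip-code adoption probabilities rather than a single aggregate share.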

  3. Calibration Methods for Reliability-Based Design Codes

    DEFF Research Database (Denmark)

    Gayton, N.; Mohamed, A.; Sørensen, John Dalsgaard

    2004-01-01

    The calibration methods are applied to define the optimal code format according to some target safety levels. The calibration procedure can be seen as a specific optimization process where the control variables are the partial factors of the code. Different methods are available in the literature...

  4. The reason for having a code of pharmaceutical ethics: Spanish Pharmacists Code of Ethics

    Directory of Open Access Journals (Sweden)

    Ana Mulet Alberola

    2017-05-01

    Full Text Available The pharmacist profession needs its own code of conduct, set out in writing, to serve as a stimulus to pharmacists in their day-to-day work in the different areas of pharmacy, in conjunction always with each pharmacist's personal commitment to their patients, to other healthcare professionals and to society. An overview is provided of the different codes of ethics for pharmacists on the national and international scale, the most up-to-date code, from 2015, being presented as a set of principles which must guide pharmaceutical conduct from the standpoint of deliberative judgment. The difference between codes of ethics and codes of practice is discussed. In the era of massive-scale collaboration, this code is a project holding bright prospects for the future. Each pharmacist's attitude in practicing the profession must be identified with the pursuit of excellence in their own personal practice, for the purpose of achieving ethical and professional values above and beyond complying with regulations and codes of practice.

  5. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC, calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
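    The computational simplicity the record attributes to LDGM codes comes from their sparse, systematic encoding: each parity bit is the XOR of a few message bits. The following is a generic sketch of that encoding step (the matrix construction and parameters are illustrative assumptions, not the scheme from the paper):

    ```python
    import random

    def make_ldgm_rows(k: int, m: int, row_weight: int, seed: int = 1):
        """Sparse generator for the parity part of a systematic LDGM code:
        each of the m parity bits is the XOR of `row_weight` randomly
        chosen message-bit positions (a toy construction for illustration)."""
        rng = random.Random(seed)
        return [rng.sample(range(k), row_weight) for _ in range(m)]

    def ldgm_encode(message, rows):
        """Systematic encoding: codeword = message bits + parity bits."""
        parity = []
        for row in rows:
            acc = 0
            for j in row:
                acc ^= message[j]  # XOR of the selected message bits
            parity.append(acc)
        return list(message) + parity

    rows = make_ldgm_rows(k=8, m=4, row_weight=3)
    codeword = ldgm_encode([1, 0, 1, 1, 0, 0, 1, 0], rows)
    print(codeword)
    ```

    Encoding cost is O(m × row_weight) XORs, which is why LDGM FEC adds little latency at the application layer; the paper's burst-resilience extension then reorders the columns of the corresponding parity-check matrix.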

  6. Computer codes to assess risks from nuclear power plants with LWR's

    International Nuclear Information System (INIS)

    Alonso, A.; Blanco, J.; Francia, L.; Gallego, E.; Morales, L.; Ortega, P.; Torres, C.

    1986-01-01

    The codes used to quantify risks from nuclear power plants are described. For QRA level 1 (quantitative risk assessment), qualitative and quantitative codes are described; codes to estimate uncertainties, importance measures and dependent failures are also included. For QRA level 2, the most important codes dealing with thermohydraulics, molten-core behaviour and aerosol behaviour are described. For QRA level 3 the list includes integrated as well as separate models. Only light water reactors are considered. The presentation is general, but the authors describe in more detail those codes they are most familiar with or have created through their own research effort. (author)

  7. The Genomic Code: Genome Evolution and Potential Applications

    KAUST Repository

    Bernardi, Giorgio

    2016-01-25

    The genome of metazoans is organized according to a genomic code which comprises three laws: 1) Compositional correlations hold between contiguous coding and non-coding sequences, as well as among the three codon positions of protein-coding genes; these correlations are the consequence of the fact that the genomes under consideration consist of fairly homogeneous, long (≥200 kb) sequences, the isochores; 2) Although isochores are defined on the basis of purely compositional properties, GC levels of isochores are correlated with all tested structural and functional properties of the genome; 3) GC levels of isochores are correlated with chromosome architecture from interphase to metaphase; in the case of interphase the correlation concerns isochores and the three-dimensional “topologically associated domains” (TADs); in the case of mitotic chromosomes, the correlation concerns isochores and chromosomal bands. Finally, the genomic code is the fourth and last pillar of molecular biology, the first three pillars being 1) the double-helix structure of DNA; 2) the regulation of gene expression in prokaryotes; and 3) the genetic code.
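    The GC level that the three laws above revolve around is simply the percentage of G and C bases in a sequence, computed over long windows to expose isochore structure. A minimal sketch (window size and sequence are illustrative, not from the paper):

    ```python
    def gc_level(seq: str) -> float:
        """GC level of a sequence as a percentage."""
        seq = seq.upper()
        return 100.0 * sum(base in "GC" for base in seq) / len(seq)

    def windowed_gc(seq: str, window: int):
        """GC level over consecutive non-overlapping windows, the kind of
        profile used to delineate compositionally homogeneous segments
        (isochores are defined over much longer windows, >= 200 kb)."""
        return [gc_level(seq[i:i + window])
                for i in range(0, len(seq) - window + 1, window)]

    print(windowed_gc("GGCCATAT", window=4))
    ```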

  8. Coding and Billing in Surgical Education: A Systems-Based Practice Education Program.

    Science.gov (United States)

    Ghaderi, Kimeya F; Schmidt, Scott T; Drolet, Brian C

    Despite increased emphasis on systems-based practice through the Accreditation Council for Graduate Medical Education core competencies, few studies have examined what surgical residents know about coding and billing. We sought to create and measure the effectiveness of a multifaceted approach to improving resident knowledge and performance in documenting and coding outpatient encounters. We identified knowledge gaps and barriers to documentation and coding in the outpatient setting. We implemented a series of educational and workflow interventions with a group of 12 residents in a surgical clinic at a tertiary-care center. To measure the effect of this program, we compared billing codes for 1 year before the intervention (FY2012) with prospectively collected data from the postintervention period (FY2013). All related documentation and coding were verified by study-blinded auditors. Interventions took place at the outpatient surgical clinic at Rhode Island Hospital, a tertiary-care center. A cohort of 12 plastic surgery residents ranging from postgraduate year 2 through postgraduate year 6 participated in the interventional sequence. A total of 1285 patient encounters in the preintervention group were compared with 1170 encounters in the postintervention group. Using evaluation and management (E&M) codes as a measure of documentation and coding, we demonstrated a significant and durable increase in billing with supporting clinical documentation after the intervention. For established patient visits, the monthly average E&M code level increased from 2.14 to 3.05 (p […]) coding and billing of outpatient clinic encounters. Using externally audited coding data, we demonstrate significantly increased rates of higher-complexity E&M coding in a stable patient population, based on improved documentation and billing awareness by the residents. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  9. LDPC coding for QKD at higher photon flux levels based on spatial entanglement of twin beams in PDC

    International Nuclear Information System (INIS)

    Daneshgaran, Fred; Mondin, Marina; Bari, Inam

    2014-01-01

    Twin beams generated by Parametric Down Conversion (PDC) exhibit quantum correlations that have been effectively used as a tool for many applications, including the calibration of single-photon detectors. By now, detection of multi-mode spatial correlations is a mature field and, in principle, depends only on the transmission and detection efficiency of the devices and the channel. In [2, 4, 5], the authors applied their expertise in the near-perfect selection of modes of pairwise-correlated entangled beams, and in optimizing noise reduction below the shot-noise level, to the absolute calibration of Charge Coupled Device (CCD) cameras. The same basic principle is currently being considered by the same authors for possible use in Quantum Key Distribution (QKD) [3, 1]. The main advantage of such an approach would be the ability to work with much higher photon fluxes than the single-photon regime theoretically required for discrete-variable QKD applications (in practice, very weak laser pulses with a mean photon count below one are used). The natural setup of quantizing the CCD detection area and then measuring the correlation statistic needed to detect the presence of the eavesdropper Eve leads to a QKD channel model that is a Discrete Memoryless Channel (DMC) with a number of inputs and outputs that can exceed two (i.e., the channel is a multi-level DMC). This paper investigates the use of Low Density Parity Check (LDPC) codes for information reconciliation on the effective parallel channels associated with the multi-level DMC. The performance of such codes is shown to be close to the theoretical limits.
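    The "theoretical limits" the record refers to are set by the mutual information of the multi-level DMC, which bounds the reconciliation efficiency of any code. As a generic illustration (the channel matrix below is a toy example, not the PDC channel from the paper), the mutual information of a DMC given an input distribution can be computed directly:

    ```python
    import math

    def mutual_information(px, channel):
        """I(X;Y) in bits for a discrete memoryless channel.

        px          : input distribution, px[x] = P(X = x)
        channel     : transition matrix, channel[x][y] = P(Y = y | X = x);
                      rows/columns may number more than two (multi-level DMC).
        """
        ny = len(channel[0])
        # Output marginal P(Y = y)
        py = [sum(px[x] * channel[x][y] for x in range(len(px))) for y in range(ny)]
        mi = 0.0
        for x in range(len(px)):
            for y in range(ny):
                p_xy = px[x] * channel[x][y]
                if p_xy > 0:
                    mi += p_xy * math.log2(channel[x][y] / py[y])
        return mi

    # Toy 3-level channel: each symbol leaks to its neighbours with prob 0.1.
    channel = [[0.8, 0.1, 0.1],
               [0.1, 0.8, 0.1],
               [0.1, 0.1, 0.8]]
    print(mutual_information([1/3, 1/3, 1/3], channel))
    ```

    An LDPC reconciliation scheme whose net rate approaches this value is, in that sense, "close to the theoretical limits."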

  10. Coding with partially hidden Markov models

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Rissanen, J.

    1995-01-01

    Partially hidden Markov models (PHMM) are introduced. They are a variation of hidden Markov models (HMM), combining the power of explicit conditioning on past observations with the power of using hidden states. (P)HMM may be combined with arithmetic coding for lossless data compression. A general 2-part coding scheme for a given model order but unknown parameters, based on PHMM, is presented. A forward-backward re-estimation of parameters with a redefined backward variable is given for these models and used for estimating the unknown parameters, and a proof of convergence of this re-estimation is given. The PHMM structure and the conditions of the convergence proof allow for application of the PHMM to image coding. Relations between the PHMM and hidden Markov models (HMM) are treated. Results of coding bi-level images with the PHMM coding scheme are given. The results indicate that the PHMM can adapt
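    One half of the PHMM idea, explicit conditioning on past observations, can be illustrated without the hidden-state machinery: estimate the probability of the next bit given the previous one, which is exactly the kind of conditional distribution an arithmetic coder consumes. This sketch is a simplified first-order stand-in, not the PHMM scheme itself:

    ```python
    from collections import defaultdict

    def conditional_probs(bits):
        """Estimate P(next bit = 1 | previous bit) from a bit sequence,
        with Laplace (add-one) smoothing so unseen contexts stay usable.
        This is plain context modeling; a PHMM additionally mixes in
        hidden states alongside such observable contexts."""
        counts = defaultdict(lambda: [1, 1])  # [count of 0s, count of 1s]
        for prev, nxt in zip(bits, bits[1:]):
            counts[prev][nxt] += 1
        return {ctx: n[1] / (n[0] + n[1]) for ctx, n in counts.items()}

    probs = conditional_probs([0, 0, 0, 1, 1, 1])
    print(probs)  # runs of equal bits -> P(1|1) high, P(1|0) low
    ```

    For bi-level images the context would typically be a small neighbourhood of previously coded pixels rather than a single bit.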

  11. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider